INL manages its aviation fleet in a decentralized manner. The Air Wing manages about two-thirds of INL’s 357 aircraft, while the NAS at four embassies manages the remainder. Figure 1 shows the distribution of INL-supported aircraft worldwide.

The Air Wing is responsible for helping host nations eradicate illicit drug crops and detect, monitor, and interdict drug production and trafficking operations. In Colombia, it also assists the Colombian Army with counterterrorism operations. To accomplish these missions, through its contract with DynCorp International, the Air Wing uses an active fleet of 179 aircraft, including helicopters and fixed-wing airplanes, to undertake aerial eradication in Colombia; support manual eradication of drug crops in Afghanistan, Bolivia, and Peru; and enhance border security between Afghanistan and Pakistan. Operations often take place in hostile environments, which can place aircraft and personnel under small arms fire. These programs are managed by the Air Wing headquarters office at Patrick Air Force Base in Florida. As the aircraft program’s contractor, DynCorp performs major maintenance and initial pilot training at Patrick Air Force Base and flies and maintains U.S. aircraft and trains foreign personnel at various locations in Afghanistan, Bolivia, Colombia, Pakistan, and Peru. Training for some of the spray aircraft is also conducted at Kirtland Air Force Base in New Mexico, a location that helps simulate the mountainous environment of Colombia.

In addition, the NAS offices within the U.S. embassies in Colombia, Ecuador, Mexico, and Peru manage a total of 98 aircraft to support a variety of host government counternarcotics efforts, with the involvement and oversight of the INL Office of Latin American Programs. For example, through contracts with ARINC and Lockheed Martin, the NAS in Colombia provides aircraft to assist the (1) Colombian Air Force in interdicting suspicious aircraft and (2) Colombian National Police in conducting aerial eradication and interdiction operations, humanitarian missions, and other activities. The NAS in Mexico provides both new and older model U.S. government-owned helicopters to Mexico’s Office of the Attorney General for use in counternarcotics operations, including aerial surveillance, border security, and training. INL also funds ARINC to assist the Government of Mexico in maintaining these aircraft.

INL supports a wide variety of rotary and fixed-wing aircraft. Some are excess defense aircraft that have been refurbished, while others were purchased for use in INL programs. Figure 2 depicts examples of the types of aircraft owned and supported by INL.

Most of the funds used to support INL’s aviation fleet come from two annual appropriations—the International Narcotics Control and Law Enforcement and Andean Counterdrug Initiative—and supplemental appropriations in some years. During fiscal years 2002 through 2006, INL records indicate that it allocated about $2.2 billion for its aviation activities. Table 1 shows a breakdown of the total amount allocated for aviation activities by fiscal year and appropriation. INL allocates its aircraft funds to a NAS for some country programs and to the Air Wing. INL aircraft funding is also embedded in various program budgets, such as the Air Bridge Denial Program in Colombia. These program funds are primarily used to pay for three aviation support contractors that repair and maintain the aircraft, train aircraft crews and mechanics, and, in some instances, fly the aircraft.
OMB provides the following guidance for State and other agencies to follow in managing capital acquisitions, including aviation programs:

Circular No. A-126, which is intended to minimize cost and improve the management and use of governmental aviation resources, prescribes policies for acquiring, managing, using, accounting for the costs of, and disposing of aircraft. According to the circular, agencies should not have more aircraft than they need to fulfill their missions, and they should periodically review the cost-effectiveness of their entire fleet of owned aircraft.

Circular No. A-76 establishes policy for the competition and contracting out of commercial activities, including the use of aircraft, and provides guidance for conducting cost comparisons to determine whether the private sector could provide aviation services at a lower cost.

Circular No. A-11, Part 7, establishes policy for the planning, budgeting, acquisition, and management of federal capital assets, including aircraft, and requires agencies to submit a capital asset plan and business case summary (an “Exhibit 300”) for all major capital investments, including aircraft acquisitions and overhauls. The exhibit should demonstrate that the agency analyzed three alternatives and calculated the life cycle costs for each. OMB provides procedural and analytic guidance, including its “Capital Programming Guide,” for implementing specific aspects of this policy.

OMB Circular No. A-126 also sets out responsibilities for GSA regarding aircraft management. In implementing this circular, GSA establishes governmentwide policy on various aspects of aircraft management, including procurement, operation, safety, and disposal, and publishes its regulatory policies in the Code of Federal Regulations. GSA, through the Interagency Committee for Aviation Policy, has also published a number of other guides and manuals to help agencies manage aircraft acquisition, use, and disposal. Its “Fleet Modernization Planning Guide,” in particular, aids programs in developing cost-effective fleet replacement plans.

The comprehensive aviation fleet management planning process detailed in OMB and GSA guidance can help federal aircraft programs ensure that they acquire, manage, and modernize their aircraft in a cost-effective manner. Sound fleet management decisions should be based on a comprehensive process that relies on three key principles: (1) assessing a program’s long-term fleet requirements, (2) acquiring the most cost-effective fleet of aircraft to meet those requirements, and (3) assessing fleet performance to determine whether needs are being effectively met. Figure 3 illustrates the fleet management planning process, showing that it is a continuous cycle of planning and analyses.

Although INL has made limited progress in adhering to OMB and GSA guidance since we first assessed the Air Wing’s aviation fleet management in 2004, the bureau plans to undertake a more systematic management approach beginning in 2007. The bureau has not (1) conducted a strategic assessment of all long-term fleet requirements, (2) justified new aircraft investments in a systematic way that considers the range of alternatives and life cycle costs, or (3) routinely reviewed the performance of the fleet to ensure that its composition is the most appropriate and cost-effective to achieve the bureau’s missions.
In August 2006, we shared with INL officials our observations that INL was not adhering to OMB guidance, particularly in justifying new aircraft investments and analyzing the composition of the aviation fleet. In September 2006, after completing its own review of aviation program operations, INL officials told us that in October 2006 they would initiate a number of steps to resolve the weaknesses we observed.

According to GSA’s guidance and the OMB “Capital Programming Guide,” a strategic assessment of long-term fleet requirements is the foundation of fleet management because it identifies future workload requirements that serve as the basis for aircraft needs. The assessment process includes specific analyses, such as an assessment of the number of flight hours needed to meet mission requirements over a multiyear period and the capability of existing aircraft to meet those requirements cost-effectively. The guidance recommends that, if shortfalls in the current mix of aircraft are identified, managers determine the optimal mix of aircraft to meet anticipated flight hour and mission requirements and develop a proposed fleet acquisition or replacement plan to achieve the desired mix. This plan could include an anticipated schedule of time frames for disposing of inadequate aircraft and procuring replacements.

In 2004, we reported that the Air Wing had not engaged in long-term planning to estimate future mission requirements and determine what mix of aircraft was best suited to meet them. Fleet planning was primarily short-term in nature and focused on identifying aircraft to meet current and next-year mission requirements.

Since 2004, INL has prepared a strategic plan and a Critical Flight Safety Program for Air Wing operations. The Air Wing’s strategic plan addresses the goals and long-term needs of its program in terms of operations, maintenance, logistics, safety, administrative/contract support, and information technology and communication. While the strategic plan does not analyze the flight hours needed to meet mission requirements, it does specify other operational requirements, including the total area of illicit crops to be sprayed and eradicated over a number of years. The strategic plan, completed in April 2004, also indicates the mix of aircraft assets and personnel necessary to meet these goals. The Air Wing’s Critical Flight Safety Program specifies how the Air Wing plans to achieve the goals in its strategic plan with the aircraft available—primarily through a combination of aircraft overhauls and aircraft acquisitions.

However, the Air Wing strategic plan and accompanying Critical Flight Safety Program did not address the aircraft needs of several INL aviation-related activities. For example, the strategic plan did not estimate the operational requirements or flight hours needed to continue supporting the Colombian Army’s operations, including protection of the Caño Limón-Coveñas pipeline. Further, the Critical Flight Safety Program did not address the long-term aircraft needs of other INL aviation-related programs, such as assistance to the Colombian National Police or the Colombian Air Force’s Air Bridge Denial Program, or assistance to Mexico’s Office of the Attorney General. These other programs represent over a third of INL’s active aviation fleet and, in some cases, their aircraft are closely related to Air Wing operations, such as the aircraft that NAS Colombia provided to the Colombian National Police to support aerial eradication.
According to INL officials, INL plans to conduct an aviation fleet study in fiscal year 2007. The study is expected to take 9 months to complete and to include a needs analysis of INL’s current aviation fleet. The resulting report is expected to specify aircraft requirements in terms of a number of variables, including payload, range, speed, endurance, availability, and maintainability, among other factors. This study is intended to form the basis of a long-term plan for all aviation-related programs in 2007.

Until recently, INL had not taken actions to prepare the analyses prescribed by OMB and GSA to help justify aviation fleet investment decisions, nor had INL established a set of policies and procedures for aviation acquisitions. According to GSA guidance, after identifying potential aircraft and developing a proposed fleet replacement plan, aviation managers should develop a series of analyses to identify and acquire the most cost-effective aircraft to meet mission needs. These analyses should include preparing a study, as described in OMB Circular No. A-76, to determine whether the aviation operations should be performed by the government or contracted to the private sector. Also, for all major investments, agencies should prepare a capital asset plan and business case summary, as described in OMB Circular No. A-11, Part 7, Exhibit 300, which should include the results of an analysis of three alternatives, in addition to the current arrangement, to help ensure that the most cost-effective investment is selected. For this comparison of alternatives, a life cycle cost analysis is needed to provide managers with important information concerning the total cost of operating and maintaining an aircraft over its useful life. Such documents should be prepared to support the acquisition of new aircraft, as well as the modernization or enhancement of aircraft already in operation. Once these analyses are completed, aviation managers should obtain senior management approval and then acquire the needed aircraft or commercial aviation services.

In 2004, we reported that INL used no set criteria for Air Wing aircraft acquisitions and could not provide any A-76 cost comparison studies or cost-benefit analyses supporting its aircraft acquisitions. According to INL officials, the exigent circumstances of its programs precluded preparation of cost-benefit and other detailed analyses. INL had acquired a large number of aircraft since 2000 to support Plan Colombia and other counternarcotics and counterterrorism efforts in Colombia, including 33 UH-1N, 25 UH-II, and 14 UH-60 Black Hawk helicopters. Some of the aircraft acquired were surplus aircraft that were made available on relatively short notice; other aircraft acquisitions were congressionally directed.

Since 2004, INL has continued to make multimillion dollar investments in its aviation fleet, both by acquiring new aircraft and by refurbishing older aircraft it had previously acquired, without conducting the analyses prescribed by OMB. According to officials we spoke to at OMB, the Air Wing, NAS Colombia, and the Office of Latin American Programs, the bureau has never prepared the OMB-required justifications, as laid out in Circulars Nos. A-76 and A-11, for any of its aircraft investments. For example, in fiscal year 2006, the Air Wing began implementing its Critical Flight Safety Program, expected to cost a total of $356 million over 6 years, to upgrade and overhaul the aviation fleet used for Air Wing operations.
This investment includes refurbishing several Vietnam-era OV-10 observation airplanes and UH-1N helicopters to extend their useful life and make them more commercially supportable, as well as procuring new UH-60 and UH-II helicopters. However, the documentation the Air Wing provided us did not include cost-benefit analyses of alternatives or a calculation of life cycle costs for each element of the program. Similarly, in 2004, INL began acquiring 12 new Schweizer SAC 333 helicopters to support Mexico’s Office of the Attorney General’s antinarcotics efforts, at a cost of about $15 million, without conducting the analyses called for in OMB Circulars Nos. A-76 and A-11 to justify the acquisitions. According to OMB and GSA officials we consulted, without the analyses called for in OMB guidance, State cannot be reasonably certain that the aircraft procurements and refurbishments reflected in its budgets are the most appropriate and economical alternatives.

In particular, we found little evidence that important cost and operational considerations were formally taken into account when INL decided to refurbish OV-10 spray aircraft. A NAS Colombia official indicated that this was not an appropriate investment for Colombia because, among other reasons, the Colombian government does not have the capacity to maintain these aircraft after U.S. support for the aerial eradication program ends. The NAS, therefore, decided to purchase new and commercially available AT-802 crop dusting aircraft to conduct aerial eradication in Colombia. Representatives of DynCorp International, the contractor responsible for maintaining both aircraft, argued that the AT-802 was more practical and cost-effective to maintain than the OV-10. Air Wing officials considered the OV-10 refurbishment to be more appropriate because it kept in service an aircraft with important safety characteristics, including a dual engine configuration and ejection seats. A formal analysis of alternatives, including a calculation of life cycle costs, could have weighed these considerations on a more objective basis.

INL has recently taken steps to better justify its aircraft investment decisions. As directed by an appropriations conference committee, INL prepared an analysis of alternatives for the procurement of new spray aircraft to support its aerial eradication program in Colombia. In August 2006, we shared with INL our observations about the lack of supporting analysis for its aircraft investments. In October 2006, INL tasked a private consulting firm with conducting analyses in accordance with OMB Circulars Nos. A-76 and A-11 to justify an aircraft acquisition intended to replace leased transport aircraft supporting counternarcotics activities in Afghanistan. In addition, as part of its 2007 fleet study, INL has tasked the same consulting firm to prepare a capital asset plan and business case, in accordance with OMB Circular No. A-11, that would identify and analyze alternatives for filling INL’s aircraft needs.

OMB Circular No. A-126 requires agencies to issue internal directives and policies for acquiring and managing aircraft. Responsibility for implementing these policies should be assigned to a senior management official who has the agencywide authority and resources to implement them. INL has not established bureauwide directives and policies relating to aviation acquisition that incorporate OMB guidance.
Program managers we spoke to at INL and NAS Colombia were unaware of key OMB acquisition guidance and were unsure about the roles and responsibilities of the various INL offices in preparing the justifications the OMB circulars call for. INL plans to issue an “Aviation Program Policy Guide” that will set forth policies, procedures, and responsibilities for managing the bureau’s aviation fleet and serve as a vehicle for planning, coordination, and dispute resolution. While INL has designated the director of the Air Wing as the senior aviation management official for its aviation fleet, this official’s authority, roles, and responsibilities will be defined in the policy guide, according to an INL official. INL expects that the policy guide will be completed in 2007 and will reflect OMB guidance on justifying aviation fleet investments.

According to OMB Circular No. A-126, agencies are required to periodically review the continuing need for all of their aircraft and the cost-effectiveness of their aircraft operations, and then should report any excess aircraft and release all aircraft that are not fully justified by this review. A copy of each agency review should be submitted to GSA and to OMB with the agency’s next budget submission. Federal regulations call for such studies every 5 years. Finally, managers should incorporate the results of their periodic Circular No. A-126 reviews into their long-term fleet planning process and make adjustments to their fleets as needed. We found that INL has neither assessed the composition of its aviation fleet nor fully tracked the cost and usage of its aircraft.

In 2004, we reported that INL had not followed OMB Circular No. A-126 in reviewing the composition of its entire fleet to ensure its cost-effectiveness. According to INL officials we spoke with, the bureau has still not conducted the type of fleet review that is called for under OMB Circular No. A-126. Without such a review, INL cannot demonstrate that the composition of its fleet and planned additions to it are appropriate and cost-effective. INL has included in the scope of its fleet study an assessment of the soundness of the fleet composition and possible alternative aircraft or approaches to consider. Also, as part of the fleet study, the bureau plans to identify cost-effective performance measures that can be used in an annual performance plan.

Detailed cost and usage data are critical for assessing the cost-effectiveness of aircraft, and Circular No. A-126 and related federal regulations require agencies to collect these data in a standardized format for their entire aviation fleet. One of the most common measures used to evaluate the cost-effectiveness of an aircraft program is cost per flying hour, calculated for certain types of aircraft costs. Such measures include, but are not limited to, maintenance costs per flying hour; fuel and other fluids costs per flying hour; and accident repair costs per flying hour (or per aircraft). Federal regulation 41 C.F.R. 102-33.425 requires federal agencies to accumulate and report to GSA aircraft usage data and the cost of operating each aircraft based on the standard aircraft program cost elements defined in OMB Circular No. A-126. In 2004, we reported that State’s fiscal year 2000 through 2002 aircraft program costs reported to GSA were significantly understated.
However, State’s information systems do not capture the data necessary for INL to fully adhere to OMB guidance and the related federal regulation on compiling and reporting data on the cost and usage of its aviation fleet and individual aircraft. In a September 2006 audit of State’s aircraft, the State Office of Inspector General determined that State did not have a comprehensive and effective cost management system to record, maintain, and report timely, reliable data on its aircraft. To provide GSA the required information on aircraft cost and usage, the Air Wing developed an information system, called the Air Wing Information System, that compiles cost and usage data such as flight hours per aircraft and other related information. However, although the system captured cost and usage data for the 179 aircraft managed by the Air Wing, it did not do so for the 98 aircraft managed by the Office of Latin American Programs and the NAS offices in Colombia, Ecuador, Peru, and Mexico.

Additionally, due to financial management system deficiencies and weaknesses in key internal controls, INL could not provide us sufficiently reliable data on the status of the funds allocated for its aviation fleet. We requested from INL the amounts obligated, expended, and available from fiscal year 2001 through 2005 appropriations used to acquire, operate, and maintain its aviation fleet. INL could not provide the necessary data because its financial management systems do not readily identify aviation-related costs, even though State has taken steps to improve data completeness and additional improvement efforts are under way. Further, the systems do not accumulate data on the cost of operating individual aircraft based on the standard cost elements prescribed by OMB, such as costs related to crew, maintenance, engine overhaul, and fuel. INL officials told us that the bureau would have to conduct a manual review of thousands of transaction documents to identify all aircraft costs.

We also noted weaknesses in key internal controls over the recording of financial transactions and management of funds. Specifically, INL had limited bureauwide written procedures addressing (1) how its staff should reconcile the financial records that overseas posts independently maintain with State’s Regional Financial Management System or (2) how to review outstanding obligations to identify excess funds. These controls are critical to ensuring accurate and complete data on the status of funds allocated for INL’s aviation fleet.

Although State is implementing two new financial management systems, neither is designed to generate the detailed data INL needs to analyze the cost-effectiveness of its aviation fleet. INL is spending about $1 million to implement a new bureauwide financial management system, called the Local Financial Management System, to standardize how each NAS records financial activity and to give INL headquarters more visibility over NAS financial activity. State’s Bureau of Resource Management is also implementing a new departmentwide financial management system called the Global Financial Management System. However, like the existing systems, neither the new INL system nor the departmentwide system incorporates the standard program cost elements outlined in OMB Circular No. A-126 and the related federal regulation. Officials in State’s Bureau of Resource Management informed us that they were not familiar with INL’s cost data requirements when designing the Global Financial Management System.
Without the ability to accumulate and summarize aircraft costs by standard program elements, INL will be limited in determining whether its aviation fleet is managed in a cost-effective manner. State has taken steps to improve its ability to compile and report aircraft cost data, such as establishing appropriate codes in its accounting system. Further, INL plans to assign responsibility for reporting cost and usage data for all INL aircraft to the Air Wing, regardless of which office manages the aircraft. According to an Air Wing official, the Air Wing plans to modify the Air Wing Information System to capture fleetwide cost and usage data. The Air Wing, according to the same official, expects the modification to help INL meet GSA reporting requirements and greatly improve its ability to capture selected aircraft cost elements. Furthermore, INL plans to use the OMB Circular No. A-126 standard aircraft program cost elements to prepare a template that standardizes the budget line items used for all aviation-related programs. Finally, State officials responsible for implementing the Global Financial Management System told us that they plan to address INL’s cost data requirements after the system is implemented in fiscal year 2007 but were not sure whether the new system can provide the cost data capabilities INL needs.

Federal regulations require federal agencies to develop and perform contract quality assurance procedures to verify that services and supplies provided conform to contract requirements and to maintain suitable records enumerating quality assurance actions. Since 2004, State regulations have specified a policy that all new service contracts be performance-based, with clearly defined deliverables and performance standards. The aviation support contracts with DynCorp International, Lockheed Martin, and ARINC comply with these regulations and use performance-based metrics to assess contractor performance; however, the Lockheed Martin and ARINC contracts make less intensive use of such metrics. Currently, the responsibility for overseeing the Lockheed Martin and ARINC contracts is divided among NAS officials in Colombia, the government task leaders in Washington, D.C., and the contractors located in Colombia. INL plans to centralize contractor oversight by assigning Air Wing staff the responsibility for managing all aviation support contractors.

In 2005, State and DynCorp International entered into a new performance-based contract whereby State and DynCorp assess contractor performance using an extensive set of indicators. The contract establishes standards for several functional areas across which performance is measured, including maintenance, logistics, operations, safety, and training. Within these areas, State and DynCorp track 84 specific performance metrics (such as hectares of illicit crops eradicated, percentage of the total aviation fleet available, and host nation training hours performed) to help assess DynCorp’s performance. To manage the oversight of these performance categories, State and DynCorp use an online tracking system and database; however, this system has limitations. The performance categories in this system, called SeeSOR, correspond to the 84 metrics specified in the contract. The SeeSOR system provides a quality assurance checklist for each activity and an inspection schedule, regularly prompting State and DynCorp managers to enter performance information.
In the case of poor ratings, the system automatically produces corrective action reports, and DynCorp managers track these corrective actions by time and inspector. Each corrective action issue requires a response within a specified time frame. However, INL and DynCorp officials in Colombia told us that SeeSOR is not a comprehensive quality assurance or contract management tool because it does not include certain activities that DynCorp performs. For example, force protection, which DynCorp provides in Colombia, is not incorporated into the system. Also, computer network difficulties make regular data entry from remote locations in the field problematic. Between November 2005 and September 2006, DynCorp conducted over 400 audits of information in the system and found 104 issues requiring corrective action. State and DynCorp are still fine-tuning the system to improve its ability to measure contractor performance in an environment such as Colombia.

Consequently, State and DynCorp use means other than the SeeSOR system, such as personal contact, to help oversee contractor performance. According to INL officials in Colombia, personal contact between INL and DynCorp is the most valuable monitoring tool. INL and DynCorp personnel talk and exchange e-mails throughout the day to identify issues that need attention. In addition, State program managers conduct daily “walk through” inspections of facilities in Bogotá and make unannounced site visits to forward operating locations. The DynCorp manager in Colombia also relies on daily oral communication with contractor staff outside of Bogotá to stay aware of issues in the field. Further, DynCorp provides a detailed briefing to INL every week, which addresses performance across all functional areas of the contract. All advisers and managers are expected to attend, and minutes are kept of these meetings. Also, State conducts monthly reviews of aircraft and eradication reports and formally evaluates DynCorp’s performance every 4 months.

State and Lockheed Martin began a 4-year contract in July 2006 that implements a performance-based method for assessing contractor performance. Under the contract, Lockheed Martin and its subcontractor work closely with the Colombian National Police to support its illicit drug eradication, interdiction, and humanitarian missions, with responsibility for aircraft maintenance, logistics, police training, and multiple construction projects at bases across Colombia. The government task leader must hold regular status meetings, and Lockheed Martin is required to submit monthly performance reports containing, among other things, accomplishments and issues that arose during the reporting period, projected future activities, and subcontractor performance relative to agreed-upon metrics. Lockheed Martin also must implement and maintain a quality assurance system to ensure that product and service integrity meet or exceed contract requirements. In addition, the contract sets performance standards in program management, quality control, safety, aircraft maintenance, logistics, support maintenance, training, and information technology. For example, one maintenance standard specifies that the contractor sustain a 75 percent aircraft operational readiness rate. In Colombia, we observed that State maintained regular contact with the contractor and the Colombian National Police to assess compliance with contract requirements.
State’s oversight measures included monitoring performance in functional areas such as maintenance, logistics, training, and safety. While the program manager stated that the principal performance metric was aircraft readiness rates, State also received and reviewed daily and monthly status reports, memos, and trip reports, and participated in a quarterly program management review covering the functional areas above. State also attended weekly maintenance meetings between Lockheed Martin and the Colombian National Police. In addition, the contractor performed random site visits and fuel inspections, reporting the findings to the NAS program manager. The NAS uses standards from the U.S. Army Aviation Management System to produce and issue standard operating procedures in most functional areas.

However, the planned nationalization of this program and its heavy involvement with and dependence on the Colombian National Police have presented challenges to implementing an effective performance-based contract. Personnel problems within the police force have adversely affected the contractor’s ability to meet its nationalization or aircraft readiness rate goals. For instance, the contractor reported absentee rates among police trainees in the logistics branch as high as 25 percent. Further, over half of the Colombian helicopter mechanics were not sufficiently skilled to perform more than routine maintenance tasks and, therefore, required more contractor supervision than planned. The contractor also has little influence over personnel decisions within the police force. The Colombian police frequently rotate their trainees to different positions, which hampers the development of specialized skills and the ability of the contractor to pass on responsibility and nationalize the program. For example, the contractor logistics office reported working with seven different Colombian officers in 4 years. In another case, a police trainee attended a specialized and lengthy course on engine oil analysis in the United States, only to be transferred out of maintenance shortly after returning to Colombia. Program managers have corrected some of these training issues and now track the training Colombian police personnel receive to help ensure that only committed and appropriately skilled trainees receive detailed instruction.

Under the ARINC contract, which began in June 2004 and could extend to 4 years, State has established a performance-based system to monitor contractor performance. Our review focused on State’s assessment of ARINC’s contract performance in support of the Air Bridge Denial Program. The objective of this program is to suppress the illicit aerial trafficking of narcotics in Colombian airspace by tracking and forcing down suspected traffickers. ARINC is responsible for, among other things, maintaining seven aircraft and training Colombians to maintain these aircraft. Reporting requirements include monthly performance reports containing, among other things, accomplishments and issues that arose during the reporting period, projected future activities, and performance relative to the agreed-upon metrics. In addition, the contract specifies performance standards in operations, logistics, training, and project management, among other areas. We found that State was in compliance with contractor oversight requirements and that, in many cases, ARINC exceeded the reporting requirements in the contract. State is in daily personal contact with the Air Bridge Denial Program manager.
Although the program manager told us that he did not have the quality assurance plan required by the contract, we found that ARINC’s reporting to State included many of the quality assurance plan requirements, such as training standards and reviews. The NAS program manager also received and reviewed flight activity reports on a daily, weekly, and monthly basis. In addition, the program manager participated in regular meetings to discuss the status of aircraft, training, and operations, and conducted a semiannual review, as well as an annual program certification. The program manager made monthly visits to Air Bridge Denial locations and shared trip reports with cognizant State officials.

INL plans to centralize contractor oversight by assigning Air Wing staff the responsibility for managing all aviation support contractors. Under this arrangement, INL plans to compile information on aircraft performance in one central location. This will enable INL managers to assess the performance of the entire fleet more consistently and to collect more readily the data that INL needs to assess the overall composition and cost-effectiveness of the aviation fleet.

INL’s aviation fleet has grown at a rapid pace to meet emerging, global counternarcotics and counterterrorism priorities. However, INL did not systematically employ federal management principles and guidelines in acquiring this fleet. As a result, key analyses were not done to help ensure that INL program managers made cost-effective decisions, particularly with regard to major investments in the fleet. In October 2006, INL officials began initiating significant changes to the oversight of aviation fleet operations, placing particular emphasis on conducting key analyses of its fleet to help guide future investment decisions and adhere to OMB and GSA guidance. If INL follows through, these analyses should result in a long-term plan for aircraft investments and an assessment of the current composition of the fleet to help ensure that it is the most cost-effective means to meet mission requirements. Current plans call for these efforts to be completed in 2007. Since INL has undertaken a number of initiatives to address the management weaknesses we observed, we are not making any recommendations in this report. However, we will follow up with INL to ensure that these initiatives are completed in 2007, as planned.

State provided comments on a draft of this report (see app. II). In its comments, State acknowledged that our work, among others, was an impetus for a comprehensive internal review of aviation management and expressed appreciation to us for confirming areas needing continued improvement. State highlighted the management reforms INL has undertaken to enhance the efficiency and effectiveness of aviation fleet management, as well as to improve INL’s adherence to OMB and GSA guidance. State also noted operational circumstances that make such adherence challenging. State disagreed with our observation that INL did not provide fleet investment justifications using cost-benefit and life cycle analyses of alternatives. State indicated that considerable analysis was done to evaluate economically sound alternatives for most previous aircraft investments. However, the documentation INL provided did not include the analyses called for by OMB guidance. Without documentation of such analyses, we were not able to assess whether State’s investment decisions were appropriately justified in accordance with this guidance.
In addition to these comments, State provided us technical comments, which we have incorporated throughout the report, as appropriate.

We are sending copies of this report to interested congressional committees and the Secretary of State. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4268 or FordJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To assess the extent to which the Bureau of International Narcotics and Law Enforcement Affairs (INL) has managed its aviation fleet in accordance with Office of Management and Budget (OMB) and General Services Administration (GSA) guidance, we reviewed the applicable guidance and discussed progress in adhering to this guidance with State and contractor officials at INL offices in Washington, D.C.; Melbourne, Florida; and Colombia, reviewing relevant documents, where appropriate. We chose to focus our review primarily on operations in Colombia because nearly two-thirds of INL’s active aviation fleet is in that country, and three contractors carry out programs there. We obtained information on the number and distribution of aircraft from INL officials and determined that this information was sufficiently reliable for the purposes of this report.

To assess how INL conducted strategic planning to identify long-term aircraft needs, we obtained and reviewed planning documents prepared by the bureau, including Bureau Performance Plans, the Air Wing Strategic Planning Summary, and the documentation of the Critical Flight Safety Program. We reviewed the content of the planning documents and determined the extent to which they reflected the information about INL fleet operations that we obtained from INL officials. We based our assessment of the bureau’s planning documents on the federal guidance described in a prior report on federal aircraft.

To assess how INL justified investments in its aviation fleet, we reviewed documentation, where available, and discussed major ongoing and planned aircraft investments with relevant bureau officials. In particular, to assess the justification for the Critical Flight Safety Program, we discussed this program with Air Wing officials and DynCorp representatives in Melbourne, Florida, and with NAS officials in Bogotá, Colombia. We discussed justification of aircraft acquisitions for counternarcotics programs in Mexico with officials from the INL Office of Latin American Programs. We based our assessment of justification documentation on federal guidance presented in OMB Circulars Nos. A-76, A-126, and A-11, Part 7, and OMB’s “Capital Programming Guide.” To obtain a better understanding of the applicability of this guidance, we spoke with officials from OMB and GSA’s Aircraft Management Policy Division, and a representative from Conklin and DeDecker Company, a GSA contractor that assists federal agencies with developing cost-benefit analyses for their aviation programs.
To determine how INL assessed the cost and performance of its aircraft and ensured that the composition of the fleet was cost-effective, we reviewed documentation and interviewed bureau officials in Washington, D.C.; Air Wing officials in Florida; and Narcotics Affairs Section (NAS) officials in Colombia.

To obtain an understanding of the systems and procedures INL used to track its aircraft funding and related obligations and expenditures, we gathered information from INL accounting and budget officials. We reviewed INL financial management handbooks as well as automated systems documentation. We identified and evaluated key internal controls INL uses to ensure the completeness and accuracy of recorded appropriated funds and the status of those funds. We assessed INL’s reconciliation procedures against requirements found in the Comptroller General’s Standards for Internal Control in the Federal Government.

To identify and report the amount of funds allocated to aviation activities, we obtained and reviewed Congressional Notifications and Congressional Budget Justifications for the Andean Counterdrug Initiative and International Narcotics Control and Law Enforcement appropriations. We identified aviation-related activities and compiled funding data for these activities by appropriation and fiscal year. Based on our efforts to determine the reliability of the aviation activity allocations, we concluded that these data were sufficiently reliable for the purposes of this report.

We also planned to review a statistical sample of INL aircraft financial transactions for fiscal years 2001 to 2005 to assess the reliability of recorded aircraft financial data. For each fiscal year, we requested the total appropriated funds used for aircraft acquisition, operation, and maintenance, along with the status of those funds—amounts obligated, expended, and available. For obligated and expended funds, we requested separate, detailed transaction-level data files that supported the obligation and expenditure levels reflected in the status of the fiscal year funds. State provided data files from NAS offices, its Office of Aviation, and its Central Financial Management System. We performed a data reliability analysis of the files provided to determine whether we could use them to select our planned statistical sample. We noted that the data files were not complete; for example, the NAS offices’ and the Office of Aviation’s data files did not include detailed listings of expenditure transactions. We also noted inconsistencies in the data files State provided us. For example, the Central Financial Management System data files contained expenditure records for the Office of Aviation but no related obligation records. Because the aircraft financial data files were incomplete and contained the inconsistencies we identified, we determined that they were not sufficiently reliable for our planned statistical sampling.

To assess how INL monitors its contract costs and performance, we gathered and analyzed contract documents and interviewed agency and contract officials to determine each contract’s scope, activities covered, and oversight requirements. In addition, we interviewed contract office representatives from the three main contractors identified in our review (Lockheed Martin Systems Management, LLC; DynCorp International LLC; and ARINC Engineering, LLC) and analyzed contract documents and reports to determine performance issues.
The Lockheed Martin and ARINC contracts are not directly between State and the companies; rather, they are task orders under indefinite-delivery, indefinite-quantity contracts between Lockheed Martin and ARINC, respectively, and the U.S. Army Communications and Electronics Command in Fort Monmouth, New Jersey.

In Colombia, we discussed aircraft operations and maintenance issues with NAS, Air Wing, and contract staff at various operational sites in the country. We met with officials with primary responsibility for the Colombian National Police’s aerial eradication program and the Colombian Army’s aviation program at the Office of Aviation headquarters at El Dorado Airport in Bogotá. We also met with managers, pilots, and mechanics and observed flight operations and maintenance at two aerial eradication operating locations—Barrancabermeja and San Jose—and at the Colombian Army Aviation Brigade headquarters at Tolemaida. In addition, we met with NAS and contractor staff overseeing Colombia’s Air Bridge Denial Program at Apiay and the Colombian National Police’s aviation program at the Colombian police main operating base at Guaymaral, near Bogotá.

The following are GAO’s comments on the Department of State letter dated January 22, 2007.

1. Throughout the draft report and this final version, we acknowledge INL’s efforts to address the shortcomings that GAO and others have identified in its management of its aviation fleet. However, we note that we began our review efforts with a formal notification to the Secretary of State in January 2006 and met with INL and other State officials to discuss our objectives in February 2006. At the time, INL did not inform us of any ongoing or planned efforts to evaluate its Office of Aviation Programs or INL’s overall aviation fleet. In August 2006, we briefed INL on our preliminary findings that it had not complied with OMB and GSA guidance in managing its aviation fleet. The September briefing that INL presented to us addressed the issues we had raised and laid out the reforms it would begin in October. As a result, we concluded that recommendations for further actions were not necessary, but that we would follow up at a later date to ensure that INL’s initiatives are completed, as planned.

2. We do not report that INL did not conduct any analyses. Rather, we noted in the draft report and this final version that the documentation INL provided us did not reflect the key analyses called for by OMB guidance.

3. In the draft report and this final version, we report that INL officials told us that the exigent circumstances of INL’s operations precluded them from doing the detailed OMB analyses. We also noted that, in some cases, Congress directed what aircraft to procure. Nevertheless, once the aircraft are in the inventory, OMB guidance requires agencies to periodically review the need for and the cost-effectiveness of the aircraft. INL has not done this, but we noted in the draft report and this final version that it has efforts under way to meet this requirement.

4. In the draft report, we noted that INL’s aviation operations in Colombia often take place in hostile environments, which can place aircraft and personnel under small arms fire. We have modified the final report to note that aviation operations in other foreign locations often take place in hostile environments, too.

5. In the draft report and this final version, we point out that INL’s Local Financial Management System does not provide the standard program cost elements needed to meet OMB requirements.
We also note that State officials responsible for designing the Global Financial Management System were not aware of INL’s cost data requirements and are not sure the system can provide the data needed. Regarding the Air Wing Information System referred to in State’s comments, we reported in 2004 that the data in this system were significantly understated. We agree that, if the system’s shortcomings are corrected, it is an appropriate tool for addressing GSA’s reporting requirements.

6. We agree that consolidating INL’s aviation fleet under a senior aviation management official is one way to address some of the shortcomings GAO, State’s Office of the Inspector General, and INL’s internal studies have identified. However, INL has not defined the senior aviation management official’s authority, roles, and responsibilities. This definition is under development and will be part of INL’s aviation program policy guide, which INL expects to complete later this year.

In addition to the individual named above, key contributors to this report were A.H. Huntington, III, Assistant Director; Felicia Brooks; Joseph Carney; Kay Daly; Mattias Fenton; James Michels; Sylvia Schatz; Ann Ulrich; and Leonard Zapata.
The Department of State's (State) Bureau of International Narcotics and Law Enforcement Affairs (INL) owns 357 helicopters and fixed-wing aircraft (valued at over $340 million), primarily to help carry out its counternarcotics efforts, such as aerial eradication of drug crops in Colombia. INL relies on contractor support to help maintain and operate its aircraft. In 2004, GAO analysis showed that INL lagged behind other agencies in implementing Office of Management and Budget (OMB) and General Services Administration (GSA) aviation fleet management principles. GAO was mandated to review INL's management and oversight of this fleet. GAO specifically examined (1) the extent to which INL has complied with OMB and GSA aviation fleet management guidance and (2) how INL has overseen its aviation support contracts. Since INL has undertaken initiatives to address the weaknesses GAO observed, GAO makes no recommendations. GAO will follow up to ensure that these initiatives are completed, as planned. In comments on this report, State highlighted reforms under way. State also indicated that INL conducted analyses to justify most aviation investments. GAO notes, however, that the documentation provided did not reflect the key analyses called for by OMB guidance.

Despite some improvements since 2004, INL has not yet employed a systematic process for managing its aviation fleet that adheres to OMB and GSA guidance intended to help federal programs ensure that they acquire, manage, and modernize their aircraft in a cost-effective manner. However, in October 2006, INL began a number of initiatives to improve compliance with this guidance. The guidance entails three key principles: (1) assessing a program's long-term fleet requirements, (2) acquiring the most cost-effective fleet of aircraft to meet those requirements, and (3) assessing fleet performance. INL's initiatives are intended to address weaknesses in the following three areas:

(1) Long-term planning. Since 2004, INL has prepared a strategic plan and a Critical Flight Safety Program to refurbish certain aircraft and replace others to meet projected mission needs. However, this effort did not address the long-term aircraft needs of all INL aviation programs.

(2) Fleet investment justifications. INL has funded multimillion dollar aircraft investments, including the acquisition of new aircraft and major overhauls of older assets, without documenting cost-benefit and life cycle cost analyses of alternatives.

(3) Fleet composition assessment. INL has not reviewed the composition of its entire fleet to demonstrate that its aircraft are the most appropriate and cost-effective to meet mission requirements. INL is hampered in assessing the performance of its fleet because it does not have complete and reliable aircraft cost and usage data.

INL has undertaken a study to assess the aviation fleet's overall composition, identify investment needs, and assess alternative approaches for meeting those needs. INL expects completion of this and other initiatives in 2007. Regarding contract oversight, INL has met applicable federal, agency, and contract-specific requirements for managing its aviation support contracts. In addition to direct contractor oversight, State has used quantitative measures, primarily aircraft readiness rates, to monitor and assess contractor performance.
We identified three national databases that, as part of broader data collection efforts, collect information on the occurrence of concussion in high school sports, but they do not provide an overall national estimate of occurrence. These databases are the NCCSI database, the CPSC’s National Electronic Injury Surveillance System (NEISS), and the Center for Injury Research and Policy’s High School RIO. (See table 1 for descriptions of the databases.) According to experts and federal officials, while none of the databases can provide a national estimate of the occurrence of concussion in high school sports, two of them provide national estimates of the occurrence of concussion for the populations they study. High School RIO provides national estimates of the occurrence of concussion in 20 sports for high schools with certified athletic trainers, based on its sample of 100 high schools with certified athletic trainers. Because it collects data on participation in the sports it studies, High School RIO also calculates injury rates by sport and by sex. NEISS provides national estimates of the occurrence of concussion treated in hospital emergency departments, based on its random national sample of approximately 100 hospitals with 24-hour emergency services. The third database, NCCSI, provides information on cases of concussion with serious complications, but it cannot provide national estimates of the occurrence of all concussions.

According to experts and federal officials, High School RIO and NEISS have certain strengths. The information collected by High School RIO is timely, as athletic trainers in the sample schools report data on a weekly basis. According to CPSC officials, the information collected by NEISS is also timely, in that hospitals in the sample report information on a daily basis and NEISS receives approximately half of the data within 4 days of the patient’s being seen in the emergency department. In addition, both High School RIO and NEISS collect information in ways—such as through certified athletic trainers and through review of medical charts, respectively—that experts report produce more reliable information than other methods.

Experts and federal officials have noted that, notwithstanding these strengths, the national estimates provided by High School RIO and NEISS may understate the overall national occurrence of concussion in high school sports. For example, High School RIO gathers information only on concussions that are reported to or observed by a certified athletic trainer, but, according to officials from an athletic trainers’ association, athletes may be reluctant to report symptoms of possible concussions to athletic trainers to avoid being removed from play. In addition, athletic trainers cannot be present at all practices and games, and the coaches and parents who are present may not recognize the signs or symptoms of a concussion, resulting in an underestimate of the actual number of concussions in the schools studied. Further, some athletes may consult their family physician about signs and symptoms of a possible concussion without reporting it to the athletic trainer; these concussions would not be included in the database. In addition, because High School RIO collects information on only 20 sports, its data cannot be used to estimate the occurrence of concussion in all sports.
Similarly, NEISS gathers information only on concussions in patients who are treated in emergency departments, but not all athletes with a concussion go to an emergency department for treatment. Furthermore, the medical charts that are reviewed by hospital staff for NEISS may not always indicate the detailed circumstances of the concussion, and therefore the staff may miss some concussions that were sustained during athletic participation.

Experts and federal officials identified additional features of the databases that may lead to further uncertainty and thus preclude the use of the data to provide comprehensive national estimates of concussion in high school sports. For example, High School RIO does not collect data from schools that do not have certified athletic trainers, and researchers do not know how the occurrence and reporting of concussion in schools with athletic trainers differ from those in schools without athletic trainers or what effect any difference would have on estimates of occurrence. In addition, according to CPSC officials, NEISS cannot always indicate whether a concussion was sustained during participation in a sport or simply involved sports equipment. For example, NEISS would count a concussion sustained by a person who was hit on the head with a baseball bat as a sports-related concussion, regardless of whether the injury was incurred during a baseball game or practice.

CDC’s Heads Up: Concussion in High School Sports is the primary federal program directed specifically at preventing concussion in high school sports. The program, which is one of CDC’s educational initiatives, is intended to provide educational materials for coaches, athletic trainers, athletic directors, parents, and athletes to prevent concussion. The Heads Up: Concussion in High School Sports tool kit includes a concussion guide for coaches with information on signs and symptoms and strategies for preventing concussions; a coach’s quick-reference wallet card; a coach’s clipboard sticker with concussion facts and space for emergency medical contacts; two fact sheets—one for parents and one for athletes—in English and Spanish; an educational DVD; posters for school gymnasiums; and a disc that contains additional resources. According to CDC officials, the Heads Up: Concussion in High School Sports materials were developed by a panel of experts from CDC and outside the federal government.

CDC rolled out the Heads Up: Concussion in High School Sports program in September 2005 to coincide with the beginning of the school year. As part of the agency’s promotional activities for its national roll-out, CDC developed press kits and other promotional materials, and to promote the program, it partnered with 14 public and private organizations, including Education, physician associations, and other organizations that conduct work in high school athletics or sports medicine. CDC also conducted a targeted media campaign consisting of e-mails and telephone calls to local, regional, and national media outlets; regional and national newspapers; and general and specialty magazines. In addition, the Surgeon General served as a key spokesperson and participated in radio interviews with program officials. CDC estimates that it distributed 20,000 tool kits within the first 3 months of the program and reached 6 million listeners and readers through the targeted media campaign.
Agency officials estimate that CDC distributed more than 300,000 Heads Up: Concussion in High School Sports materials overall by the end of December 2009. CDC has continued to update and expand its Heads Up: Concussion in High School Sports materials. CDC plans to release updated materials in spring 2010 to coincide with the release of free online training for high school coaches developed by CDC and the National Federation of State High School Associations (NFHS), which will include downloadable Heads Up: Concussion materials and an educational video. CDC has also continued to expand its Heads Up programs to target broader audiences. In addition, CDC officials told us that the agency created sports-specific materials in conjunction with the national governing bodies for youth and high school football, lacrosse, and ice hockey, based on the Heads Up: Concussion in High School Sports and other materials. The sports-specific materials include prevention and safety information related to each sport and its equipment. The agency plans to continue developing specific materials for additional sports. Other federal agencies administer programs related to concussion, but most of these programs are not directed specifically at the prevention of concussion in high school sports. CPSC carries out initiatives that include developing educational materials such as brochures and fact sheets. These initiatives are not targeted exclusively at high school sports but are directed more broadly at sports and recreation safety for youth and adults. For example, CPSC developed a brochure on which helmets to wear for a variety of activities, such as football, baseball, and bicycling, to prevent head injuries, including concussion. HRSA and NIH administer grant programs related to concussion and brain injury from all causes and for all age groups. HRSA grants focus on high-risk groups, including youth ages 15-19, and NIH grants have supported some research on concussion in high school sports. However, neither agency administers programs specifically for the prevention of concussion in high school sports. According to department officials, Education does not administer any programs related to the prevention of concussion. The three key state laws regarding the management of concussion that were identified by federal officials and experts all include requirements related to concussion education and athletes' return to play. (See table 2.) The education components of the key state laws—those of Oregon, Texas, and Washington—vary in terms of targeted group and frequency of training. The return-to-play requirements of the key state laws vary with respect to the conditions under which the requirements apply and the personnel who may authorize return to play. All three state laws include requirements for education on concussion, but they vary in the groups targeted and the content and frequency of the education. The educational requirements of the Oregon law are targeted at coaches. In addition to coaches, the Texas law specifies that additional persons—such as athletic trainers, sponsors of extracurricular athletic activities, physicians who assist with activities, and athletes—also must complete an education program. The Washington law is the only one that requires that parents, in addition to coaches and athletes, receive education. The Oregon law is unique in that it requires that coaches receive education on concussion symptoms annually.
The Texas and Washington laws are silent on how often coaches should complete such an education program. The Washington law is the only state law we examined that requires school districts to work with a state athletic organization to develop guidelines, forms, and educational materials. School districts in Washington worked with the Washington Interscholastic Activities Association (WIAA) to develop a document, which athletes and parents must sign annually, that contains information on the risks of concussion and on how to recognize the signs and symptoms of concussion. By signing the document, parents and athletes are acknowledging their understanding that the athlete will be removed from play or practice by the coach if he or she is suspected of having a concussion. WIAA also developed fact sheets and an educational video for coaches that describe the signs and symptoms of concussion and propose a management strategy for coaches to follow. Much of the information distributed by WIAA is modeled after CDC’s Heads Up: Concussion materials. The Texas law requires the Commissioner of Education to develop and adopt a safety training program, and the Texas Commissioner of Education adopted the extracurricular athletic activity safety training program provided by the University Interscholastic League (UIL). The UIL training manual includes a section on recognizing the signs of concussion and one on reducing head and neck injuries. The latter section states that an athlete with signs of head or neck trauma should receive immediate medical attention and not be allowed to return to play or practice without permission from proper medical authorities. UIL has also developed a parent information manual that includes a section on concussion signs and management. In addition, UIL has contracted with the Brain Injury Association of America to provide to schools and coaches 25,000 palm cards for the management of sports-related concussion, which outline the protocol that every school must follow when dealing with possible head injuries that occur in practice or play of all UIL activities. The Oregon law requires that the State Board of Education establish rules regarding the required concussion education for coaches. An official from the Oregon Department of Education told us that these rules have not yet been established, as the law first applies to the 2010-2011 school year. The return-to-play requirements of the key state laws vary with respect to the conditions under which the requirements apply. The return-to-play requirements of the Texas law apply only to athletes with injuries that result in a loss of consciousness and therefore exclude many concussions. In contrast, the return-to-play requirements of the Oregon and Washington laws apply to athletes with symptoms of or suspicion of concussion. While each state law requires that an athlete removed from play receive written permission from a health care professional before returning to play, the laws vary in the types of health professionals who can provide such permission. The Texas law requires clearance from a physician, and the Oregon law requires clearance from a health care professional. The Washington law requires that an athlete suspected of having a concussion be evaluated and cleared to return to play by a health professional specifically trained in the evaluation and management of concussion. 
WIAA's Web site indicates that such professionals include medical doctors, doctors of osteopathy, advanced registered nurse practitioners, physician assistants, and licensed certified athletic trainers. According to the WIAA Web site, the organization is considering whether other licensed health care providers have sufficient training to qualify them to authorize return to play. The Oregon law is the only one of the three we reviewed that specifically prohibits an athlete removed from play or practice from returning to play or practice on the same day. Federal officials and experts we spoke with identified five sets of voluntary nationwide guidelines that address the management of concussion in sports. (See table 3.) One set specifically targets high school sports, while the other four contain broad recommendations for the management of concussion in athletes of all ages. All five sets of guidelines contain similar recommendations for assessing concussion and managing the athlete, including making return-to-play decisions. For example, all sets of guidelines recommend that an athlete suspected of sustaining a concussion should be monitored closely on the sidelines following the injury and his or her cognitive function assessed at regular intervals for signs and symptoms of deterioration—such as fluctuating levels of consciousness, balance problems, headaches, or nausea. All sets of guidelines also recommend returning an athlete to play on a gradual basis, tailored to the individual athlete's recovery and based on the athlete's signs and symptoms and the results of various concussion assessment tools, such as tests of memory, cognition, balance, and physical exertion. The set of guidelines that specifically targets high school sports, which was developed by NFHS, recommends a gradual increase in mental activity appropriate to high school students, such as attending an abbreviated school day and engaging in short periods of reading. If the athlete remains symptom-free, this is to be followed by a gradual increase in low-impact physical activity once the athlete has returned to a full school day. In addition, this set of guidelines recommends that high school athletes playing high-risk or collision sports or having a history of previous concussions should undergo tests of cognition, memory, and balance prior to the start of the season to serve as a baseline in case an injury occurs. Officials from three of the organizations that developed guidelines told us that their members received information about the guidelines in a variety of ways. For example, NFHS officials told us that the association sent its set of guidelines to its member high schools upon publication and planned to include information on the management of concussion in its sports rule books, which it publishes every year for 17 sports, beginning with the 2010-2011 school year. Officials from the American College of Sports Medicine and the National Athletic Trainers' Association told us that concussion management is a frequent topic of discussion at their meetings and that their guidelines were also published in each organization's respective journal. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or other members of the committee may have. For further information about this statement, please contact Linda T. Kohn at (202) 512-7114 or kohnl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Key contributors to this statement were Helene F. Toiv, Assistant Director; Kate Blackwell; George Bogart; Shana R. Deitch; Carolyn Feis Korman; and Roseanne Price. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Participation in school sports can benefit children but also carries a risk of injury, including concussion. Concussion is a brain injury that can affect memory, speech, and muscle coordination and can cause permanent disability or death. Concussion can be especially serious for children, who are more likely than adults both to sustain a concussion and to take longer to recover. These factors may affect return-to-play decisions, which determine when it is safe for an athlete to participate in sports again. GAO was asked to testify on concussion incurred in high school sports. This statement focuses on (1) what is known about the nationwide occurrence of concussion, (2) federal concussion prevention programs, (3) the components of key state laws related to the management of concussion, and (4) the recommendations of voluntary nationwide concussion management guidelines. To do this work, GAO conducted literature searches; reviewed injury databases, state laws, and documents from federal agencies and organizations that conduct work in high school athletics or sports medicine; and interviewed federal officials and experts who identified key state laws and nationwide guidelines and provided other information. GAO shared the information in this statement with the relevant federal agencies. GAO identified three national databases that, as part of broader data collection efforts, collect information on the occurrence of concussion in high school sports, but they do not provide an overall national estimate of occurrence. Although the High School Reporting Information Online database provides national estimates of occurrence of concussion, it covers only 20 sports for high schools with certified athletic trainers. It may underestimate occurrence because some athletes may be reluctant to report symptoms of a possible concussion to avoid being removed from a game. The Consumer Product Safety Commission's (CPSC) National Electronic Injury Surveillance System provides national estimates only on concussions treated in an emergency room. The National Center for Catastrophic Sports Injury Research database provides information only on cases of concussion with serious complications and cannot provide national estimates of the occurrence of all concussions. The Centers for Disease Control and Prevention's program, Heads Up: Concussion in High School Sports, which began in September 2005, is the primary federal prevention program directed toward concussion. In addition, CPSC carries out prevention initiatives that include distributing educational materials, but these initiatives are directed more broadly at sports and recreation safety, such as appropriate helmets for football, baseball, and bicycling. The three key laws regarding the management of concussion in high school sports that were identified by federal officials and experts--those of Oregon, Texas, and Washington--all address concussion education and return to play, but their specific requirements vary. The education requirements vary with respect to who is to receive the education. For example, the Washington law targets coaches, athletes, and parents, while the Oregon law targets coaches only. There is also variation with respect to the content and frequency of education. The return-to-play requirements vary in the conditions under which athletes may return to play and in who may authorize it. 
For example, the Texas requirements apply specifically to athletes who lose consciousness, which excludes many concussions, and the Washington law requires return-to-play authorizations to be made by health professionals specifically trained in the evaluation and management of concussion. GAO found five sets of voluntary nationwide guidelines that address the management of concussion in high school sports; they were developed by organizations that conduct work in high school athletics or sports medicine. All recommend monitoring an athlete with a concussion on the sidelines and assessing cognitive function regularly for signs of deterioration. All guidelines also recommend returning an athlete to play on a gradual basis, tailored to an individual's recovery and based on symptoms and the results of memory, cognition, and balance tests.
VA's disability compensation program pays monthly cash benefits to eligible veterans who have service-connected disabilities resulting from injuries or diseases incurred or aggravated while on active military duty. The benefit amount is based on the veteran's degree of disability, regardless of employment status or level of earnings. A veteran starts the claims process by submitting a disability compensation claim to one of the 57 regional offices administered by the Veterans Benefits Administration (VBA) (see fig. 1). In the average disability compensation claim, the veteran claims about five disabilities. For each claimed disability, the regional office adjudicator must develop evidence and determine whether each disability is connected to the veteran's military service. The adjudicator then applies the medical criteria in VA's Rating Schedule to evaluate the degree of disability caused by each service-connected disability, and then determines the veteran's overall degree of service-connected disability. If a veteran disagrees with the adjudicator's decision on any of the claimed disabilities, the veteran may file a Notice of Disagreement. If the regional office is unable to resolve the disagreement to the veteran's satisfaction, the veteran may appeal to the Board of Veterans' Appeals (the Board). A veteran can dispute a decision not only when the regional office denies benefits by deciding that a claimed impairment is not service-connected; even for a claimed impairment found to be service-connected, the veteran may dispute the severity rating that the regional office assigns and ask for an increase in the rating. During fiscal years 2003 and 2004, respectively, the regional offices made about 715,000 and 598,500 decisions involving disability compensation claims. According to VBA, during fiscal years 2003 and 2004, respectively, veterans submitted Notices of Disagreement in about 13.4 and 14.5 percent of all decisions involving disability ratings, and of the veterans who filed Notices of Disagreement, about 34.9 and 44.4 percent went on to submit appeals to the Board. Assisted by 240 staff attorneys, the Board's 52 veterans law judges decide veterans' appeals on behalf of the Secretary. The Board has full de novo review authority and gives no deference to the regional office decision being appealed. The Board makes its decisions based only on the law, VA's regulations, precedent decisions of the courts, and precedent opinions of VA's General Counsel. During the appeals process, the veteran or the veteran's representative may submit new evidence to the Board and request a hearing. In fiscal year 2004, for all VA programs, the Board decided about 38,400 appeals, of which about 94 percent (35,900) were appeals of disability compensation cases that contained an average of 2.2 contested issues per case. In any given case, the Board might grant the requested benefits for one issue but deny benefits for another. In some instances, the Board may find that a case is not ready for a final decision and return (or remand) the case to VBA for rework, such as obtaining additional evidence and reconsidering the veteran's claim. If VBA still does not grant the requested benefits after obtaining the additional evidence, it returns the case to the Board for a final decision.
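For a rough sense of scale, the percentages above can be chained together. The sketch below is a back-of-envelope illustration only: it treats every decision as rating-related, whereas the Notice of Disagreement rates cited by VBA apply specifically to decisions involving disability ratings, so the resulting counts are approximations.

```python
# Back-of-envelope appeal funnel built from the percentages cited above.
# Simplifying assumption: every decision involved a disability rating,
# so these counts slightly overstate the true figures.
decisions = {"FY2003": 715_000, "FY2004": 598_500}
nod_rate = {"FY2003": 0.134, "FY2004": 0.145}       # Notice of Disagreement rate
appeal_rate = {"FY2003": 0.349, "FY2004": 0.444}    # share of NODs appealed

for fy, total in decisions.items():
    nods = total * nod_rate[fy]
    appeals = nods * appeal_rate[fy]
    print(f"{fy}: ~{nods:,.0f} NODs, ~{appeals:,.0f} appeals to the Board")
# FY2003: roughly 95,800 NODs and 33,400 appeals
# FY2004: roughly 86,800 NODs and 38,500 appeals
```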
Of the appeals involving compensation cases decided during fiscal year 2004, the Board reported that it granted requested benefits for at least one issue in about 18 percent of the cases, denied all requested benefits in about 23 percent of the cases, and remanded about 58 percent of the cases to VBA for rework. Effective February 22, 2002, VA issued a new regulation to streamline and expedite the appeals process. Previously, the Board had remanded all decisions needing rework directly to VBA's regional offices. The new regulation, however, allowed the Board to obtain evidence, clarify evidence, cure a procedural defect, or perform almost any other action essential for a proper appellate decision without having to remand the appeal to the regional office. It also allowed the Board to consider additional evidence without having to refer the evidence to the regional office for initial consideration and without having to obtain the appellant's waiver. According to the Board, this change in the process reduced the time required to provide a final decision to the veteran on an appeal, allowed regional offices to use more resources for processing initial claims rather than remands, and virtually eliminated multiple remands on the same case to the regional offices. However, in May 2003, the U.S. Court of Appeals for the Federal Circuit held that the Board could not, except in certain statutorily authorized circumstances, decide appeals in cases in which the Board had developed evidence. As a result, VA established a centralized Appeals Management Center within VBA in Washington, D.C., to take over evidence development and adjudication work on remands. If the Board denies requested benefits or grants less than the maximum benefit available under the law, veterans may appeal to the U.S. Court of Appeals for Veterans Claims, an independent federal court. Unlike the Board, the court may not receive new evidence. It considers only the Board's decision; briefs submitted by the veteran and VA; oral arguments, if any; and the case record that VA considered and that the Board had available. In cases decided on merit (cases not dismissed on procedural grounds), the court may (1) reverse the Board's decision (grant contested benefits), (2) affirm the Board's decision (deny contested benefits), or (3) remand the case back to the Board for rework. Of the 3,489 cases decided on merit during fiscal years 2003-2004, the court reversed or remanded, in part or in whole, about 88 percent of the cases. Under certain circumstances, a veteran who disagrees with a decision of the court may appeal to the U.S. Court of Appeals for the Federal Circuit and then to the Supreme Court of the United States. The Board of Veterans' Appeals has taken action to strengthen its internal system for reviewing the quality of its own decisions. Specifically, the Board has taken steps to improve its quality review system's sampling methodology and to avoid obscuring serious errors by mixing them with less significant deficiencies. We found, however, that the Board still needs to revise its formula for calculating accuracy rates in order to avoid potentially misleading accuracy rates. During our 2002 evaluation, we reviewed the Board's methods for selecting random samples of Board decisions and calculating accuracy rates for its decisions. We found that the number of decisions reviewed was sufficient to meet the Board's goal for statistical precision in estimating its accuracy rate.
However, we pointed out some Board practices that might result in misleading accuracy rates. These practices included not ensuring that decisions made near the end of the fiscal year were sampled and not properly weighting quality review results in the formula used to calculate accuracy rates. At the time of our 2002 report, the Board had agreed in principle to correct these practices. We found in our most recent work that the Board took corrective action in fiscal year 2002 to ensure that decisions made near the end of the fiscal year were sampled. The quality review program now selects every 20th original decision made by the Board's veterans law judges and every 10th decision they make on cases remanded by the court to the Board for rework. However, we found that the Board had not revised its formula for calculating accuracy rates in order to properly weight the quality review results for original decisions versus the results for decisions on remanded cases. We determined that, even if this methodological error had been corrected earlier, the accuracy rate reported by the Board for fiscal year 2004 (93 percent) would not have been materially different. However, to avoid the potential for reporting a misleading accuracy rate in the future, corrective action needs to be taken, and the Board agreed to correct this issue in the very near future. (A simplified numerical illustration of the weighting issue appears below.) In our 2002 evaluation, we also found that the Board included nonsubstantive deficiencies (errors that would not be expected to result in either a remand by the court or a reversal by the court) in calculating its reported accuracy rates. We concluded that the reported accuracy rates understated the level of accuracy that would result if the Board, like VBA, counted only substantive deficiencies in the accuracy rate calculation. VBA had ceased counting nonsubstantive deficiencies in its error rate after the VA Claims Processing Task Force said in 2001 that mixing serious errors with less significant deficiencies could obscure what is of real concern. Similarly, we recommended that the Board's accuracy rates take into account only those deficiencies that would be expected to result in a reversal or a remand by the court. In fiscal year 2002, the Board implemented our recommendation. Also, during the course of our 2002 evaluation of the quality review program, we brought to the Board's attention the governmental internal control standard calling for separation of key duties and the governmental performance audit standard calling for organizational independence for agency employees who review and evaluate program performance. These issues arose because certain veterans law judges who were directly involved in deciding veterans' appeals were also involved in reviewing the accuracy of such decisions. The Board took corrective actions during our review in May 2002 to resolve these issues, so that all quality reviews from which accuracy rates are determined are performed by persons not directly involved in deciding veterans' appeals. In 2002, we also found that the Board collected and analyzed issue-specific data on the reasons that the court remanded decisions to the Board in order to provide feedback and training to the Board's veterans law judges; however, the Board did not collect issue-specific data on the errors that its own quality reviewers found in decisions of the Board's veterans law judges.
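The weighting sketch below uses invented figures (the population counts, sample sizes, and error counts are all hypothetical, and this is not the Board's actual formula). It shows why results from two strata sampled at different rates, as with the 1-in-20 original decisions and 1-in-10 remanded cases described above, must be weighted by each stratum's share of all decisions rather than simply pooled.

```python
# Illustrative sketch: weighting stratified quality review results.
# Hypothetical population counts, sampled at the rates described above.
original_decisions = 38_000      # sampled 1 in 20
remand_decisions = 4_000         # sampled 1 in 10

original_sample, original_errors = 1_900, 114   # stratum accuracy: 94%
remand_sample, remand_errors = 400, 40          # stratum accuracy: 90%

# Naive pooled rate: errors over the combined sample. This over-represents
# remanded cases, which are sampled at twice the rate of original decisions.
pooled = 1 - (original_errors + remand_errors) / (original_sample + remand_sample)

# Properly weighted rate: each stratum's accuracy weighted by its share of
# the decision population, not by its share of the sample.
acc_original = 1 - original_errors / original_sample
acc_remand = 1 - remand_errors / remand_sample
total = original_decisions + remand_decisions
weighted = (acc_original * original_decisions + acc_remand * remand_decisions) / total

print(f"pooled:   {pooled:.3%}")    # 93.304%
print(f"weighted: {weighted:.3%}")  # 93.619%
```

Because remanded cases are oversampled, pooling gives their (here lower) accuracy twice its proper influence and slightly understates the overall rate.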
In 2002, we recommended that the Board revise its quality review program to begin collecting issue-specific data on the errors found by its own quality reviewers in order to identify training that could help improve decision quality. In April 2005, the Board said it had not implemented this recommendation because it believed the benefits would be too limited to justify the substantial reprogramming of the data system that would be required to collect issue-specific data. The Board also pointed out that the issue-specific data captured for court remands have not proved to be as useful as it had expected in identifying ways to provide training that could reduce court remands. Adjudicator judgment is an inherent factor in deciding disability claims, and it introduces the potential for variation in the process. Part of assessing inconsistency, as we recommended in 2002, would include determining acceptable levels of variation for specific types of disabilities. In late 2004, in response to adverse media reports, VA initiated its first study of consistency. Such studies are the first step in determining the degree of variation that occurs and what levels of variation are acceptable. Adjudicators often must use judgment in making disability decisions. Judgment is particularly critical when the adjudicator must (1) assess the credibility of different sources of evidence; (2) evaluate how much weight to assign different sources of evidence; or (3) assess some disabilities, such as mental disorders, for which the disability standards are not entirely objective and require the use of professional judgment. In such cases, two adjudicators reviewing the same evidence might make differing judgments on the meaning of the evidence and reach different decisions, neither of which would necessarily be found in error by any of VA's quality reviewers. For example, in an illustration provided by the Board, consider a disability claim that has two conflicting medical opinions, one provided by a medical specialist who reviewed the claim file but did not examine the veteran, and a second opinion provided by a medical generalist who reviewed the file and examined the veteran. One adjudicator could assign more weight to the specialist's opinion, while another could assign more weight to the opinion of the generalist who examined the veteran. Depending on which medical opinion is given more weight, one adjudicator could grant the claim and the other could deny it. Yet a third adjudicator could apply VA's "benefit-of-the-doubt" rule and decide in favor of the veteran. Under this rule, if an adjudicator concludes that there is an approximate balance between the evidence for and the evidence against a veteran's claim, the adjudicator must decide in favor of the veteran. In the design of their quality review systems, VBA and the Board acknowledge that, in some cases, different adjudicators reviewing the same evidence can make differing, but reasonable, judgments on the meaning of the evidence. As a result, VBA and the Board instruct their quality reviewers not to record an error merely because they would have made a different decision from the one made by the adjudicator, and not to substitute their own judgment for the original adjudicator's judgment if the adjudicator's decision is adequately supported and reasonable. Another example provided by the Board demonstrates how adjudicators must make judgments about the degree of severity of a disability.
VA's disability criteria provide a formula for rating the severity of a veteran's occupational and social impairment due to a variety of mental disorders. This formula is a nonquantitative, behaviorally oriented framework for guiding adjudicators in choosing which of the degrees of severity shown in table 1 best describes the claimant's occupational and social impairment. Similarly, VA does not have objective criteria for rating the degree to which certain spinal impairments limit a claimant's motion. The adjudicator must assess the evidence and decide whether the limitation of motion is "slight, moderate, or severe." To assess the severity of incomplete paralysis, the adjudicator must decide whether the veteran's paralysis is "mild, moderate, or severe." The decision on which severity classification to assign to a claimant's condition could vary in the minds of different adjudicators, depending on how they weigh the evidence and how they interpret the meaning of the different severity classifications. Consequently, it would be unreasonable to expect that no decision-making variations would occur. But it is reasonable to expect the extent of variation to be confined within a range that knowledgeable professionals could agree is reasonable, recognizing that disability criteria are more objective for some disabilities than for others. For example, if two adjudicators were to review the same claim file for a veteran who has suffered the anatomical loss of both hands, VA's disability criteria state unequivocally that the veteran is to be given a 100 percent disability rating. Therefore, no variation would be expected. However, if two adjudicators were to review the same claim file for a veteran with a mental disability, knowledgeable professionals might agree that it would not be out of the bounds of reasonableness if one adjudicator gave the claimant a 50 percent disability rating and the other adjudicator gave a 70 percent rating. However, knowledgeable professionals might also agree that it would be clearly outside the bounds of reasonableness if one adjudicator gave the claimant a 30 percent rating and the other, a 100 percent rating. Although the issue of decision-making consistency is not new, VA only recently began to study consistency issues. In a May 2000 testimony before the House Subcommittee on Oversight and Investigations, Committee on Veterans' Affairs, we underscored the conclusion made by the National Academy of Public Administration in 1997 that VBA needed to study the consistency of decisions made by different regional offices, identify the degree of subjectivity expected for various medical issues, and then set consistency standards for those issues. In August 2002, we drew attention to the wide disparities in state-to-state average compensation payments per disabled veteran, and we voiced the concern that such variation raises the question of whether similarly situated veterans who submit claims to different regional offices for similar conditions receive reasonably consistent decisions. In January 2003, we reported that concerns about consistency had contributed to GAO's designation of the VA disability program as high-risk. Again, in November 2004, we highlighted the need for VA to develop plans for studying consistency issues.
Most recently, in December 2004, the media drew attention to the wide variations in the average disability compensation payment per veteran in the 50 states and published data showing that the average payments varied from a low of $6,710 in Ohio to a high of $10,851 in New Mexico. Reacting to these media reports, in December 2004, the Secretary instructed the Inspector General to determine why average payments per veteran vary widely from state to state. As of February 2005, the Office of Inspector General planned to use data obtained from VBA for all regional offices to identify factors that may explain variations among the regional offices. In March 2005, VBA began a study of three disabilities believed to have potential for inconsistency: hearing loss, post-traumatic stress disorder, and knee conditions. VBA assigned 10 subject matter experts to review 1,750 regional office decisions and plans to complete its analysis of study data in mid-May 2005, develop a schedule for future studies of specific ratable conditions, and recommend a schedule for periodic follow-up studies of previously studied conditions. In our 2002 report, we recommended that VA establish a system to regularly assess and measure the degree of consistency across all levels of VA adjudication, including regional offices and the Board, for specific medical conditions that require adjudicators to make difficult judgments. For example, we said VA could create hypothetical claims for certain medical conditions, distribute the claims to multiple adjudicators at each decision-making level, and analyze variations in outcomes. Such a system would identify variation in decision making and provide a basis to identify ways, if considered necessary, to reduce variation through training or by clarifying and strengthening regulations, procedures, and policies. Although VA agreed in principle with our recommendation and agreed that consistency is an important goal, it commented that it would promote consistency through training and communication. We support such efforts but still believe VA needs to directly evaluate and measure consistency across all levels of adjudication. Otherwise, VA cannot determine whether such training and other efforts are directed at the causes of inconsistency and whether such efforts actually improve consistency. In our November 2004 report, we found that VBA's administrative data were insufficient to analyze inconsistency because we could not reliably use the data to identify decisions made after fiscal year 2000, identify the regional offices that made the original decisions, or determine service-connection denial rates for specific impairments. However, in October 2004, VBA completed its implementation of a new nationwide data system, known as Rating Board Automation (RBA) 2000. VA said this new system could reliably collect the types of data needed to perform the analyses we sought to do. Therefore, we recommended that the Secretary of Veterans Affairs develop a plan, and include it in VA's annual performance plan, containing a detailed description of how VA intended to use data from the new RBA 2000 information system. We recommended that VA conduct systematic studies of the impairments for which RBA 2000 data reveal indications of decision-making inconsistencies among regional offices. VA concurred with our recommendation.
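The kind of screening analysis contemplated here, using decision data to flag impairments with wide office-to-office variation, might look like the sketch below. The file name, column names, and data layout are hypothetical; the report does not describe RBA 2000's actual schema.

```python
# A minimal sketch of screening for decision-making inconsistency,
# assuming a hypothetical extract of decision records with columns:
# regional_office, impairment, granted (0/1).
import pandas as pd

decisions = pd.read_csv("rba2000_extract.csv")  # hypothetical file

# Grant rate for each impairment at each regional office.
rates = (decisions
         .groupby(["impairment", "regional_office"])["granted"]
         .mean()
         .unstack())

# Spread between the highest and lowest office grant rates per impairment;
# wide spreads flag candidates for systematic consistency studies.
spread = rates.max(axis=1) - rates.min(axis=1)
print(spread.sort_values(ascending=False).head(10))
```

Impairments at the top of the resulting list would be natural candidates for the systematic studies the recommendation describes.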
Because the new RBA 2000 data system had been recently implemented, we acknowledged that VA could not implement the recommended plan until it accumulated a sufficiently large body of data under the new system. In our judgment, at least one year of data would be needed to begin such a study. While we believe the studies recently begun by the Office of Inspector General and VBA are positive steps forward in addressing consistency issues, the RBA 2000 data system, if found to be reliable, can provide VA with the data needed to proactively and systematically target the specific impairments that have the widest variations in decision-making outcomes among the regional offices and focus VA's efforts on studying the reasons for variations in those impairments. Building in such analytical capability to augment its quality assurance program would help enhance program integrity and better assure that veterans' disability decisions are made fairly and equitably. Mr. Chairman, this concludes my remarks. I would be happy to answer any questions you or the members of the subcommittee may have. For further information, please contact Cynthia A. Bascetta at (202) 512-7101. Also contributing to this statement were Irene Chu, Ira Spears, Martin Scire, and Tovah Rom. Veterans Benefits: VA Needs Plan for Assessing Consistency of Decisions. GAO-05-99. Washington, D.C.: November 19, 2004. High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003. Veterans' Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002. SSA and VA Disability Programs: Re-Examination of Disability Criteria Needed to Help Ensure Program Integrity. GAO-02-597. Washington, D.C.: August 9, 2002. Veterans Benefits Administration: Problems and Challenges Facing Disability Claims Processing. GAO/T-HEHS/AIMD-00-146. Washington, D.C.: May 18, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The House Subcommittee on Disability Assistance and Memorial Affairs asked GAO to update a 2002 study to determine what VA has done to (1) correct reported weaknesses in methods used by the Board to select decisions for quality review and calculate the accuracy rates reported by the Board and (2) address the potential for inconsistency in decision-making at all levels of adjudication in VA, including VA's 57 regional offices and the Board. GAO said in 2002 that VA had not studied consistency even though adjudicator judgment is inherently required in the decision-making process, and state-to-state variations in the average disability compensation payment per veteran raised questions about consistency. In January 2003, in part because of concerns about consistency, GAO designated VA's disability program as high-risk. The Department of Veterans Affairs (VA) has taken steps to respond to GAO's 2002 recommendations to correct weaknesses in the methods for selecting decisions by the Board of Veterans' Appeals (Board) for quality review and calculating the accuracy rates reported by the Board. Specifically, the Board now ensures that decisions made near the end of the fiscal year are included in the quality review sample, and the Board now excludes from its accuracy rate calculations any errors that do not have the potential for resulting in a reversal by or remand from the court. GAO found that the Board had not yet revised its formula for calculating accuracy rates in order to properly weight the quality review results for original Board decisions versus the results for Board decisions on cases remanded by the court. However, GAO believes correcting this calculation method will not materially affect the Board's reported accuracy rates. VA still lacks a systematic method for ensuring the consistency of decision-making within VA as a whole, but has begun efforts to understand why average compensation payments per veteran vary widely from state to state. These efforts include studies underway by VA's Office of Inspector General and the Veterans Benefits Administration, which oversees the operations of VA's regional offices. Some variation is expected since adjudicators often must use judgment in making disability decisions, but VA faces the challenge of determining whether the extent of variation is confined within a range that knowledgeable professionals could agree is reasonable.
The federal government has a long-standing policy to maximize contracting opportunities for small businesses. For example, the Business Opportunity Development Reform Act of 1988 amended the Small Business Act to establish an annual government-wide goal of awarding at least 20 percent of prime contract dollars to small businesses. The Small Business Reauthorization Act of 1997 further amended the Small Business Act to increase the goal to at least 23 percent. To help meet this goal, SBA annually establishes prime contract goals for various categories of small businesses for each federal agency. Although SBA is responsible for coordinating with executive branch agencies on this goal, agency heads are responsible for achieving small business goals for their agencies. As previously discussed, in 1978 Congress also established an OSDBU in each federal agency with procurement powers. These offices are intended to advocate for small businesses within the agencies and thus also work with agencies to achieve contracting goals. Duties of OSDBU directors and functions of OSDBUs. The Small Business Act, as amended, establishes a number of requirements related to the functions and duties of OSDBUs. For instance, section 15(k)(3) generally establishes a direct reporting relationship between the OSDBU director and the agency head or deputy head. Other requirements in the act specify that the director must have supervisory authority over staff performing certain duties, implement and execute the functions and duties under the relevant sections of the Small Business Act, and identify proposed solicitations that involve the bundling of contract requirements. The National Defense Authorization Act for Fiscal Year 2013 further amended the Small Business Act to include provisions that specify minimum seniority and pay levels for OSDBU directors, require that their performance appraisals be signed by the agency head or deputy head, and require that the OSDBU director have certain prior experience. Section 15(k)(2) specifies that the OSDBU director must be appointed to a position in the Senior Executive Service (SES), or in certain cases be compensated at not less than the minimum rate of basic pay for grade GS-15 of the General Schedule. Section 15(k)(3) specifies that the agency head or deputy head generally be solely responsible for the performance appraisals of the director. The act also added a number of additional requirements to the duties of the OSDBU director, and specified that the director not hold any other title, position, or responsibility, except as necessary to carry out the responsibilities of the OSDBU as described in section 15(k). 
Some of the required functions of the OSDBU director in section 15(k) of the Small Business Act include the following: identifying proposed solicitations that involve significant bundling of contract requirements; working with agency acquisition officials, where appropriate, to revise such proposed solicitations to increase the probability of participation by a small business; assisting small businesses in obtaining payments from an agency (or prime contractor) with which they have contracted; assigning a small business technical adviser to each office with an SBA-appointed procurement center representative (an SBA staff member who is assigned to federal agencies or contract administration offices and carries out SBA policies and programs); and providing the chief acquisition officer and senior procurement executive of the agency with advice and comments on acquisition strategies, market research, and justifications related to certain provisions of the act. To carry out the functions listed in section 15(k) of the Small Business Act, OSDBU directors provide advice on small business matters and collaborate with the small business community. Some of the primary duties of OSDBU directors include advising agency leadership on small business matters; providing direction for developing and implementing policies and initiatives to help ensure that small businesses have the opportunity to compete for and receive a fair share of agency procurement; providing agency acquisition and program personnel with leadership and oversight of education and training related to small business contracting; conducting reviews of small business programs; and serving as the agency liaison to SBA, including providing annual reports on agency activities, performance, and efforts to improve performance. Reviews by Small Business Procurement Advisory Council. The National Defense Authorization Act for Fiscal Year 2013 amended the statutory requirements of 15 U.S.C. § 644a to require the interagency council known as the Small Business Procurement Advisory Council (SBPAC) to annually review each OSDBU to determine compliance with section 15(k) requirements. These reviews are used to help determine SBA's annual scorecard grade for each agency. SBPAC must report the results to the House and Senate Small Business committees. SBA chairs the council, which serves to assist agencies in their acquisition planning efforts. The council is also directed to identify best practices for maximizing small business utilization in federal contracting. Roles of other procurement officials. OSDBU directors are not the only officials responsible for helping small businesses participate in federal procurement. At the agency level, the heads of procurement departments (sometimes with the title of senior procurement executive) are responsible for implementing the small business programs at their agencies, including achieving program goals. Generally, staff in agency procurement departments assigned to work on small business issues (small business specialists) coordinate with OSDBU directors on their agencies' small business programs. Chief acquisition officers provide a focal point for acquisition in agency operations.
Key functions of the chief acquisition officers include monitoring and evaluating agency acquisition activities, increasing the use of full and open competition, increasing performance-based contracting, making acquisition decisions, managing agency acquisition policy, managing acquisition career development, planning acquisition resources, and conducting acquisition assessments. Of the five director-related requirements we reviewed, the level of demonstrated compliance varied, but was not universal for any one requirement (see table 1): Four of the 10 agencies we reviewed for the requirement that the director report to the head or deputy head of the agency did not demonstrate compliance. Twenty-three of the 24 agencies we surveyed demonstrated compliance with two requirements on director experience and supervisory duties. Nineteen of 24 agencies demonstrated compliance with a requirement for collateral duties of the director. Eighteen of 24 agencies demonstrated compliance with a requirement for the director's compensation and seniority. Appendix II discusses the section 15(k) requirements included in this review. See appendixes III–XXVII for our overall determinations of demonstrated compliance with section 15(k) requirements and determinations of demonstrated compliance at each agency. Section 15(k)(3) generally requires that the OSDBU director report directly to and be responsible only to the agency head or the deputy head. Documentation we reviewed and discussions we held at 6 of the 10 agencies indicated that the directors reported as required. Documents such as organizational charts, position descriptions, and performance appraisals from the Departments of the Air Force, Army, Labor, Navy, and State and from the National Aeronautics and Space Administration (NASA) supported this finding. The remaining four OSDBU directors reported to officials at lower levels than the agency head or deputy head. Three of the four—at the Departments of Energy and Veterans Affairs (VA) and the Social Security Administration (SSA)—reported to chiefs of staff. At the Department of Education, the OSDBU director reported to the senior policy adviser. Energy. The position description states that the director reports directly to the secretary. However, the performance appraisals we reviewed were signed by the deputy chief of staff and the chief of staff. An Energy official stated that the secretary provides input into the OSDBU director's rating. VA. Based on documentation we received from and discussions we held with VA, the OSDBU director reports to the chief of staff, who signs the performance appraisals. An OSDBU official told us that if a matter required the attention of the secretary, the director would first advise the chief of staff. However, the official stated that the director has all the access to the secretary and deputy secretary that he needs. SSA. The organizational chart and performance appraisal show that the OSDBU director reports to the chief of staff. Agency officials explained that, since 2013, the commissioner has delegated this duty to the deputy commissioner. As of May 10, 2017, the agency did not have a deputy commissioner because the deputy commissioner was serving as the acting commissioner. In the absence of a deputy commissioner, the responsibilities were transferred to the chief of staff. Education. The performance appraisal indicates that the OSDBU director reports to the senior policy adviser.
An OSDBU official explained that, currently, the agency does not have a deputy secretary and that this duty had been delegated to the senior policy adviser. The official also stated that, in the past, the director typically met with the deputy secretary on a monthly basis and provided an update on small business activities. Levels of demonstrated compliance with three other requirements for directors (experience, supervisory duties, and collateral duties) varied somewhat by requirement at the 24 agencies we surveyed, but were high across all three requirements. Director experience. Section 15(k) requires that the OSDBU director have prior experience in one of a number of enumerated roles (such as federal contracting officer, program manager for a federal acquisition program, or attorney specializing in federal procurement law). Based on their survey responses and follow-up discussions with officials at the agencies, 23 of the 24 agencies demonstrated compliance with the requirement relating to prior experience of OSDBU directors (examples follow). State. According to the survey response, the OSDBU director has prior experience in program management for a federal acquisition program and also was a small business technical adviser and small business liaison officer. Education. The survey response from the Department of Education indicated the director had prior experience as a federal contracting officer, contracts administrator, and federal small business contracts manager. Interior. Survey responses from the Department of the Interior indicated the director had prior experience as a contracts administrator and small business liaison officer. The OSDBU director at the Department of Housing and Urban Development (HUD) did not demonstrate compliance. While the OSDBU director had experience in related jobs, the director had not held any of the positions specifically identified in section 15(k). The survey response cited the director's prior experience as the deputy assistant secretary for operations and management and deputy chief human capital officer. The director also served in several positions in which contracting was a responsibility. An OSDBU official stated that the director has had a long work history in a variety of jobs and felt that this history prepared the director for the role. Supervisory duties. Section 15(k)(7) requires that the OSDBU director have supervisory authority over agency personnel to the extent that the functions and duties of such personnel are related to the functions or duties implemented and executed by the OSDBU. Based on the survey responses, follow-up with agency officials, and review of position descriptions, 23 of 24 agencies demonstrated compliance with the requirement relating to supervisory duties of the OSDBU director. For instance, at the Departments of the Army and Homeland Security, the position descriptions and survey responses we reviewed indicated that the directors have such supervisory authority over agency personnel (to the extent that the functions and duties of the personnel relate to sections 8 and 15 of the Small Business Act). At the Department of Justice, the survey response indicated that the OSDBU director oversees OSDBU staff and has supervisory authority over small business specialists whose duties are related to sections 8 and 15 of the Small Business Act.
Also, in a follow-up discussion, an OSDBU official told us that the director works regularly with each small business specialist in executing the department's small business programs. In contrast, the Defense Logistics Agency (DLA) did not demonstrate compliance with the requirement relating to supervisory duties. An official in the OSDBU explained that, while the director appointed small business associates to work in the field, the director does not directly supervise field staff. The office provides policy and program oversight, and the field staff report to deputy commanders at their sites. Collateral duties. Section 15(k)(15) requires that the person in the position of the director exclusively carry out the duties enumerated in the Small Business Act and that the OSDBU director not hold any other title, position, or responsibility, except as necessary to carry out responsibilities under this subsection. Based on the survey responses and follow-up interviews with officials at selected agencies, we determined that 19 of 24 agencies demonstrated compliance with this requirement. The remaining five OSDBU directors or acting directors, at the U.S. Department of Agriculture (USDA), the Department of Labor, the Environmental Protection Agency (EPA), SSA, and the U.S. Agency for International Development (USAID), indicated at the time of our review that they had collateral responsibilities. USDA. An OSDBU official indicated that the acting director holds another position as the acting assistant secretary for administration. The official indicated that the acting director spends 40–60 hours a week on duties related to the acting assistant secretary position. The official explained that the agency has had an acting OSDBU director since January 2017, when the prior OSDBU director's position as a political appointee ended. The official stated that past OSDBU directors worked exclusively as OSDBU directors and that, when a permanent OSDBU director is appointed, that person will not have other duties. The official stated that he does not know when the director position will be filled. He said that the secretary of USDA, who was recently confirmed, would likely make the political appointment. Labor. Based on information we received from and discussions we held with the Department of Labor, the acting OSDBU director holds other positions and titles, including assistant secretary for administration and management and chief acquisition officer. According to a March 2010 department order, the goal of realigning the agency's small business-related functions under the assistant secretary for administration and management is to better integrate small business outreach and small business procurement within the overall procurement function of the department. The assistant secretary for administration and management was appointed to serve as the OSDBU director. EPA. According to an agency official, the director also oversees two EPA-wide programs: (1) the Disadvantaged Business Enterprise Program, which is designed to increase the use of such businesses in procurements funded under EPA's financial assistance agreements; and (2) the Asbestos and Small Business Ombudsman Program, which advocates for small businesses on regulatory and environmental compliance issues. The official stated that the intent is to increase cost efficiencies and effectiveness in overlapping functions directed at small businesses. The official also stated that sharing resources to accomplish these complementary agendas makes sense for the agency.
Further, the official said that the director helps to provide administrative support to the procurement manager for the Disadvantaged Business Enterprise Program. Finally, the Clean Air Act of 1990 requires the OSDBU, through the program ombudsman, to monitor activities for the Asbestos and Small Business Ombudsman Program. SSA. The survey response and follow-up discussion indicated that the OSDBU director held a collateral position as agency coordinator for the Electronic Subcontracting Reporting System. SSA officials explained that, originally, the subcontracting reporting was collected manually. When the reporting moved to an electronic collection system, the OSDBU director was named as the agency contact. This resulted in the OSDBU accepting responsibility for this reporting system, which contracting officers use. An OSDBU official stated that the time the director spends on this activity is minimal—occasionally addressing a few e-mails. The official said the system is critical to the director's role, but the coordination work is not. The OSDBU official does not view this as a collateral duty. USAID. The survey response stated that the OSDBU director oversees the Minority Servicing Institutions Program. In a follow-up meeting, an OSDBU official told us that this activity was not essential to carrying out OSDBU duties. The program involves both advocacy and outreach and has a full-time coordinator. The OSDBU official stated that during most weeks the director spends less than 2–3 hours on activities related to the Minority Servicing Institutions Program but on occasion may spend more time on such work. The official added that the OSDBU director's overseeing this responsibility made sense from an agency perspective because the OSDBU's role includes promoting and assisting disadvantaged businesses. Section 15(k)(2) generally requires that the OSDBU director be appointed by the agency head to an SES position. However, in cases in which the positions of chief acquisition officer and senior procurement executive at an agency are not SES positions, the OSDBU director may be appointed to a position compensated at not less than the minimum rate of basic pay for grade 15 of the General Schedule. Survey responses and agency documents we reviewed at the 24 agencies show that the positions of most permanent OSDBU directors (18 of 24) were at the SES level. However, 6 OSDBU directors—at USDA, the Departments of Commerce and Labor, DLA, the Office of Personnel Management (OPM), and SSA—held positions at other levels, such as GS-15, while the chief acquisition officers or senior procurement executives in these agencies held SES-level positions or were executive schedule political appointees who were not in SES positions. USDA. At the time of our review, the current OSDBU director was an acting director. OSDBU officials at USDA explained that, historically, the permanent director was a political appointee holding an SES position. However, the prior director was a political appointee holding a GS-15 position. The officials explained that the position was temporary (6 months) and that it would have been difficult to fill the position with a member of the SES on a short-term basis. According to the survey, both the senior procurement executive and the chief acquisition officer held SES positions. Commerce.
Officials told us that they had been discussing the possibility of converting the OSDBU director position to an SES position and the process for securing additional resources for this conversion. Labor. At the time of our review, Labor had an acting director who held an SES position. We assessed compliance based on the immediate prior permanent OSDBU director. An official from the Department of Labor stated in an e-mail that the prior permanent OSDBU director was a presidentially appointed, Senate-confirmed position compensated under the executive schedule. DLA. The survey response and the position description indicated that the director held a General Schedule position (GS-15). According to the survey, the chief acquisition officer and senior procurement executive are SES positions. In a follow-up meeting, agency officials stated that the agency has requested that DOD seek congressional approval to add an SES position for the OSDBU director. They stated that the agency has been waiting for authorization to make this change. OPM. An OSDBU official told us that, since the enactment of this requirement in 2013, the director has held a GS-15 position. The agency has been considering making this an SES position, but the official explained that, until the agency administrator position was filled, the agency would not act on this matter. SSA. Officials said that, although the law requires the OSDBU director position to be equivalent to that of the senior procurement executive, they believe this does not work at SSA. They explained that SSA is a small agency in terms of acquisitions. They further stated that GS-15 is the level appropriate for the OSDBU director. Continued noncompliance with the section 15(k) requirements relating to OSDBU directors, described in this section of the report, potentially undermines the intent of the act. Reporting to lower levels of management may result in OSDBU directors not having direct access to top agency management, which may limit their influence. Collateral duties may take time away from the critical functions enumerated in section 15(k). While some OSDBU officials believed that these collateral duties are minimal or are appropriate, the agencies have not reported their concerns to Congress. Also, if an OSDBU director holds a General Schedule position and the agency's chief acquisition officer and senior procurement executive are SES positions, the OSDBU director's ability to effectively advocate for small businesses may be affected. Levels of demonstrated compliance were high for five of eight functional requirements, but were much lower for the remaining three requirements (see table 2). All 24 agencies demonstrated compliance with three section 15(k) areas related to providing advice to officials, providing training to small businesses or acquisition personnel, and receiving unsolicited proposals and forwarding them when appropriate. Twenty-three of the 24 agencies demonstrated compliance related to identifying and addressing significant bundling of contract requirements and providing payment assistance to small businesses. Ten of the 24 agencies did not demonstrate compliance with the requirement to assign small business technical advisers, and eight did not demonstrate compliance with the requirement to provide advice on proposed in-sourcing decisions. Nine of the 24 agencies did not demonstrate compliance with the requirement to respond to notifications of undue restrictions on the ability of small businesses to compete.
Our review of survey responses, agency documents, and interviews with agency officials showed that all 24 agency OSDBUs demonstrated compliance in three areas—providing advice to officials, providing training to small businesses or acquisition personnel, and forwarding unsolicited proposals. Provide training to small businesses. Section 15(k)(13) states that an officer or employee of the OSDBU may provide training to small businesses and contract specialists. Each of the 24 agencies demonstrated compliance with the requirement. Agencies provided training to contracting personnel, small businesses, or both. The scope of the training varied from presentations to staff on the fundamentals of small business contracting and SBA's socioeconomic contracting goals to one-on-one coaching with small businesses. One OSDBU official told us that the agency provided regular, ongoing, ad-hoc small business training to acquisition and program staff on an almost daily basis as part of its work. Another OSDBU official told us they provided both in-person and virtual training to small businesses and contracting specialists. An additional OSDBU official told us that the OSDBU manages an annual training program that consists of 18 to 25 individual training courses that cover a wide variety of issues, from specific small business programs to using third-party contracts. Receive unsolicited proposals and forward them when appropriate. Section 15(k)(14) requires OSDBUs to receive unsolicited proposals and to forward them, when appropriate, to personnel of the activity responsible for reviewing such proposals. All 24 OSDBUs indicated that, if the office were to receive an unsolicited proposal, they would forward it to appropriate agency personnel. Some OSDBUs indicated that it was a rare occurrence for the OSDBU office to receive an unsolicited proposal. Several agencies, including NASA and HUD, had publicly available procedures and guidelines for submitting an unsolicited proposal to the appropriate office. Based on our review of survey responses, policy documents, and interviews with agency officials, 23 of the 24 agencies demonstrated compliance with identifying and addressing significant bundling of contract requirements and with providing payment assistance to small businesses. Identify and address bundling of contract requirements. Section 15(k)(5) requires OSDBUs to identify proposed solicitations that involve significant bundling of contract requirements and, where appropriate, to work with agency officials to mitigate the effects on small businesses. Twenty-three of 24 OSDBUs demonstrated compliance with this requirement. Most agencies indicated they had policies for the OSDBU to review proposed solicitations, and many cited a dollar threshold above which proposed procurements are reviewed for bundling. For example, OPM's survey response and submitted policy documentation indicated that small business specialists review all proposed procurements over $150,000. One agency did not demonstrate compliance. Within the Department of Defense – Office of the Secretary, the OSDBU director has an oversight role in relation to identifying proposed bundling, rather than an implementation role. According to OSDBU officials, the Office of the Secretary's small business staff work with contracting officers to review proposed acquisitions and mitigate the effects of bundling.
However, due to the size and decentralized nature of contracting at the Office of the Secretary, these personnel are not part of the OSDBU. Provide assistance on payments. Section 15(k)(6) requires OSDBUs to assist small businesses in obtaining payments, required late payment interest penalties, or information on payments. Twenty-three of 24 OSDBUs helped small businesses seeking assistance with payments, whether from the agency or from a prime contractor (when the small business is a subcontractor). The types and scope of payment assistance varied, but most agencies had policies to address small business payments. For example, the survey response from the Department of Justice indicated that the agency has a policy to pay small businesses within 15 days of receiving an invoice. An agency official thought this policy greatly reduced the requests for assistance the OSDBU received, but the official noted that, should a small business ask for assistance, the OSDBU would provide assistance as needed. In contrast, SSA did not demonstrate compliance. Officials we interviewed said the OSDBU only assists in limited instances involving a small business seeking help with payment issues, usually by referring the business to the contracting office. The officials said that the OSDBU generally does not get involved when a small business seeks payment from a prime contractor. In reviewing information from our survey, follow-up questions, interviews, and policy documentation, we determined that 10 of the 24 agencies did not demonstrate compliance with the requirement for assigning small business technical advisers, and 8 of the 24 agencies did not demonstrate compliance with providing advice on proposed in-sourcing decisions. Nine of the 24 agencies did not demonstrate compliance with the requirement to respond to notifications of an undue restriction on the ability of small businesses to compete. Assign small business technical advisers. Section 15(k)(8) requires the OSDBU director to assign a small business technical adviser to each office in which SBA has assigned a procurement center representative. Fourteen of 24 agencies demonstrated compliance with this requirement, while 10 did not. Officials at the 10 agencies frequently cited organizational structure as a barrier to assigning technical advisers; however, we determined that these agencies did not demonstrate compliance, as the following examples illustrate. State. An agency official stated that the OSDBU director does assign small business technical advisers to offices with an SBA procurement center representative; however, the technical advisers are not full-time employees of the procuring activity and are only assigned to work with the procurement center representative as required. There is 1 procurement center representative assigned to cover all 46 bureaus at the Department of State. Small business technical advisers within the OSDBU are assigned to work with the bureaus based on need. All technical advisers are full-time employees in the OSDBU and have at least 8 years of experience. SSA. Agency officials commented that the statute was intended for agencies with larger acquisition operations with multiple acquisition offices. SSA is a smaller procurement agency (one acquisition office) and does not assign a technical adviser. The officials stated that the agency does have an adviser position (which is termed the Small and Disadvantaged Business Utilization Specialist), but that position is managed by another office. Army and Navy.
The Departments of the Army and Navy told us that their OSDBU directors did not assign small business technical advisers because the technical advisers were hired by and reported to the head of the contracting activity at the procurement center in which they were located. At Navy, an agency official stated that law and regulation disagree on this requirement. The Defense Federal Acquisition Regulation Supplement delegates the responsibility for hiring technical advisers to the head of the contracting activity. An agency official stated that this is an effective way to implement the Small Business Act. However, when statutory provisions, such as section 15(k), conflict with regulations, such as the acquisition regulation, the statute controls. Advise on in-sourcing. Section 15(k)(11) requires the OSDBU director to review and advise the agency on any decision to convert an activity performed by a small business to an activity performed by a federal employee (known as in-sourcing). Based on the survey responses and interviews, we determined that 8 of the 24 agencies did not demonstrate compliance with this requirement. Agencies not demonstrating compliance typically said that OSDBUs did not have a role in reviewing every decision to in-source an activity but that the office might be consulted in some cases, as shown in the following examples. HUD. An OSDBU official told us that in-sourcing was generally considered a business decision and carried out by the Office of the Chief Procurement Officer, and that the OSDBU was not consulted. SSA. The officials told us that the budget office handles in-sourcing conversions, but that the budget office might contact the OSDBU regarding in-sourcing on an informal basis. Respond to notification of an undue restriction on ability of small business to compete. Section 15(k)(17) requires that, when notified by a small business (before contract award) that the small business believes that a solicitation, request for proposal, or request for quotation unduly restricts the ability of the small business to compete for the award, the OSDBU director must (1) submit the notice to the contracting officer and, if necessary, recommend ways to increase the opportunity for competition; (2) inform the agency's advocate for competition; and (3) ensure that the small business is aware of other resources and processes available to address unduly restrictive provisions. Nine of the 24 agencies did not demonstrate compliance with all of the required steps. OSDBU officials from the nine agencies not demonstrating compliance told us that they would carry out only two of the three required follow-up actions. For instance, OSDBU officials at some agencies told us that, after receiving a notification, the directors would discuss the issue with the contracting officer working on the solicitation or proposal and that they also would ensure the small business was aware of resources to address the issue, but they would not consistently notify the agency's advocate for competition (examples follow). NASA, USDA, and DLA. The OSDBU officials from these agencies indicated that the goal was to resolve the competition issue at the lowest level possible, meaning directly with the contracting officer, and thus they did not inform the agency's competition advocate. The officials also believed that resolving issues at the lowest levels was more efficient than notifying the agency advocate for competition.
At USDA, OSDBU officials indicated that, if direct resolution with the contracting officer was unsuccessful, the situation could be elevated, possibly to the level of the advocate for competition. Additionally, an OSDBU official at NASA said it was rare to receive notifications of this type because the agency proactively makes solicitations work for small businesses. USAID. The survey response indicated that the OSDBU does not inform the agency's advocate for competition of the notice. An agency official told us that the office goes directly to the procurement officer and ombudsman to resolve competition issues. In addition to survey responses about individual functions, a few agencies commented on how staffing levels affected their ability to fully carry out section 15(k) functions. The Department of Commerce said the OSDBU was significantly affected by low staffing levels, limiting it in efforts such as creating and updating small business contracting policies, reviewing broader acquisition policies that may affect small businesses, and conducting training for small businesses. SSA stated that staffing levels prevented the OSDBU from attending outreach events and meeting individually with small business owners due to scheduling conflicts. Additionally, the Air Force indicated that its OSDBU could do more, or have a more robust program implementation, if it had additional staff. Among the agencies we identified as not having demonstrated compliance with certain OSDBU function requirements (such as assigning small business technical advisers or responding to notifications of an undue restriction on competition), some believed that their existing organizational structure was a barrier to carrying out an activity or that issues were best resolved at the lowest levels possible without notifying the agency advocate for competition. But continued noncompliance with these requirements may undermine the intent of the provisions and may limit the extent to which OSDBUs can advocate for small businesses. Additionally, by not having a role in carrying out certain section 15(k) requirements, OSDBUs may be unaware of small business matters that might require further attention. In addition to the 13 requirements relating to OSDBU directors or to OSDBU functions, we reviewed one additional requirement. Section 15(k)(16) requires that each fiscal year the OSDBU director submit a report to the House Committee on Small Business and the Senate Committee on Small Business and Entrepreneurship describing training provided and training and travel expenditures in the past fiscal year. Most OSDBUs (22 of 24) told us they did not submit these reports to Congress or SBA in past years. Some of the agencies told us in interviews that they had not submitted this report because SBA or the committees had not provided guidance on how to do so. Other agencies indicated they were unaware of this requirement. Two agencies (the Department of Transportation and USAID) indicated that they had submitted fiscal year 2014 and 2015 reports to Congress, and they provided us with copies. During the course of our review, SBA established new procedures to collect training and travel reports from all the OSDBUs. According to SBA, all of the 24 agencies submitted their fiscal year 2016 reports to SBA, which SBA compiled and submitted as a consolidated report to Congress in June 2017.
We found that some SBPAC peer review scores were inconsistent with our demonstrated compliance determinations (for the section 15(k) requirements we both considered). As required, the SBPAC peer review panel annually conducts reviews of each OSDBU to determine compliance with section 15(k) requirements. The review assesses an agency’s progress plan by considering seven success factors for achievement of and commitment to small business contracting. SBPAC must report the results to the House and Senate Small Business committees. The peer reviews are a form of internal control that is intended to provide some assurance that OSDBUs comply with section 15(k) requirements. More specifically, for several agencies, our compliance determinations for the section 15(k) requirements related to organizational structure did not align with SBPAC’s fiscal year 2016 scores for the “OSDBU organization” success factor (see table 3). For example, the Department of Labor’s OSDBU organization score was 0.9 (classified by SBA as above average), but we determined that the agency did not demonstrate compliance with two of five section 15(k) requirements related to organizational structure. Similarly, SSA received an OSDBU organization score of 0.8 (satisfactory), but we determined that it had not demonstrated compliance with four of five section 15(k) requirements related to organizational structure. Table 3 provides additional examples of inconsistencies between the SBPAC scores and our determinations. In addition, agencies in the most recent review received overall scores (across the seven success factors) of 94–98 percent. For each success factor, agencies provide SBPAC reviewers with a brief narrative explaining their efforts and can (but are not required to) submit up to three supporting documents. The resulting assessment scores are then used in developing SBA’s annual scorecard grade for each agency. According to federal standards for internal control, management should use quality information to make informed decisions and evaluate an entity’s performance in achieving key objectives. Additionally, these standards state that management should design control activities to achieve objectives. For example, SBA provides guidance to peer reviewers and agencies that lists two examples of documentation that agencies may submit to support their compliance with the five 15(k) sections included under the success factor for OSDBU organization. Section 15(k)(3) requires that the OSDBU director’s performance appraisal be signed by the agency head, deputy head, or, in the case of DOD, the secretary or secretary’s designee. But the only two examples of documentation that are included in the SBA guidance are an organizational chart and an employee job description. Reviewing these two types of documents may allow for some determination of the reporting chain at an agency, but it would not allow for a determination of whether the required official signed a director’s performance appraisal. During our review, we requested these documents and also a copy of the performance appraisal from each agency to support our determination of whether the OSDBU director reported directly to the agency head or deputy head as generally required in section 15(k)(3). Additionally, we discussed the section 15(k) requirements with agency staff to clarify information and assess the extent to which the OSDBU met this requirement. 
Other than reviewing the documentation provided by agencies, SBA's guidance for the peer review panel does not indicate any means by which peer reviewers could obtain or clarify information. SBA officials told us they rely on members of SBPAC to oversee the review of their peers and determine thresholds of evidence, and also on the agencies to provide information in good faith. As a result of this approach, and as differences between the peer review scores and our compliance determinations suggest, SBPAC scores may not accurately reflect an agency's compliance with section 15(k) requirements. Planned changes to scoring in the peer review process also may affect the reliability of the scores and information reported about agency achievements in small business contracting. SBA has been updating its SBPAC peer review process and also plans to update its scoring methodology. SBA officials told us the updates to the peer review process will result in an expanded review that addresses 18, and possibly as many as 21, requirements of section 15(k). Officials said they expect that updated peer review panel guidance will be finalized later this year for use with the fiscal year 2017 scorecard and peer review. Preliminary information that SBA provided in a description of changes to the fiscal year 2017 scorecard suggests that the new review process will be similar to the current process. In response to a requirement in the National Defense Authorization Act for Fiscal Year 2016, SBA also will change its scorecard methodology for fiscal year 2017. The provision specifies that an agency's performance towards its prime contracting goals will account for 50 percent of an agency's grade (versus 80 percent in the current formulation). The remaining 50 percent is to be weighted in a manner determined by the administrator of SBA based on certain legal requirements. SBA has preliminarily determined that the remaining 50 percent will be allocated as follows: 20 percent for the results of the peer review of section 15(k) requirements (versus the current 10 percent), 20 percent for subcontracting, and 10 percent for a comparison of awarded contracts on a year-over-year basis. Applying similar standards to an updated peer review process that is weighted more heavily in calculating agencies' overall SBA scorecard grades could further reduce the reliability of these scorecard grades and of the information reported to Congress. Without reliable information from the SBPAC peer review, Congress's ability to oversee federal advocacy for small businesses through OSDBUs may be hindered. When we spoke to SBA officials about the differences between the results of the peer review and our review of compliance with section 15(k) requirements, the officials indicated that they have been developing additional guidance and were considering increasing the threshold of evidence used in the peer review but had no firm plans to do so. Agencies generally demonstrated high levels of compliance with some section 15(k) requirements but less so for others. For a few section 15(k) requirements for which agencies did not demonstrate compliance, staff at some agencies explained that their agencies had carried out the required activities outside of the OSDBU or by using different processes than specified in the requirements. In a few instances, some staff thought that the differing processes were more efficient for their agency.
We did not assess whether these different approaches facilitated the execution of required activities, but focused on whether agencies demonstrated compliance with the requirements as described in section 15(k). Continued demonstrated noncompliance with these requirements may undermine the intent of the provisions and may limit the extent to which OSDBUs can advocate for small businesses. If agencies still believe that their procedures for certain activities are sufficient to advocate for small business contracts, at a minimum agencies have the obligation to explain their noncompliance to Congress and provide support for their views, including requesting any statutory flexibilities to permit exceptions as appropriate. With SBPAC reviews potentially constituting 20 percent of agency overall SBA scorecard grades under the revised process, the reliability of the SBPAC peer review takes on greater importance. However, the results of our review often diverged from SBPAC's in areas that overlapped (our review also included section 15(k) requirements that are not part of the peer review). The divergence in results suggests that the process could be enhanced. For instance, current SBA guidance is limited in describing procedures and methods for the peer review. Enhancing SBA's peer review guidance can help increase the reliability of the peer review compliance determinations and provide more consistency with federal internal control standards.

We are making the following 20 recommendations. To address demonstrated noncompliance with section 15(k) of the Small Business Act, as amended, we are making recommendations to the heads of 19 agencies:

The Director of the Defense Logistics Agency should comply with sections 15(k)(2), (k)(7), (k)(11), and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of Agriculture should comply with sections 15(k)(2), (k)(15), and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of the Army should comply with section 15(k)(8) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of Commerce should comply with sections 15(k)(2), (k)(8), (k)(11), and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of Defense should comply with sections 15(k)(5) and (k)(8) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of Education should comply with sections 15(k)(3) and (k)(11) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of Energy should comply with sections 15(k)(3), (k)(8), and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of HUD should comply with sections 15(k) and (k)(11) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.
The Secretary of the Interior should comply with sections 15(k)(11) and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of Labor should comply with sections 15(k)(2) and (k)(15) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of the Navy should comply with section 15(k)(8) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of State should comply with sections 15(k)(8) and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of the Treasury should comply with sections 15(k)(8) and (k)(11) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Secretary of Veterans Affairs should comply with sections 15(k)(3), (k)(8), and (k)(11) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Administrator of EPA should comply with section 15(k)(15) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Administrator of NASA should comply with section 15(k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Director of OPM should comply with sections 15(k)(2), (k)(8), and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Commissioner of SSA should comply with sections 15(k)(2), (k)(3), (k)(6), (k)(8), (k)(11), and (k)(15) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

The Administrator of USAID should comply with sections 15(k)(15) and (k)(17) or report to Congress on why the agency has not complied, including seeking any statutory flexibilities or exceptions believed appropriate.

As SBA continues to enhance the SBPAC peer review process, the SBA Administrator in her capacity as head of SBPAC should include more detailed guidelines than those used for the current process to facilitate a more in-depth review of agencies' compliance with section 15(k) requirements.

We provided a draft of this report for comment to the 24 agencies with OSDBU directors in our review as well as SBA. Four agencies that demonstrated compliance with section 15(k) requirements—the Departments of Homeland Security, Justice, and Transportation and the General Services Administration—indicated that they did not have comments. In addition, USDA—which did not demonstrate compliance with three section 15(k) requirements—responded that it did not have comments. We received comments from DOD on behalf of all 5 DOD agencies in our review (Air Force, Army, Navy, DLA, and the Office of the Secretary). Air Force demonstrated compliance with the requirements, and DOD did not comment on our findings for Air Force. DOD partially agreed with our recommendation to DLA and did not agree with our recommendations to Army, Navy, and the Office of the Secretary.
Of the 15 non-DOD agencies to which we made recommendations and which provided comments, 5 agreed, 4 partially agreed, 1 agreed in principle, and 5 neither agreed nor disagreed with our recommendations. The agencies' comments and our responses are summarized below. Unless otherwise noted, these agencies provided comment letters that are reproduced in appendixes XXVIII–XLI. SBA and Commerce also provided technical comments that we have incorporated, as appropriate. Commerce agreed with four of five parts of our recommendation relating to sections 15(k)(2), compensation/seniority; 15(k)(8), assign small business technical advisers; 15(k)(11), advise on in-sourcing; and 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete. Commerce noted that, in response, it intends to change the position of the OSDBU director to an SES position in fiscal year 2018; the OSDBU director is in the process of appointing small business technical advisers; and the agency has been updating the Commerce Acquisition Manual to address procedures for in-sourcing and unduly restrictive solicitations. However, Commerce disagreed with one part of our recommendation for section 15(k)(5), identify and address bundling of contract requirements. The agency stated that small business set-asides valued over $150,000 are subjected to a review and approval process that includes the bureau small business specialist, procurement center representative, OSDBU director, and sometimes the senior procurement executive. Commerce also said that when the review package does not indicate a bundling action, the small business specialist and OSDBU may investigate the possibility of bundling based on supporting documentation submitted with the review form. Commerce had not provided this information at the time of our review. Based on this new information, we are no longer including this part of our recommendation and have made the relevant changes in the report. The department's comments are reprinted in appendix XXVIII. DOD agreed with three of four parts of our recommendation to DLA relating to sections 15(k)(2), compensation/seniority; 15(k)(11), advise on in-sourcing; and 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete. It noted that DLA will continue to submit requests to elevate the OSDBU director to an SES position; future in-sourcing actions will be coordinated with the OSDBU as required; and, when notified by a small business of undue restriction on competition, the OSDBU will forward a copy of the notification to the DLA competition advocate as required. However, DOD disagreed with the part of the recommendation for section 15(k)(7), supervisory duties. The agency stated that the headquarters DLA OSDBU director supervises all employees in the headquarters OSDBU. The agency also stated that DLA is a relatively large agency comprising many subordinate field activities, each of which has a small business office, and a director in each of those offices supervises all the small business professionals within that activity. As noted in the report, a DLA OSDBU official explained that the OSDBU director appointed small business associates to work in the field, but did not directly supervise field staff.
Section 15(k)(7) requires that the OSDBU director have supervisory authority over agency personnel to the extent that the responsibilities of such personnel are related to the functions and duties implemented and executed by the OSDBU. We maintain our recommendation. DOD disagreed with our recommendation relating to section 15(k)(8), assign small business technical advisers, which we made separately to three agencies—the Department of the Army, Office of the Secretary of Defense, and Department of the Navy. It noted that the Defense Federal Acquisition Regulation Supplement delegates the authority to appoint small business technical advisers to the head of the contracting activity. As noted in the report, a DOD official stated that the law and regulation disagree on this requirement. However, when a statutory provision such as section 15(k) and regulations such as the acquisition regulation conflict, the statute controls. We maintain our recommendation. DOD also disagreed with our recommendation to the Office of the Secretary of Defense relating to section 15(k)(5), identify and address bundling of contract requirements. It noted that no contracting or bundling occurs at the level of the Office of the Secretary of Defense. DOD stated that contracting and bundling occurs in acquisitions conducted at the lower-level components of DOD. As noted in the report, the OSDBU director has an oversight role in relation to identifying proposed bundling, rather than an implementation role. However, section 15(k)(5) requires OSDBUs to identify proposed solicitations that involve significant bundling of contract requirements and, where appropriate, work with agency officials to mitigate the effects on small businesses. If DOD believes that the unique situation of this office warrants its demonstrated noncompliance with this provision, the agency should explain its demonstrated noncompliance to Congress and provide support for the agency’s views. Absent this, we maintain our recommendation. The department’s comments are reprinted in appendix XXIX. Education disagreed with our determination that the agency did not demonstrate compliance with section 15(k)(3), which requires the OSDBU director to report to the head of the agency or deputy head. Education stated that, as we reported, the former deputy secretary delegated the responsibility for the OSDBU director’s performance appraisal to the senior policy adviser. The agency also stated that its performance appraisals are done at two levels: the initial appraisal and the approval by a higher-level official. Education said that, in the case of the OSDBU director, the deputy secretary was the second level of approval. Education stated that, since January 2017, the position of deputy secretary has been vacant, and, as noted in the report, the director’s performance appraisal was signed by the senior policy adviser. However, for the two performance appraisals we reviewed, neither the agency head nor the deputy head signed these appraisals as required by section 15(k)(3). We maintain our recommendation. Education did not explicitly agree or disagree with our recommendation on section 15(k)(11), advise on in-sourcing. Education said that, due to limited OSDBU resources, the agency delegated the responsibility to review in-sourcing to another office. Education stated that, given anticipated budget reductions, the agency would evaluate how best to implement section 15(k)(11). 
As noted in the draft report, section 15(k)(11) requires that the person heading the OSDBU review and advise the agency on any decision to convert an activity performed by a small business to an activity performed by a federal employee. Therefore, we maintain our recommendation. The department's comments are reprinted in appendix XXX. Energy agreed with our recommendation relating to sections 15(k)(3), reporting requirement (head of the agency or deputy head); 15(k)(8), assign small business technical advisers; and 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete. The agency stated that the OSDBU director's performance appraisals will be completed by the secretary or deputy secretary; the OSDBU will appoint at least one small business technical adviser for each business line (three in total); and the OSDBU will complete a data call to its small business program managers to determine if there have been any undue restrictions on small businesses. We note that completing a data call to retrospectively look at prior instances of undue restrictions will not address the section 15(k)(17) requirement, which requires the OSDBU director to respond to concerns of undue restriction on an ongoing basis. The agency estimated that the actions will be completed by September 30, 2017. The department's comments are reprinted in appendix XXXI. HUD did not state whether it agreed or disagreed with our recommendation. In an e-mail, the senior small business utilization specialist in HUD's OSDBU stated that the department did not have additional comments on the draft report. The official noted the two deficiencies we cited (the director's experience and the OSDBU's involvement with in-sourcing decisions) and reiterated the OSDBU director's statement that her previous experience prepared her well for the OSDBU director position. The official further stated that the OSDBU director had discussed developing policy for OSDBU involvement with in-sourcing decisions. As we reported, section 15(k) lists specific prior experiences that the OSDBU director did not have. We maintain our recommendations. Interior agreed with our two-part recommendation related to sections 15(k)(11), advise on in-sourcing, and 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete. The agency stated that, for section 15(k)(11), its OSDBU will implement procedures to involve the OSDBU director in in-sourcing decisions that affect small business concerns, and for section 15(k)(17), the OSDBU will implement procedures for responding to notifications of undue restrictions on the ability of small businesses to compete. The department's comments are reprinted in appendix XXXII. Labor stated that it neither agreed nor disagreed with our two-part recommendation relating to sections 15(k)(2), compensation/seniority of the OSDBU director, and 15(k)(15), collateral duties of the OSDBU director. The agency noted that it is committed to reviewing its compliance with the relevant statutes. We maintain our recommendation to Labor, based on the agency not demonstrating compliance with sections 15(k)(2) and 15(k)(15). The department's comments are reprinted in appendix XXXIII.
State agreed with the part of the recommendation on section 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete, and disagreed with the part of the recommendation on section 15(k)(8), assign small business technical advisers. For section 15(k)(17), State said that its OSDBU will affirm internal policy to refer all claims of unduly restricting the ability of a small business to compete, regardless of their resolution at a lower level, to the agency’s competition advocate. This is consistent with what we recommended. For section 15(k)(8), the agency asserted that it is currently in compliance, noting that it assigns small business technical advisers at the department level and that staffing each of its 46 bureaus with a full-time qualified small business technical adviser would be impractical, inefficient, and unnecessary. As noted in our report, State’s OSDBU director assigns a small business technical adviser as a full-time employee of the OSDBU, rather than of the procuring activity, which is not consistent with the section 15(k)(8) requirement. We maintain our recommendation to State regarding section 15(k)(8). The department’s comments are reprinted in appendix XXXIV. Treasury did not agree or disagree with our two-part recommendation related to sections 15(k)(8), assign small business technical advisers, and 15(k)(11), advise on in-sourcing. In an attachment to an e-mail, Treasury stated that the authority to appoint a small business technical adviser (termed small business specialist at Treasury) was delegated to the chief procurement officer for a bureau. As noted in the report, an agency official stated that the OSDBU does not assign a small business specialist to each of its bureaus. Section 15(k)(8) requires the OSDBU director to assign a small business technical adviser to each office to which SBA has assigned a procurement center representative. We maintain our recommendation. Treasury also stated that, if conversions from private to federal performance occurred, the department’s human resources office would coordinate this action with the OSDBU and the acquisition office. The department plans to formally incorporate the small business provision in its workforce planning guidance and develop and document procedures for in-sourcing review as part of the OSDBU’s effort to develop a Human Capital Workforce Planning process. VA agreed with the part of our recommendation related to section 15(k)(11), advise on in-sourcing, and concurred in principle with the other two parts of the recommendation related to sections 15(k)(3), reporting requirement (head of the agency or deputy head), and 15(k)(8), assign small business technical advisers. For section 15(k)(11), VA stated that the agency’s OSDBU has drafted language for its review policy for procurements to address this requirement. VA plans to implement the revised policy in fiscal year 2018. For section 15(k)(3), VA asserted that it is in compliance with this requirement but acknowledged that its chief of staff is the rating official for the OSDBU director. VA further noted that the deputy secretary is the reviewing official and the secretary the appointing official. VA said that federal law, regulation, and the VA handbook on the performance appraisal system require this separation of duties and roles. 
VA also said that, while the chief of staff prepares and signs the initial summary rating and performance appraisal, the appraisal is subject to review by the secretary and deputy secretary. VA stated that removing the chief of staff from the performance appraisal process would require merging some of the aforementioned duties into the same person, eliminating the independent reviews required by law and regulation. However, as we note in appendix II, section 15(k)(3) requires that the OSDBU director report exclusively to the agency’s secretary or deputy secretary, including with respect to performance appraisals. Therefore, we maintain our recommendation. VA also said that it will report to Congress on the reasons for its current reporting structure for the OSDBU director, with a target completion date of September 30, 2017. For the part of the recommendation related to section 15(k)(8), VA acknowledged the value of contract activities having a knowledgeable on-site small business technical adviser who is able to collaborate with the procurement center representative and noted that, to the extent that the adviser addresses matters within the OSDBU’s responsibility, it is essential for the OSDBU to provide guidance and direction. However, the agency stated that the requirement for the OSDBU to select an employee of the contracting activity and direct that person’s principal work efforts to assist the procurement center representative requires an unusual degree of matrixed reporting relationships and will entail a high level of collaboration with the contracting activity’s leadership. VA said that its OSDBU will seek to collaborate with the cognizant contracting activities through VA’s Senior Procurement Council and develop a memorandum of understanding outlining roles and responsibilities. VA stated that its target completion date for the memorandum is September 30, 2017 (so as to go into effect at the start of fiscal year 2018). We maintain our recommendation, as VA’s comments do not make it clear if the OSDBU director will assign a small business technical adviser to the procuring activity or if the assigned staff would be a full-time employee of this activity. The department’s comments are reprinted in appendix XXXVI. EPA did not say whether it agreed or disagreed with our recommendation relating to requirements for section 15(k)(15), collateral duties. As we noted in the report, EPA’s OSDBU director oversees two EPA-wide programs, the Disadvantaged Business Enterprise Program and the Asbestos and Small Business Ombudsman Program. EPA stated that we mischaracterized provisions of the Clean Air Act of 1990 (as requiring EPA’s OSDBU director to serve as the ombudsman for the Asbestos and Small Business Ombudsman Program) as part of our determination of whether EPA demonstrated compliance. EPA stated that, instead, the act requires that the relevant programs be monitored through the ombudsman (not the OSDBU director). The agency stated that EPA appointed an official other than the OSDBU director to serve as the ombudsman and that our report should be revised to correctly indicate that the OSDBU director does not hold the ombudsman position. Based on EPA’s comments, we removed references in the report to the Clean Air Act requirements being inconsistent with section 15(k)(15) requirements. We also made it clear that the OSDBU, through the program ombudsman, monitors the activities of the Asbestos and Small Business Ombudsman Program. 
As noted in the report, we did not consider the provisions of the Clean Air Act and the corresponding responsibilities of the OSDBU as a factor for our assessment of demonstrated compliance with section 15(k)(15). EPA also stated that the Disadvantaged Business Enterprise Program is structured so that the OSDBU director does not serve as the program manager or carry out the day-to-day programmatic responsibilities in contravention of section 15(k)(15). However, as we noted in the report, the OSDBU director oversees the Disadvantaged Business Enterprise Program. This is inconsistent with section 15(k)(15), which requires that the OSDBU director not hold responsibilities except as necessary to carry out responsibilities under section 15(k). Thus, we maintain our recommendation to EPA. The agency's comments are reprinted in appendix XXXVII. NASA partially agreed with our recommendation related to section 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete. The agency agreed that, as our report supports, it is currently in compliance with two required steps under section 15(k)(17), subparagraphs (A) and (C), but not with the third required step, subparagraph (B), which requires the OSDBU director to inform the agency's advocate for competition when notified by a small business of a solicitation that unduly restricts its ability to compete. But NASA added that it believes that the most practical and effective way to address such notifications is for the OSDBU, in consultation with the contracting officer, to resolve issues at the lowest level possible. However, to comply with the statute, the agency said that the OSDBU will begin notifying the advocate for competition. The agency also said that the OSDBU will notify the cognizant Center-level competition advocate. NASA said that the OSDBU, in coordination with the agency's Office of Procurement, intends to issue formal correspondence to the acquisition community on the new procedures within 6 months and that it plans to begin carrying out the new procedures the next time it receives a notification of an unduly restrictive solicitation. The agency's comments are reprinted in appendix XXXVIII. OPM agreed with one part of our recommendation and partially agreed with two parts of the recommendation. OPM concurred with the part of the recommendation regarding section 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete. In response, OPM said it prepared draft guidance (standard operating procedures) on the topic that would address the issue of communicating such notices to the agency's advocate for competition. OPM said that it is currently reviewing the guidance. OPM partially agreed with the part of the recommendation regarding section 15(k)(2), compensation/seniority of the OSDBU director. The agency stated that, at the time it became a requirement for the OSDBU director to be a member of the SES, the OSDBU director held a General Schedule position (GS-15). OPM said that the current nominee for OPM director, if confirmed, will evaluate and take appropriate action to comply or report to Congress on why the agency has not complied, including, if appropriate, seeking statutory flexibility or an exception. OPM also partially concurred with the part of our recommendation related to section 15(k)(8), assign small business technical advisers.
OPM stated that we did not take into account that the OSDBU has two staff members qualified to work with the procurement center representative, each of whom spends 50 percent of their time on that work, which OPM said equates to full-time coverage. We maintain our recommendation, as section 15(k)(8) requires that the technical adviser be a full-time employee of the procuring activity. The agency's comments are reprinted in appendix XXXIX. SSA agreed with our recommendation relating to sections 15(k)(2), compensation/seniority; 15(k)(3), reporting requirement (head of the agency or deputy head); 15(k)(6), provide assistance on payments; 15(k)(8), assign small business technical advisers; 15(k)(11), advise on in-sourcing; and 15(k)(15), collateral duties. SSA summarized the actions it has taken or plans to take in response. For sections 15(k)(2) and (k)(3), SSA stated that, given the OSDBU director's duties and responsibilities and the agency's small size and structure, it intended to explore obtaining an exception to keep the director position at the GS-15 level and an exception to the reporting requirement. For section 15(k)(6), SSA said it will refer small businesses seeking assistance with payments to the OSDBU director. For section 15(k)(8), SSA said its OSDBU will officially assign a small business technical adviser to the relevant office. For section 15(k)(11), SSA noted an existing analysis it performs of contractor functions, which helps ensure that SSA takes appropriate steps to guard against improper reliance on contractors and that contractor personnel do not perform inherently governmental functions. SSA said any proposed in-sourcing based on the analysis would be discussed with the component, the Office of Acquisition and Grants, and the OSDBU if warranted. Finally, for section 15(k)(15), SSA said that it would delegate coordinating responsibilities for the Electronic Subcontracting Reporting System from the OSDBU director to its small business technical adviser. The agency's comments are reprinted in appendix XL. USAID agreed with our recommendation relating to sections 15(k)(15), collateral duties, and 15(k)(17), respond to a notification of an undue restriction on the ability of a small business to compete. For the part of the recommendation relating to section 15(k)(15), collateral duties, USAID said it would not gain efficiencies by moving responsibility for the Minority Serving Institutions (MSI) program from the OSDBU. Instead, it would explore requesting statutory flexibility or an exception to allow the OSDBU director to continue to advocate for the MSI program. In response to the other part of the recommendation relating to section 15(k)(17), USAID said the OSDBU director simultaneously will notify the advocate for competition, contracting officer, and ombudsman in instances in which a notice falls within the parameters of section 15(k)(17). The agency's comments are reprinted in appendix XLI. In its comment letter, SBA agreed with our recommendation to include more detailed guidelines for the SBPAC peer review to facilitate a more in-depth review of agencies' compliance with section 15(k) provisions. The agency said that it has begun to implement the recommendation for fiscal year 2017 and that it has been developing more detailed guidelines that provide more objective criteria than the current guidelines, such as indicating whether agencies comply with the 21 section 15(k) requirements.
The agency also stated that the new peer review process will count for a higher percentage of each agency's overall scorecard grade (an increase from 10 percent to 20 percent). SBA said that the changes will be implemented for the fiscal year 2017 peer review. The agency's comments are reprinted in appendix XXXV. SBA provided additional comments, which the agency identified as technical comments, in an e-mail from the program manager, GAO liaison, Office of Congressional and Legislative Affairs. While these comments did not address our recommendation to SBA, in some instances they appeared to question our approach and the findings that provided a basis for our conclusions and recommendation. In particular, the comments stated that there is little value in a comparison of SBA's success factor peer review (we refer to this in the report as the SBPAC peer review) with the requirement for SBA to conduct a full peer review of all of the requirements in section 15(k), given that the success factors were developed before the 2013 statutory requirement was put in place. In addition, the comments questioned the relevance of our findings for agency compliance with its peer review process. However, as we state in the report, the SBPAC peer review assesses compliance with certain section 15(k) requirements, particularly the "organization" success factor focusing on five section 15(k) provisions. Our analysis focuses primarily on the methods SBA used and the guidance it provided for assessing OSDBUs' compliance with these section 15(k) requirements. We also note that the documentation SBA provided on its plans for the revised process suggests that similar methods to assess compliance will be used in the new process as under the current process. As in the current process, the documentation indicates that compliance determinations will be made based on a review of documents agencies voluntarily submit, rather than on a more in-depth assessment. For these reasons, we maintain that our discussion of the success factor peer review is relevant when considering how SBA may implement the new review process. In addition, our analysis does not equate our findings for agency compliance with those of the success factor peer review. Rather, it examines the alignment of the results. This allows for a valid assessment of whether the scores generally correspond with our findings. Our recommendation is intended to help ensure that SBA implements a more robust approach to assessing section 15(k) compliance through the SBPAC peer review, as compared to the success factor peer review. SBA stated that our report will help inform the structure of the new peer review checklist being developed. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman and Ranking Member of the House Committee on Small Business and other interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or by e-mail at shearw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XII.
We reviewed practices of the Offices of Small and Disadvantaged Business Utilization (OSDBU) at 24 agencies for carrying out the requirements of the Small Business Act. More specifically, we examined (1) the extent to which selected federal agencies with procurement powers demonstrated compliance with five requirements of section 15(k) relating to the OSDBU director (including reporting relationships, qualifications, and supervisory duties); (2) the extent to which selected federal agencies demonstrated compliance with eight section 15(k) requirements for carrying out selected OSDBU functions or activities; and (3) the Small Business Procurement Advisory Council review of OSDBU compliance with section 15(k) requirements. To determine which federal agencies to include in our review, we reviewed fiscal year 2015 data from the Federal Procurement Data System-Next Generation. Government agencies are responsible for collecting and reporting data on federal procurements through this data system. (These were the most recent data available at the time of our review.) Using these data, we selected 23 agencies that each procured more than $900 million in goods and services in fiscal year 2015, accounting for 87 percent of all federal contracting obligations. Among the 23 agencies, 4 agencies were within the Department of Defense (DOD)—the Departments of the Air Force, Army, and Navy and the Defense Logistics Agency (DLA). Together, these 4 DOD components were responsible for 88 percent of DOD's contracting obligations. We also selected 1 additional agency, the DOD Office of the Secretary, due to its role as a policy office within DOD. Thus, we selected 24 agencies in total. The 24 agencies in our review are listed below. The 10 agencies shown in italics were assessed for the section 15(k)(3) requirement about the OSDBU director reporting to the head or deputy head of the agency.

1. Defense Logistics Agency
2. Department of Agriculture
3. Department of the Air Force
4. Department of the Army
5. Department of Commerce
6. Department of Defense – Office of the Secretary
7. Department of Education
8. Department of Energy
9. Department of Homeland Security
10. Department of Housing and Urban Development
11. Department of the Interior
12. Department of Justice
13. Department of Labor
14. Department of the Navy
15. Department of State
16. Department of Transportation
17. Department of the Treasury
18. Department of Veterans Affairs
19. Environmental Protection Agency
20. General Services Administration
21. National Aeronautics and Space Administration
22. Office of Personnel Management
23. Social Security Administration
24. U.S. Agency for International Development

See appendixes III–XXVII for our determinations of overall demonstrated compliance with section 15(k) requirements and for our determinations of demonstrated compliance at each agency. We selected the following 14 requirements of section 15(k) for our review, but focused only on 13 in our discussion of individual agencies' demonstrated compliance. As discussed in this report, we evaluated demonstrated compliance with the section 15(k)(16) requirement for agencies to submit an annual training and travel report to Congress. During the course of our review, the Small Business Administration (SBA) established new procedures to submit a consolidated report to Congress, rather than having each agency submit an individual report. Due to the new procedures, we do not include this requirement in the summary tables or agency appendixes.
See appendix II for more information on the requirements.

15(k): Director experience
15(k)(2): Compensation/seniority
15(k)(3): Reporting requirement (head of agency or deputy head)
15(k)(5): Identify and address bundling of contract requirements
15(k)(6): Provide assistance on payments
15(k)(7): Supervisory duties
15(k)(8): Assign small business technical advisers
15(k)(11): Advise on in-sourcing
15(k)(12): Provide advice to chief acquisition officer and senior procurement executive
15(k)(13): Provide training
15(k)(14): Receive unsolicited proposals and forward them when appropriate
15(k)(15): Collateral duties
15(k)(16): Submit training reports to Congress
15(k)(17): Respond to notification of an undue restriction on ability of small business to compete

We focused our review on whether agencies had demonstrated compliance with each of these requirements. While we could not determine the compliance status with certainty, our approach allowed for a sufficiently reliable measure of demonstrated compliance, since it relied on self-reported accounts of compliance through interviews and survey responses, documentary evidence of compliance through agency materials and documents, or both. Specifically, categorizing an agency as demonstrating compliance with a section 15(k) requirement required evidence of compliance in our review of documents, interview materials, and/or questionnaire responses. In cases in which supporting documentation was not available, we made the determination based solely on the agency's survey response and/or follow-up with agency officials. To assess whether the OSDBU director reports directly to the agency head or the deputy head, as generally required by section 15(k)(3) of the Small Business Act, we focused on 6 agencies with major contracting activity (greater than $10 billion in obligations) and 4 agencies with contracting activity under $10 billion. These 10 agencies were the Departments of Education, Energy, Labor, State, Air Force, Army, Navy, and Veterans Affairs; the National Aeronautics and Space Administration; and the Social Security Administration. We considered agencies to demonstrate compliance if the designated OSDBU directors exercised the OSDBU responsibilities, if they reported directly to and were responsible only to the agency head or the agency head's deputy, and if these officials signed the director's performance appraisals. To determine compliance, we reviewed organization charts to identify where the OSDBU was situated in relation to the agency head or deputy head; OSDBU directors' performance appraisals for the previous 2 years to identify the agency official(s) who evaluated the OSDBU director's performance; the position description of the OSDBU director to identify the OSDBU director's supervisor; and other agency documents, such as reports and memoranda, discussing the agency's small business programs. We also interviewed the designated OSDBU directors at each agency to identify the official(s) to whom they had reported during the past year and asked them to provide information characterizing the reporting relationship, such as the extent to which small business issues were discussed. In addition, we reviewed and analyzed section 15(k)(3). We surveyed OSDBU directors at the 24 agencies about the other section 15(k) requirements relating to OSDBU directors (such as rank and responsibilities) and about OSDBU functions.
We reviewed available documents, such as policy statements issued by agency leadership on OSDBU practices or small business efforts, small business manuals or operating plans, and guidance and reports, when available. We also interviewed the designated OSDBU directors and other officials at each agency to discuss the extent to which they carry out each of the requirements. To obtain information on the functions performed by OSDBUs and actions the offices took to further small business contracting opportunities, we surveyed the OSDBU directors at 24 federal agencies using a web-based survey. The survey asked the OSDBU directors about their roles and functions. In this survey, we focused on seven areas: acquisition planning, solicitation development, proposal evaluation, obtaining payments, training, interaction with SBA, and other functions. The survey questions covered certain OSDBU functions listed in section 15(k) of the Small Business Act. To obtain data comparable with those from the 2011 survey of OSDBU directors, our survey instrument listed questions and response choices similar to those in the 2011 survey. Updates to the 2011 survey included adding some new questions, reordering a few questions, and deleting several questions that were no longer relevant. We obtained input from GAO experts on survey design. We also pretested the survey instrument with two OSDBU directors to help ensure that the questions would be correctly interpreted by respondents. Agency officials, including the OSDBU directors, were notified about the survey before it was launched on November 1, 2016. The survey closed on February 24, 2017. We had a 100 percent response rate. We conducted follow-up with OSDBU directors to clarify their responses and to obtain additional information in instances in which they indicated they did not perform a section 15(k) requirement. The purpose of the follow-up was to determine which office, if not the OSDBU, carried out these functions at their agency, to collect answers from OSDBU directors who did not provide them initially, or to determine why the OSDBU did not carry out a specific function. To do this, we conducted interviews with OSDBU directors. A few agencies also provided written responses to our follow-up questions. We reviewed documentation and data related to SBA's scorecard for small business procurement and the peer review process of the Small Business Procurement Advisory Council (SBPAC) and spoke with SBA officials about these processes. We compared SBA's "OSDBU organization" success factor scores to the compliance information we obtained from our review to determine whether they correlated with our compliance determinations. SBPAC is an interagency council chaired by SBA, and its members are mainly OSDBU directors. SBPAC annually reviews each OSDBU to determine compliance with certain OSDBU functions pertaining to section 15(k). These reviews are used to help determine SBA's annual scorecard grade for each agency. We conducted our work from May 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Section 15(k) of the Small Business Act, as amended, requires each Office of Small and Disadvantaged Business Utilization (OSDBU) and each OSDBU director to meet certain requirements. We selected 14 requirements from section 15(k) for closer review. This appendix details each of those requirements and, where appropriate, elaborates on how we determined demonstrated compliance with each section. Listed below are the section 15(k) requirements we assessed to determine the extent to which the 24 agencies in this review demonstrated compliance.

Appendix III: Overall Agencies' Demonstrated Compliance with Select Section 15(k) Requirements

We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Defense Logistics Agency (DLA) demonstrated compliance with 8 of the 12 section 15(k) requirements within our review (see table 5). The agency did not demonstrate compliance with 4 requirements regarding compensation/seniority of the OSDBU director, supervisory duties of the director, providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing), and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(2), compensation/seniority: The survey response and the position description indicated that the director held a General Schedule position (GS-15). According to the survey, the chief acquisition officer and senior procurement executive are Senior Executive Service positions. In a follow-up meeting, agency officials stated that the agency has requested that the Department of Defense seek Congressional approval to authorize a new Senior Executive Service position for the OSDBU director. The officials stated that the agency has been waiting for authorization to make this change. 15(k)(7), supervisory duties: An agency official stated that the office provides policy and program oversight. The official explained that the director appoints small business associates to work in the field, but does not directly supervise field staff. Field staff report to deputy commanders at their sites. 15(k)(11), advise on in-sourcing: An agency official stated that the director does not review decisions on the conversion of activities from performance by a small business to performance by a federal employee. Instead, the agency's human resource office performs that function in consultation with the acquisition office. The official also stated that the director does not view in-sourcing as negatively affecting small businesses but focuses on how to align the resources to best fulfill the assigned mission. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: An agency official explained that the director works to ensure such notifications are resolved as quickly as possible, which requires working with key staff at the operational level in the field. The official further stated that the director did not think there was a need to notify the agency advocate for competition unless an agency practice needed to change. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the U.S.
Department of Agriculture (USDA) demonstrated compliance with 9 of the 12 section 15(k) requirements within our review (see table 6). The department did not demonstrate compliance with 3 requirements regarding compensation/seniority of the OSDBU director, collateral duties of the director, and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(2), compensation/seniority: At the time of our review, the current OSDBU director was an acting director. In cases in which an agency’s OSDBU director was an acting director, we assessed compliance for section 15(k)(2) based on the seniority and compensation of the immediately prior permanent director. OSDBU officials explained that historically, the permanent OSDBU director was a political appointee holding a Senior Executive Service (SES) position. However, the prior director was a political appointee holding a General Schedule position (GS-15 level). The officials explained that the position was temporary (6 months) and it would have been difficult to fill the position with a member of the SES on a short-term basis. 15(k)(15), collateral duties: The current OSDBU director holds the position in an acting capacity because there has not been a new political appointee, and he also holds the title of acting assistant secretary for administration. USDA officials stated that they did not know when a new OSDBU director would be appointed, but they expected that person to exclusively hold the OSDBU director position. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: OSDBU officials told us that the OSDBU would work to resolve such an issue at the lowest level. The OSDBU director and the small business technical adviser would work on the issue with the contracting office to give recommendations. The agency advocate for competition would only be notified if the issue could not be resolved at a lower level. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of the Air Force demonstrated compliance with all 13 of the 13 section 15(k) requirements within our review (see table 7). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of the Army (Army) demonstrated compliance with 12 of the 13 section 15(k) requirements within our review (see table 8). The department did not demonstrate compliance with 1 requirement regarding assigning small business technical advisers. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(8), assign small business technical advisers: An agency official stated that Army personnel providing small business technical advice are assigned to their position by the procuring activity offices and not by the OSDBU. These personnel possess technical knowledge of the procuring activity and provide technical advice to the procurement center representatives on contracting matters. 
The official also stated that for issues involving particularly complex technical areas, the OSDBU will form a team with the appropriate staff to provide advice to the procurement center representative. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Commerce (Commerce) demonstrated compliance with 8 of the 12 section 15(k) requirements within our review (see table 9). The department did not demonstrate compliance with 4 requirements regarding compensation/seniority of the OSDBU director, assigning small business technical advisers, providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing), and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(2), compensation/seniority: The survey response indicated that the OSDBU director held a General Schedule position (GS-15 level) and the chief acquisition officer and senior procurement executive were Senior Executive Service (SES) positions. In a written response, the agency stated that it has begun discussions to elevate the OSDBU director position to an SES level. 15(k)(8), assign small business technical advisers: A policy document provided by the agency indicates an OSDBU process for assigning small business technical advisers; however, agency officials told us that the head of each bureau procurement office, rather than the OSDBU director, is the official who appoints technical advisers. 15(k)(11), advise on in-sourcing: The survey response indicated that providing advice on in-sourcing was not an OSDBU role. In a written response, the agency stated that it has been developing a review and advisory process on in-sourcing decisions. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: The survey response indicated that the OSDBU had not received any such notifications in the past 3 years. In its written response, the agency stated that it has been developing an agency policy that would include procedures for addressing notifications by small businesses concerning solicitations that have been issued. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Defense (DOD) – Office of the Secretary demonstrated compliance with 10 of the 12 section 15(k) requirements within our review (see table 10). DOD – Office of the Secretary did not demonstrate compliance with 2 requirements regarding identifying and addressing significant bundling of contract requirements and assigning small business technical advisers. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(5), identify and address bundling of contract requirements: According to the survey response and policy documentation, the OSDBU provides more of an oversight role than a direct implementation role. In an interview, OSDBU officials said that identification and mitigation of bundling typically occurs at the local level of the contracting office.
In instances in which a small business notified the OSDBU that bundling had occurred, the OSDBU would report it to the contracting office. The OSDBU also oversees bundling activity; for example, checking that bundled contracts are coded correctly. 15(k)(8), assign small business technical advisers: The survey response indicated that the OSDBU director has not assigned small business technical advisers to each office in which the Small Business Administration has a procurement center representative. In a follow-up meeting, an agency official explained that the OSDBU does not have the resources to assign small business technical advisers. The contracting officer helps determine the need for a small business technical adviser on a case-by-case basis. According to OSDBU officials, the Department of Defense has about 700 small business professionals (generally known at other agencies as small business technical advisers). The small business professionals coordinate their work with the contracting office, but these staff reside in the small business office. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Education (Education) demonstrated compliance with 11 of the 13 section 15(k) requirements within our review (see table 11). The department did not demonstrate compliance with 2 requirements regarding reporting to the head or deputy head of the agency and providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(3), reporting requirement (head of agency or deputy head): Information provided by Education officials indicated that the OSDBU director reports to the senior policy adviser (who is also the rating official). An agency official explained that the previous deputy secretary delegated the duties and functions to the senior policy adviser. The director’s performance appraisal was signed by the senior policy adviser. According to the official, in the past, the director has typically met with the deputy secretary on a monthly basis and provided updates on small business activities. At the time GAO completed its review, Education had no deputy secretary. 15(k)(11), advise on in-sourcing: According to the survey and follow-up response, advising on in-sourcing is not a function of the OSDBU and an OSDBU official could not identify any instances in which the OSDBU would be involved in this activity. An OSDBU official explained that this responsibility was delegated to the Office of Contract Operations because of limited OSDBU resources. The official stated that the agency understands that this requirement must be fulfilled by the OSDBU director and it has been developing a policy and procedures to address this responsibility. An OSDBU official said that the agency’s Office of General Counsel would have to review and concur with the new policy and procedures. The agency’s goal is to have a new policy approved for fiscal year 2018. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Energy (Energy) demonstrated compliance with 10 of the 13 section 15(k) requirements within our review (see table 12). 
The department did not demonstrate compliance with 3 requirements regarding reporting to the head or deputy head of the agency, assigning small business technical advisers, and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(8), assign small business technical advisers: According to an agency official, the small business technical advisers are appointed within the OSDBU. The official stated that the small business technical advisers do not report directly to the contracting office but are subject matter experts within the OSDBU who work with specific business lines such as science and energy. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: An agency official stated that on receiving a complaint of this nature, the director would investigate the situation and, if needed, elevate the complaint along the contracting chain of command. For instance, the progression would be to reach out to the small business, then to the agency's contracting office to obtain additional perspective, and if needed, to notify the agency advocate for competition. The official also said that the director would share information with the small business about available resources. The official stated that the director might not carry out all of these steps if the situation was resolved earlier in the process. The official said that the current OSDBU director had held the position since January 2017 and had not yet encountered this situation. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Homeland Security demonstrated compliance with all 12 of the 12 section 15(k) requirements within our review (see table 13). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Housing and Urban Development (HUD) demonstrated compliance with 10 of the 12 section 15(k) requirements within our review (see table 14). The department did not demonstrate compliance with 2 requirements regarding the prior experience of the OSDBU director and providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k), director experience: Agency officials stated that the director has had a long work history in a variety of jobs and felt that the director was prepared for the role of the OSDBU director. 15(k)(11), advise on in-sourcing: An OSDBU official told us that the OSDBU does not receive notices of in-sourcing proposals. According to the agency's survey response, the Office of the Chief Procurement Officer would review and advise the agency on decisions to convert an activity (to performance by a federal employee). An agency official stated that the OSDBU director plans to pursue discussions within the agency about a policy to address the OSDBU's involvement with in-sourcing decisions. A written response provided by HUD officials also indicated that the agency plans to examine its policy and ensure compliance with this requirement.
We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of the Interior (Interior) demonstrated compliance with 10 of the 12 section 15(k) requirements within our review (see table 15). The department did not demonstrate compliance with 2 requirements regarding providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing) and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(11), advise on in-sourcing: The survey response indicated that this was not an OSDBU role. In a follow-up meeting, an agency official stated that the OSDBU does not have a formal process for giving advice on in-sourcing and could not recall the OSDBU being involved with any in-sourcing decisions. The official did not think that in-sourcing happens very often but said that there may be instances of which the official is not aware. The official further stated that if a small business is affected, the OSDBU would be consulted when relevant issues arise. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: An agency official stated that the OSDBU director resolves these issues at lower levels and that this approach works at Interior. The official explained that each bureau at Interior has advocates for competition and a bureau chief, who is a senior expert on contracting for that bureau. The OSDBU director will reach out to the bureau advocates and chiefs to obtain information on a specific issue. The official considers this to be the best place to identify the details of the undue restriction. However, the official stated that there may be other benefits to informing the primary agency advocate for competition, such as attempting to see broad trends within the agency rather than remedying an individual situation. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Justice demonstrated compliance with all 12 of the 12 section 15(k) requirements within our review (see table 16). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Labor (Labor) demonstrated compliance with 11 of the 13 section 15(k) requirements within our review (see table 17). The department did not demonstrate compliance with 2 requirements regarding compensation/seniority and collateral duties of the OSDBU director. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(2), compensation/seniority: At the time of our review, the current OSDBU director was an acting director. In cases in which an agency's OSDBU director was an acting director, we assessed compliance for section 15(k)(2) based on the seniority and compensation of the immediately prior permanent director.
In follow-up correspondence, a Labor staff member indicated that the prior permanent OSDBU director held a presidentially appointed, Senate-confirmed position compensated under an executive schedule. The position is not a Senior Executive Service (SES) position. This type of appointment does not meet the statutorily defined SES position requirements. 15(k)(15), collateral duties: According to agency officials, the acting OSDBU director holds other positions and titles, including assistant secretary for administration and management and chief acquisition officer. When the position is permanently filled, the OSDBU director will hold the position of the assistant secretary for administration and management. The officials referenced a March 2010 department order, which explains that the agency realigned the small business-related functions under the assistant secretary for administration and management to better integrate small business outreach and small business procurement within the overall procurement function of the department. According to the officials, the assistant secretary for administration and management was appointed to simultaneously serve as the OSDBU director. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of the Navy (Navy) demonstrated compliance with 12 of the 13 section 15(k) requirements within our review (see table 18). The department did not demonstrate compliance with 1 requirement regarding assigning small business technical advisers. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(8), assign small business technical advisers: The survey response indicated that the OSDBU director does not assign small business technical advisers. In a follow-up discussion, an agency official stated that section 15(k) of the Small Business Act and the Defense Federal Acquisition Regulation Supplement (DFARS) differ in relation to how this function is to be carried out. The official explained that the DFARS delegates the responsibility of hiring technical advisers to the head of contracting. The official added that delegating this activity to the contracting office is effective and that the OSDBU is responsive to this office. The official also explained that the OSDBU is a policy-level office and does not have the staffing to oversee this activity. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of State (State) demonstrated compliance with 11 of the 13 section 15(k) requirements within our review (see table 19). The department did not demonstrate compliance with 2 requirements regarding assigning small business technical advisers and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(8), assign small business technical advisers: The OSDBU director does not assign a small business technical adviser as a full-time employee of the procuring activity, but rather oversees small business technical advisers as employees of the OSDBU. There is 1 procurement center representative assigned to State, who covers all 46 bureaus of the department.
When a question arises for the procurement center representative, the OSDBU director assigns a technical adviser to work (as needed) with the representative at the bureau procurement office. An agency official stated that the director would like to assign small business specialists to each of the major bureaus at State, but resource constraints represent a significant barrier. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: Officials at State told us that the OSDBU interacts with a wide range of small businesses and that, in certain cases, the OSDBU would respond to a notification of an undue restriction on the ability of a small business to compete by completing the steps detailed in all three subsections of this requirement. In other instances, however, the OSDBU would only partially follow the subsections. For example, the OSDBU would not inform the agency's advocate for competition when officials believed doing so was not warranted. The officials stated that in these cases there was no need for the OSDBU to inform the agency's advocate for competition because the situation the small business raised was resolved at lower levels in the acquisition process. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Transportation demonstrated compliance with all 12 of the 12 section 15(k) requirements within our review (see table 20). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of the Treasury (Treasury) demonstrated compliance with 10 of the 12 section 15(k) requirements within our review (see table 21). The department did not demonstrate compliance with 2 requirements regarding assigning small business technical advisers and providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(8), assign small business technical advisers: An agency official stated that the OSDBU does not assign a small business technical adviser (termed a small business specialist at Treasury) to each of its bureaus. The official explained that it is important for bureau managers to assign technical advisers. The OSDBU provides input on the small business expertise the appointee holds. 15(k)(11), advise on in-sourcing: Based on the survey response and follow-up discussion, an agency official indicated that this was not an OSDBU role. The official added that this activity falls under the human resources area. However, the official was not certain whether human resources personnel would consult with the OSDBU, as in-sourcing does not happen often. The official also referenced an Office of Management and Budget letter saying that in these cases, the OSDBU should be notified, but this guidance had not been incorporated into Treasury's policy.
We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Department of Veterans Affairs (VA) demonstrated compliance with 10 of the 13 section 15(k) requirements within our review (see table 22). The department did not demonstrate compliance with 3 requirements regarding reporting to the head or deputy head of the agency, assigning small business technical advisers, and providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(3), reporting requirement (head of agency or deputy head): An agency official stated that the director reports to both the deputy secretary and the chief of staff, and the chief of staff signs the director's performance appraisals. The official added that, if there is a matter requiring the attention of the secretary, the OSDBU director will first advise the chief of staff. The official did not know why the secretary and deputy secretary do not sign the director's performance appraisals, but the official believes the director has adequate access to both the secretary and deputy secretary through the chief of staff. 15(k)(8), assign small business technical advisers: According to an agency official, the director does not assign OSDBU personnel to the procuring activity, and the official believes the procuring office should perform this role. VA has 46 small business liaison officers (the term VA uses for personnel performing the role of small business technical advisers). The official stated that the small business liaison officers are full-time employees and well-qualified, but that their principal duty is not to assist the procurement center representatives. 15(k)(11), advise on in-sourcing: An agency official stated that it is not common for agency personnel to send the OSDBU information when they are considering in-sourcing of activities. The official added that if a decision were made to convert an activity, the agency would not submit the contract for re-competition. The official said that a draft policy (under development) will state that the OSDBU must be notified of potential in-sourcing. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Environmental Protection Agency (EPA) demonstrated compliance with 11 of the 12 section 15(k) requirements within our review (see table 23). The agency did not demonstrate compliance with 1 requirement regarding collateral duties of the OSDBU director. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(15), collateral duties: Agency officials stated that the intent of delegating these duties to the OSDBU was to increase cost efficiencies and effectiveness because of the functional overlap (all small business-related) and that sharing resources to accomplish these complementary agendas made sense for the agency. The OSDBU director provides administrative support to the procurement manager for the Disadvantaged Business Enterprise Program. The Clean Air Act Amendments of 1990 require the OSDBU, through the program ombudsman, to monitor activities for the Asbestos and Small Business Ombudsman Program.
We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the General Services Administration demonstrated compliance with all 12 of the 12 section 15(k) requirements within our review (see table 24). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the National Aeronautics and Space Administration (NASA) demonstrated compliance with 12 of the 13 section 15(k) requirements within our review (see table 25). NASA did not demonstrate compliance with 1 requirement regarding responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: Officials at NASA stated that, on receiving a notification of an unduly restrictive solicitation, the OSDBU would respond by working directly with the appropriate contracting personnel at the relevant NASA buying center. The OSDBU would notify the contracting officer and ensure that the small business was aware of resources, but it would not inform the agency advocate for competition because the goal is to resolve issues at the lowest level. According to the officials, NASA has a decentralized structure consisting of 10 buying centers, and it is rare for an undue restriction issue to require the attention of the agency advocate for competition. Issues are generally resolved at the buying center offices. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Office of Personnel Management (OPM) demonstrated compliance with 9 of the 12 section 15(k) requirements within our review (see table 26). OPM did not demonstrate compliance with 3 requirements regarding compensation/seniority of the OSDBU director, assigning small business technical advisers, and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(8), assign small business technical advisers: An agency official described a staff member who works within the OSDBU and in general mirrors the director's work. The official told us the staff member spends about 50 percent of her time working with the procurement center representative. The official also said the OPM OSDBU is small, and the director handles many tasks himself or may refer them to other relevant individuals. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: The survey response indicated that the OSDBU had not received a notification of an undue restriction in the past 3 years. In a follow-up response, an agency official indicated that the director would take most of the steps specified in the notification requirement, but would only inform the agency's advocate for competition if the situation could not be resolved among the contracting office, the procurement center representative, and the OSDBU director.
According to the official, the agency has created a draft standard operating procedure that will address the issue of communicating to the agency advocate for competition. However, the official added that the review process for the draft is lengthy and had not been completed as of May 19, 2017. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the Social Security Administration (SSA) demonstrated compliance with 7 of the 13 section 15(k) requirements within our review (see table 27). SSA did not demonstrate compliance with 6 requirements regarding compensation/seniority of the OSDBU director, reporting to the head or deputy head of the agency, collateral duties of the director, providing assistance on payments, assigning small business technical advisers, and providing advice on the conversion of activities from performance by a small business to performance by a federal employee (in-sourcing). For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(3), reporting requirement (head of agency or deputy head): As of May 10, 2017, the SSA commissioner position remained vacant and the deputy director filled the role of acting commissioner; thus, the responsibilities for appraising the OSDBU director remained delegated to the chief of staff. 15(k)(15), collateral duties: The OSDBU director is also the agency coordinator for the Electronic Subcontracting Reporting System, which collects information on subcontracts for use by the OSDBU and acquisition personnel. Agency officials added that the director does not spend a significant amount of time in this role. 15(k)(6), provide assistance on payments: Agency officials said that the director would provide some limited assistance to small businesses seeking help with payment issues, usually by referring the business to the contracting office. The officials said that the OSDBU does not have formal procedures for providing payment assistance, and in cases in which a small business sought payment assistance from a prime contractor, the OSDBU generally would not get involved. 15(k)(8), assign small business technical advisers: SSA officials noted that the agency has only one acquisition office; therefore, the OSDBU director did not assign technical advisers and the requirement did not make sense in the agency context. However, SSA has a specialist for small and disadvantaged business utilization, who is managed by another executive. Officials said that the specialist is a well-qualified, technically trained, full-time employee and has a principal duty to assist the procurement center representative, but is not assigned by the OSDBU. 15(k)(11), advise on in-sourcing: According to SSA officials, the OSDBU is not generally involved in providing advice on a decision to convert an activity performed by a small business to an activity performed by a federal employee, although the budget office could contact the OSDBU for input on an informal basis. The budget office is responsible for preparing an analysis of proposed in-sourcing. We reviewed policy documents and survey and interview responses and determined that the Office of Small and Disadvantaged Business Utilization (OSDBU) at the U.S. Agency for International Development (USAID) demonstrated compliance with 10 of the 12 section 15(k) requirements within our review (see table 28).
USAID did not demonstrate compliance with 2 requirements regarding collateral duties of the OSDBU director and responding to a notification from a small business of an undue restriction on its ability to compete. For more information about our methodology, see appendix I. For more information about each statutory provision, see appendix II. 15(k)(15), collateral duties: Agency officials stated that bringing the responsibility under the OSDBU made sense from an agency perspective because the OSDBU assists disadvantaged businesses. 15(k)(17), respond to notification of an undue restriction on ability of small business to compete: Agency officials stated that on receiving such a notification, the director would contact the procurement officer involved with the solicitation and ensure that the small business was aware of available options to address the situation. The officials stated that the director would not inform the agency's advocate for competition. They explained that discussing the issues with personnel directly involved with the solicitation was more effective than informing the advocate for competition. The officials indicated that, in the future, the OSDBU may take steps to involve the agency advocate for competition in the process, as required in section 15(k)(17). In addition to the contact named above, Andy Pauline (Assistant Director), Janet Fong and Meredith Graves (Analysts in Charge), Benjamin Adrian, Pamela Davidson, Hannah Dodd, Ricki Gaber, Farrah Graham, Barbara Roesmann, Jessica Sandler, Jena Sinkfield, and Tyler Spunaugle made key contributions to this report.
Section 15(k) of the Small Business Act requires federal agencies with procurement powers to establish an OSDBU to advocate for small businesses. The National Defense Authorization Act for Fiscal Year 2013 established additional requirements for OSDBUs and required SBPAC to review OSDBU compliance with section 15(k) requirements. GAO was asked to review compliance with selected requirements of section 15(k). GAO examined (1) the extent to which selected federal agencies demonstrated compliance with 13 requirements for OSDBUs and (2) SBPAC review process results. GAO selected a sample of 10 agencies, based on contracting obligations, to review a reporting requirement for OSDBU directors. For the other 12 requirements, GAO surveyed OSDBU directors at 24 agencies, selected based on contracting obligations (100 percent response rate). To review and augment survey responses, GAO also analyzed guidance and documents and interviewed OSDBU directors. Demonstrated compliance with selected section 15(k) requirements for the Office of Small and Disadvantaged Business Utilization (OSDBU) varied across the 24 agencies GAO surveyed. Five agencies demonstrated compliance with all the requirements, four agencies demonstrated compliance with all but one requirement, and 15 agencies did not demonstrate compliance with two or more requirements. Examples of GAO findings include the following: Four OSDBU directors did not report directly to the agency head or deputy (the one requirement for which GAO reviewed only 10 agencies). Five agencies did not demonstrate compliance with a requirement for collateral duties of OSDBU directors. Six agencies did not demonstrate compliance with a requirement for compensation and seniority of OSDBU directors. Twenty-three agencies demonstrated compliance with four requirements on OSDBU director experience, supervisory duties of the OSDBU director, identifying and addressing significant bundling of contracts (consolidation of two or more procurement requirements into a solicitation for a single contract), and providing assistance on payments. Fifteen agencies demonstrated compliance with a requirement to respond to notifications that solicitations unduly restricted the ability of small businesses to compete for contracts. Noncompliance with section 15(k) requirements may limit the extent to which an OSDBU can advocate for small businesses. For example, OSDBU influence in agencies might be limited if directors reported to lower levels of management. Directors with other duties might be less able to carry out all section 15(k) duties. Results of the Small Business Procurement Advisory Council's (SBPAC) annual review of compliance with section 15(k) requirements differed from GAO's assessments. The Small Business Administration (SBA) chairs SBPAC, and its members are nearly all OSDBU directors. All agencies in the most recent review scored 94–98 percent. But where GAO's review considered the same section 15(k) requirements as the SBPAC review, GAO found some agencies had not demonstrated compliance with multiple requirements. Other than reviewing documentation agencies choose to provide, SBA's guidance for the review panel does not indicate any other means by which reviewers could obtain or clarify information. GAO's review included follow-up discussions with agency officials to obtain or clarify information. SBA has been developing a new review process, but preliminary information GAO reviewed indicates the process will be similar to the current one.
According to federal standards for internal control, management should use quality information to make informed decisions. Under the new process, the review results (which SBA uses in another process that determines an agency's overall grade for small business contracting) also will carry twice as much weight as under the current process—underscoring their importance. A new review process substantially similar to the old one (especially in relation to guidance) may not provide a reliable indicator of OSDBU compliance with section 15(k) requirements. GAO makes 20 recommendations, including that agencies not demonstrating compliance with section 15(k) requirements comply or report to Congress on why they have not, and that SBA provide more detailed guidance for the new SBPAC review process than exists for the current process. Agency responses to the recommendations varied. As discussed in the report, GAO maintains that implementation of its recommendations is warranted.
To encourage employers to sponsor retirement plans for their employees, the federal government provides preferential tax treatment under the Internal Revenue Code (IRC) for plans that meet certain requirements. In addition, the Employee Retirement Income Security Act of 1974 (ERISA), as amended, sets forth certain protections for participants in private-sector retirement plans, including fiduciary responsibilities that may apply to plan sponsors, which establish certain standards of conduct for those that manage employee benefit plans and their assets. Small employers may choose a plan for their employees from one of three categories: employer-sponsored IRA plans; defined contribution (DC) plans; and defined benefit (DB) plans (often referred to as traditional pension plans). Employer-sponsored IRA plans, which can be either Savings Incentive Match Plans for Employees (SIMPLE) or Simplified Employee Pension (SEP) plans, generally allow employers and, in SIMPLE IRA plans, employees, to make contributions to separate IRA accounts for each participating employee. Employers generally have fewer administration and reporting requirements compared to other types of plans. The second plan category—DC plans—which includes 401(k) plans, allows employers, employees, or both to contribute to individual employee accounts within the plan. DC plans tend to have higher contribution limits for employees than employer-sponsored IRA plans; however, they also have more reporting requirements and other rules; for example, they may be subject to requirements for nondiscrimination testing or top-heavy testing. The third category is DB plans, which promise to provide a specified retirement benefit to employees; the employer is generally responsible for funding the plan. Over the years, Congress has responded to concerns about lack of access to employer-sponsored retirement plans for employees of small employers with legislation to lower costs, simplify requirements, and ease administrative burden. For example, the Revenue Act of 1978 and the Small Business Job Protection Act of 1996 established the SEP IRA plan and the SIMPLE IRA plan, respectively, featuring fewer administration requirements than other plan types. The Economic Growth and Tax Relief Reconciliation Act of 2001 (EGTRRA) also included a number of provisions that affected small employers, which were made permanent by the Pension Protection Act of 2006 (PPA). The PPA also established additional provisions that support retirement plan participation by rank-and-file employees, such as automatic enrollment. Federal agencies also play a role in fostering retirement plan sponsorship by small employers. To help encourage sponsorship, federal agencies conduct education and outreach activities and provide information about retirement plans for small employers. Labor, IRS, and SBA—which maintains an extensive network of field offices—have collaborated with each other and with national and local organizations to develop information on retirement plans for small employers and conduct outreach with small employers. Various private-sector service providers—from individual accountants, investment advisers, recordkeepers, and actuaries to insurance companies and banks—assist sponsors with their retirement plans. Some sponsors hire a single provider that offers a range of plan services for one fee, sometimes referred to as a "bundled" services arrangement.
Other sponsors hire different providers for individual services under an "unbundled" arrangement, paying a separate fee for each service. Plan services include legal, accounting, trustee/custodial, recordkeeping, actuarial (for defined benefit plans), investment management, and investment education or advice. Service providers can also assist with plan administration functions, including any required testing and filing of government reports.

We found that the number of employees and average wages greatly influence the likelihood that a small employer will sponsor a retirement plan. Further, our regression analysis using Labor and IRS data found that small employers with larger numbers of employees were the most likely of all small employers to sponsor a plan, as were those paying average annual wages of $50,000-$99,999. Conversely, employers with the fewest employees and the lowest average annual wages were very unlikely to sponsor a plan. A separate analysis we conducted using Labor and IRS data found an overall small employer sponsorship rate of 14 percent in 2009. It is important to note, however, that this sponsorship rate does not include small employers that sponsor SEP IRA plans because IRS currently does not have a means to collect data on employers that sponsor this plan type. Further examination found that small employers with 26 to 100 employees had the highest sponsorship rate, 31 percent, while small employers with 1 to 4 employees had the lowest rate, 5 percent (see fig. 1). The sponsorship rate cited in this testimony is limited to single employers that sponsor a plan. Consequently, the rate does not include small employers that participated only in multiple employer plans or multiemployer plans, which are outside the scope of this study. We are currently conducting ongoing work on these plan types and their role in the private pension system.

According to our analysis of Labor and IRS data, 401(k) and SIMPLE IRA plans were overwhelmingly the most common types of plans sponsored by small employers. Out of slightly more than 712,000 small employers that sponsored a single type of plan, about 86 percent sponsored either a 401(k) or a SIMPLE IRA plan.

Small employers and other stakeholders we interviewed identified various plan options, administration requirements, fiduciary responsibilities, and top-heavy testing requirements as complex and burdensome, often citing these factors as barriers to sponsoring retirement plans or as reasons for terminating them.

Plan options and administration requirements: Small employers and other stakeholders said that plan options and administration requirements are frequently complex and burdensome and discourage some small employers from sponsoring a plan. For example, some small employers and retirement experts said that the broad range of plan types and features makes it difficult for small employers to compare and choose a plan that best meets their needs. Some stakeholders also described plan paperwork, such as reviewing complicated quarterly investment reports or complying with federal reporting requirements like those associated with required annual statements, as particularly burdensome for small employers.
Fiduciary responsibilities: A number of stakeholders indicated that understanding and carrying out a sponsor's fiduciary responsibilities with respect to managing or controlling plan assets presents significant challenges to some small employers. Some small employers found the selection of investment fund choices for their plans particularly challenging. Further, a number of stakeholders said some small employers may not have an adequate understanding of their fiduciary duties and are not always aware of all their responsibilities under the law.

Top-heavy requirements: Top-heavy requirements are more likely to affect smaller plans (those with fewer than 100 participants) than larger ones, according to IRS. A number of stakeholders said compliance with these requirements is often burdensome and poses a major barrier to small employer plan sponsorship. According to some experts, some employers with high employee turnover may face an even greater likelihood of becoming top-heavy as they replace departing employees while key employees, such as business owners or executives, continue to contribute to the plan. A number of stakeholders stated that compliance with top-heavy rules is confusing and can pose significant burdens on some small employers. For example, some retirement experts said that small employers whose plans are found to be top-heavy may encounter a number of additional costs in the effort to make their plans compliant. These plans can incur additional costs associated with hiring a plan professional to make corrections to plan documents and instituting a minimum top-heavy employer contribution for all participating rank-and-file employees. While sponsors can avoid top-heavy testing by adopting a safe harbor 401(k) plan that is not subject to top-heavy requirements, experts pointed out that the employer contributions required for such plans may offset the advantages of sponsoring such a plan.

Federal agencies provide guidance that can assist small employers in addressing some of the challenges they face in starting and maintaining retirement plans. Labor and IRS, often in collaboration with SBA, have produced publications, conducted workshops, and developed online resources, among other efforts, to assist small employers. However, a number of stakeholders, including the IRS Advisory Committee on Tax Exempt and Government Entities, indicated that many small employers are unaware of federal resources on retirement plans, which may, in part, be due to difficulties in finding useful, relevant information across a number of different federal websites. For example, IRS's Retirement Plans Navigator, a web-based tool designed to help small employers better understand retirement plan options, is located on a separate website from the rest of the agency's online plan resources for small employers. Furthermore, Labor and IRS each present retirement plan information separately on their respective websites. Neither agency maintains a central web portal for all information relevant to small employer plan sponsorship, though such portals exist for federal information resources in other areas such as healthcare.

Small employers that lack sufficient financial resources, time, and personnel, such as smaller or newer firms, may be unwilling or unable to sponsor plans.

Financial resources: Small employers, especially those with lower profit margins or an unstable cash flow, could be less willing or less able to sponsor a retirement plan.
One-time costs associated with starting a plan and the ongoing costs involved with maintaining the plan, as well as any requirement to match employee contributions or make mandatory contributions to an employee's account, were cited as barriers to plan sponsorship. Further, small employers we interviewed stated that general economic uncertainty makes them reluctant to commit to such long-term expenses and explained that they needed to reach a certain level of profitability before they would consider sponsoring a plan.

Time and personnel: Some small employers stated they may not have sufficient time to administer a plan themselves or lacked the personnel to take on those responsibilities. Further, small employers may not have time to develop the expertise needed to investigate and choose financial products, select the best investment options, or track their performance. For example, one small employer described how business owners without the financial expertise to compare and select from among different plan options would likely find the experience intimidating.

Some small employers we interviewed stated that they may be less likely to sponsor a retirement plan if they do not perceive sufficient benefits to the business or themselves. For example, several small employers stated that their firms sponsored plans in order to provide owners with a tax-deferred retirement savings vehicle, and one employer described how the firm annually assesses the plan to determine if it continues to benefit the owners. Additionally, a number of small employers stated that employees prioritized healthcare benefits over retirement benefits. Some small employers, such as those who described having younger or lower paid workforces, stated that their employees were less concerned about saving for retirement or were living paycheck to paycheck and did not have funds left over to contribute to a plan. As a result, neither group of employees was demanding retirement benefits.

A number of small employers indicated that they use plan service providers to address various aspects of plan administration, which enabled them to overcome some of the challenges of starting and maintaining a plan. For example, one employer noted that her business would not have the time or the expertise to administer its plan without the help of a service provider. While some service providers said they offer affordable plan options and some small employers said the fees service providers charge were affordable, others said those fees were too high. Further, some stakeholders pointed to other limitations of using service providers, such as the difficulties of choosing providers, setting up a new plan through a provider, and switching from one provider to another, as well as the significant responsibilities that may remain with the sponsor, such as managing plan enrollments and separations and carrying out their fiduciary duties, where applicable.

Stakeholders proposed several options to address some of the administrative and financial challenges that inhibit plan sponsorship. These options included simplifying plan administration rules, revising or eliminating top-heavy testing requirements, and increasing tax credits.

Simplify plan administration requirements: Some stakeholders suggested options that could simplify plan administration requirements. Options included reducing the frequency of statements sent to plan participants and allowing some required disclosures to be made available solely online.
IRS officials stated that the agency is also considering proposals to replace a requirement for some interim amendments, which stakeholders have identified as a burden for some small employers, with a requirement for notices to be sent directly to employees, which would reduce the number of times plan documents must be amended and submitted to IRS. (When statutes and regulations change, some sponsors may be required to modify plan documentation and submit it to IRS. Each year since 2004, IRS has published a cumulative list of changes in plan requirements that must be incorporated by plan sponsors. See, for example, IRS Notice 2011-97.)

Revise or eliminate top-heavy testing: A number of stakeholders proposed revising or eliminating top-heavy testing requirements to ease administrative and financial burdens. For example, representatives of the accounting profession told us that top-heavy testing is duplicative because other plan testing requirements help detect and prevent plan discrimination against rank-and-file employees. Representatives of a large service provider told us that lack of plan participation or high turnover among a business' rank-and-file employees frequently cause plans sponsored by small employers to become top-heavy.

Increase tax credits: Some stakeholders believed that tax credits, in general, are effective in encouraging plan sponsorship, but other stakeholders said that the current tax credit for starting a plan is insufficient. A national organization representing small employers cited tax credits as a top factor in an employer's decision to sponsor a plan, adding that an employer's willingness to start a plan depends, to some degree, on the extent to which the tax credit offsets plan-related costs. Similarly, some small employers stated that larger tax credits could ease the financial burden of starting a plan by offsetting plan-related costs. Additionally, one small employer said the incentive needs to be larger because sponsorship costs can amount to $2,000 or more per year.

Numerous stakeholders agreed that the federal government could increase education and outreach efforts to inform small employers about plan options and requirements; however, opinions varied on the appropriateness of the federal government's role in these efforts. Officials of a service provider to small employers stated that, because clients are generally not aware of the retirement plan options available to them, the federal government should offer more education and outreach to improve awareness of the types of plans that are available and the rules that apply to each. Several small employers also offered ideas. For example, a small employer said the federal government should focus education and outreach efforts on service providers instead of on small employers. Conversely, some small employers said the federal government should have a limited role or no role in providing education and outreach efforts.

Domestic pension reform proposals from public policy organizations, as well as practices in other countries, include features such as asset pooling that could reduce the administrative and financial burdens of small employers. For example, one domestic proposal calls for the creation of a federally managed and federally guaranteed national savings plan. Under this proposal, participation in the program would generally be mandatory for workers; both employers and employees would contribute to the plan; and plan funds would be pooled and professionally managed.
By pooling funds, plan administration would be simplified and administrative costs and asset management fees would be reduced. In addition, Automatic IRAs, which are individual IRAs instead of employer-sponsored plans, are another proposal that draws from several elements of the current retirement system: payroll-deposit saving, automatic enrollment, and IRAs. Such a proposal would provide employers that do not sponsor any retirement plans with a mechanism that allows their employees to save for retirement. However, as we reported in 2009, such proposals pose trade-offs. For example, although a proposal that mandates participation would increase plan sponsorship and coverage for workers, employers might offset the resulting sponsorship costs by reducing workers' wages and other benefits.

Retirement systems in other countries also use asset pooling and other features that help reduce administrative and financial burdens for small employers. For example, as we previously reported, the predominant pension systems in the Netherlands and Switzerland pool plan assets into pension funds for economies of scale and lower plan fees. The United Kingdom's National Employment Savings Trust (NEST) features low fees for participating employers and employees and default investment strategies for plan participants.

With a significant portion of the private-sector workforce not covered by a pension plan at any one time, retirement security remains a critical issue for our nation. Based on the limited data available, we found the rate of plan sponsorship among small employers, a segment of the economy that employs about one third of all private-sector workers, was only 14 percent in 2009. The high churn rate of small business formation and dissolution helps explain why small employer plan sponsorship is low, but it also means that many millions of workers in this sector are without access to an employer-sponsored retirement savings plan. Thus, while remaining sensitive to the financial challenges currently facing our nation, expanding coverage among small employers should be an important consideration of national strategies seeking to strengthen the pension component of retirement income security.

Our discussions with small employers and other stakeholders identified a variety of challenges small employers face in sponsoring retirement plans. One initial problem is the inability of small employers to easily obtain useful information on how to establish and maintain plans. Although Labor and IRS already provide small employers with considerable online information about retirement plans, that information is scattered across multiple federal websites and portals in a largely uncoordinated fashion, making it difficult for busy employers to navigate and locate what they need. However, even if federal information about retirement plans were more accessible to small employers, our interviews with small employers identified a number of other significant challenges to plan sponsorship, including plan administration requirements that are perceived to be unduly complicated and burdensome, not having sufficient financial and personnel resources to sponsor a plan, and insufficient incentives to create and maintain a plan. These challenges, while very real, are also complex and in many instances may not lend themselves to easy answers.
Because the expertise to address these issues is spread across multiple agencies and departments that may not always communicate or work together effectively, there is the potential that inertia and other competing priorities will push these issues onto the back burner. The report we are issuing today recommends the creation of a multiagency task force, to be overseen by the Department of Labor, that would explore and analyze these challenges in greater detail, including ways to make information more accessible, to streamline reporting and disclosure requirements in a thoughtful manner, and to identify the appropriateness and effectiveness of existing and proposed tax incentives and plan designs to boost sponsorship among small employers. Such a task force could help jump-start sustained action on what we consider to be an essential element of our nation's retirement security challenge and initiate a national dialogue on the critical issues of pension coverage.

Finally, federal agencies' ability to address the challenges to small employer plan sponsorship depends in part on the availability of relevant, timely, and complete data. During our work in estimating the extent of small employer plan sponsorship, we found that complete data on small employer plan sponsorship did not exist because IRS did not have the means to collect information on employers that sponsor SEP IRA plans. Although there are about 1.5 million SEP IRAs, many of these may be sponsored by larger businesses, and we simply do not know the distribution of these plans across all employers. Without a complete picture of small employer plan sponsorship rates, agencies may find it difficult to effectively target their research and outreach efforts. Thus, in our report we also recommend that the Secretary of the Treasury direct the Commissioner of the Internal Revenue Service to consider modifying existing tax forms, such as Forms W-2 or 5498, to gather complete and reliable information about these plan types.

Although the challenges that small employers face in sponsoring plans are significant, they can be addressed with appropriate federal action and cooperation, as well as assistance from the service provider community. While the Department of the Treasury, IRS, Labor, SBA, and the Department of Commerce generally agreed with our findings and conclusions, Labor disagreed with our recommendation to create a single web portal for federal guidance on retirement plans for small employers. Because federal resources are scattered across different sites, we believe consolidating plan information onto one web portal can benefit small employers. A complete discussion of our recommendations, Labor's comments, and our response is provided in our full report.

Chairman Kohl, Ranking Member Corker, and Members of the Committee, this concludes my prepared remarks. I am happy to answer any questions that you or other members of the committee may have. For further questions on this testimony, please contact me at (202) 512-7215. Individuals making key contributions to this testimony include Edward Bodine, Kun-Fang Lee, David Lehrer, and David Reed.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the challenges that small employers face when sponsoring retirement plans for their workers. About 42 million workers, or about one third of all private-sector employees, work for employers with less than 100 employees and recent federal data suggest many of these workers lack access to a work-based retirement plan to save for retirement. An estimated 51 to 71 percent of workers at employers with less than 100 workers do not have access to a work-based retirement plan, compared to an estimated 19 to 35 percent of those that work for employers with 100 or more workers. Small employers face a number of barriers to starting and maintaining a plan for their workers. Certain characteristics associated with small employers may contribute to the challenges of sponsoring a plan. For example, in 2008, we reported on challenges that can limit small employer sponsorship of Individual Retirement Arrangement (IRA) plans, including administrative costs, contribution requirements, and eligibility based on employee tenure and compensation, among others. Additionally, federal data suggest that about half of all new businesses (nearly all of which are small) do not survive for more than 5 years. This testimony is based on our report released today that examines (1) the characteristics associated with small employers that are more or less likely to sponsor a retirement plan, (2) challenges small employers face in establishing and maintaining a retirement plan, and (3) options that exist to address those challenges and increase small employer sponsorship. We found that the likelihood that a small employer will sponsor a retirement plan largely depends on the size of the employer’s workforce and the workers’ average wages. Small employers, retirement experts, and other stakeholders also identified a number of challenges— such as plan complexity and resource constraints—to starting and maintaining retirement plans. In addition, stakeholders offered options for addressing some challenges to plan sponsorship, which included simplifying federal requirements for plan administration and increasing the tax credit for plan startup costs. Although Labor, IRS, and the Small Business Administration (SBA) collaborate in conducting education and outreach on retirement plans, agencies disseminate information online through separate websites and in a largely uncoordinated fashion. In addition, IRS currently does not have the means to collect information on employers that sponsor a certain type of IRA plan. As a result of our findings, we are recommending efforts for greater collaboration among federal agencies to foster small employer plan sponsorship and more complete collection of IRA plan sponsorship data.
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to provide the results of our assessment of the securities industry's readiness to trade stocks using decimal prices. In 1997, your Subcommittee held a series of hearings on a proposed amendment to the Securities Exchange Act of 1934 that would have directed the Securities and Exchange Commission (SEC) to require, within 1 year of enactment, that securities trading be in dollars and cents instead of the fractional increments of a dollar, such as eighths and sixteenths, used today. Shortly after those hearings, various exchanges and markets indicated that they were committed to converting to decimal trading, and Congress took no further action on the legislation. Subsequently, after market participants indicated that the conversion to decimal trading should be postponed until after 2000, you asked us to determine whether anything could be done to accelerate that time frame.

With a few exceptions, the majority of the exchanges; market support organizations, such as those that transfer payments and securities after a trade; and securities firms of various sizes that we contacted had made limited progress toward converting to decimal trading. Most organizations were extensively involved in modifying their systems to be ready for the date change in 2000 and the impending implementation of the new European Monetary Union (EMU), both of which have dates that cannot be changed. Market participants expressed strong concerns that attempting to convert to decimal trading while these and other information technology initiatives were under way represented a great risk to the success of any of them and to the industry as a whole.

After consulting with SEC in January 1998, the securities markets established a working group to expand on previous industry discussions of decimal trading and to begin developing decimal conversion standards and establishing time frames for completing the conversion to decimals. The time frames this group has proposed envision that decimal trading would begin during the third quarter of 2000. One of the key elements needed for successful implementation was completed April 16, 1998, when the Securities Industry Association (SIA) agreed to act as the industry focal point for the implementation of decimal trading. The industry has also been working on the other elements, but additional work remains for SEC and the industry to reach consensus on the approach for the implementation, as well as on an implementation date, milestones, and technical standards and specifications. Further, because decimal trading may have significant effects on U.S. markets and market participants, assessing and addressing any such effects would be an important part of preparing for and planning its implementation. Potential issues include ensuring that adequate processing and communications capability exists in the industry to support decimal trading and evaluating any changes needed in market regulations to ensure that market operations remain fair and orderly.

In doing this work, we contacted officials from SEC and eight major exchanges and markets that trade stocks and options in the United States as well as four support organizations for these markets, such as those that transfer payments and securities after a trade. We also contacted representatives from 12 securities firms of various sizes as well as 2 organizations that provide information technology support services for hundreds of additional firms.
In addition, we discussed the readiness for decimal trading with officials of two major market information vendors. To obtain information from an exchange that had recently undergone a transition to decimal trading, we contacted the Toronto Stock Exchange (TSE). We also discussed decimal trading with various securities market experts, including academics who had conducted relevant studies. We did our work between February and April 1998, in accordance with generally accepted government auditing standards.

Most organizations indicated that the effort to convert their own systems to decimal trading and test them with others would require less time and fewer resources than many of the other initiatives already under way in the industry, such as the work being done for the Year 2000 date change. Of the eight stock and options exchanges we contacted, four had begun system modifications. The farthest ahead was the New York Stock Exchange (NYSE), which began its conversion 2 years ago to prepare for listing and trading the ordinary shares of foreign companies using decimal prices denominated in foreign currencies instead of the American Depositary Receipts currently traded. NYSE officials told us they plan to complete system modifications by September 1998, when internal testing is to begin. They said their conversion efforts do not involve the accounting and processing systems used by NYSE specialists. One official said that NYSE has not formally attempted to determine the readiness of specialists' systems for decimal trading, but he indicated that some firms have begun converting. The Nasdaq Stock Market, Inc. (Nasdaq), the American Stock Exchange, and the Chicago Stock Exchange have begun replacing older systems with newer technology that will be capable of decimal trading, and the new systems are scheduled for completion and internal testing by the third quarter of 1999. However, the systems of NASD Regulation, Inc. would not be ready until March 2000. The other four exchanges generally had done only internal assessments to determine which systems would require modifications.

Of the 12 securities firms we contacted, only 1 had modified and internally tested its systems for trading in decimals. Officials of this firm said they had replaced their older systems with newer ones that were capable of processing decimal prices. Many smaller securities firms rely on third-party firms to perform their data processing, but the two data processing firms we contacted, which process information for hundreds of medium and small securities firms, had not begun to modify their systems. Although most organizations we contacted had not begun converting their systems for decimal trading, many reported having at least conducted an informal inventory or assessment of their systems to determine which ones would be affected by such a conversion.

The readiness of the four market support organizations we contacted varied. Officials of the largest U.S. securities depository organization, the Depository Trust Company, which maintains records of securities holdings for securities firms, custodian banks, and their customers, said they had only one affected system and it was already decimal ready. The Securities Industry Automation Corporation, which operates the NYSE and American Stock Exchange trading systems, also operates the systems that make up the National Market System. These systems allow quotes and orders to be routed among the exchanges in New York and other exchanges or dealers across the country.
According to Corporation officials, those systems that route quotes are ready; those that route orders will be ready in the third quarter of 1999. Officials of the National Securities Clearing Corporation, which is the largest clearing organization for U.S. stocks, indicated that decimal trading modifications to the systems used for exchange-listed stocks were completed in March 1998, but the modifications for systems used for the stocks traded on Nasdaq are not expected to be complete until May 1998. Officials at the Options Clearing Corporation, which performs clearing functions for options trading, told us that they had not yet begun systems modifications.

The time and cost estimates for converting to decimal trading offered by exchanges, support organizations, and securities firms varied. The estimates generally ranged from 2 to 6 months and no higher than $10 million, but usually closer to $5 million or less. Developing cost estimates was difficult for many organizations, because they (1) did not know the specifications, (2) had not yet started converting, and (3) had to consider the impacts of other information technology initiatives already under way at their organizations. An industrywide study done for SIA found that the level of effort required for securities market participants to ready themselves for decimal trading was less than that required for other ongoing information technology efforts. For example, the study estimated that decimal trading conversion industrywide would require slightly over 300 person years, the equivalent of about $170 million, which is less than 5 percent of the estimated 8,800 person years and about $5 billion for Year 2000 work.

Despite the smaller effort needed to convert individual firm systems to decimals, securities market participants told us the industry is unlikely to be able to implement decimal trading before 2000. The primary reasons cited were inadequate time and resources, given the demands of the Year 2000 effort and other information technology initiatives already under way. Representatives of almost all of the exchanges, support organizations, and securities firms we contacted indicated that they would not have sufficient time and resources available to both modify and test systems necessary for decimal trading until Year 2000 efforts were completed. Because decimal trading will affect all market participants' systems, they said that systems changes would have to undergo comprehensive streetwide testing similar to that required for the industry's Year 2000 effort. Nasdaq officials noted that the Year 2000 tests will be complex and that developing the testing plan has required months of effort by large numbers of staff across various organizations. NYSE officials indicated that most organizations can conduct systems testing on only 2 weekends a month, because processing associated with options expiration and the month's end is done during the other weekends. Industry officials noted that this is especially true for medium or smaller securities firms, which lack dedicated testing systems and can test only at those times when necessary business processing is not being done. TSE officials told us that extensive testing was done as part of the exchange's conversion to decimals and was a major factor in its smooth transition. Market participants also said that implementing decimal trading at the same time as Year 2000 changes would be contrary to sound systems management practice and would make identifying and correcting any resulting processing errors very difficult.
Officials at one exchange also indicated that making modifications for both 2000 and decimals would make it difficult for them to certify that their systems were Year 2000 compliant. Further, our work reviewing the Year 2000 efforts of numerous federal agencies and other entities has generally found that organizations are avoiding the simultaneous implementation and testing of multiple major systems changes to mitigate the risk of malfunctions. Officials from most of the organizations advised us that obtaining the necessary internal and external resources for conducting information technology projects is extremely difficult, largely because such resources are already working on either Year 2000 efforts or the other industrywide initiatives. For example, representatives at four organizations said that they intended to use the same staff to convert to decimals that they are using for Year 2000 work now. Officials from one large securities firm said that the work entailed in converting systems for decimal trading requires an understanding of internal systems and the information flows among them and cannot be done by less experienced staff or external resources. Staff capable of performing this work are already engaged at their firm doing Year 2000 and EMU modifications. Officials at a smaller securities firm noted that unlike decimal conversion, many of these other initiatives stem from regulatory mandates or have externally fixed implementation dates, such as 2000 and the EMU target in 1999.

The readiness of the securities industry to convert to decimal trading has been hampered by the lack of certain key elements necessary for successful implementation. One of these elements was completed on April 16, 1998, when NYSE informed SEC that SIA had agreed to act as the industry focal point for the implementation of decimal trading. The industry had also begun work on the other elements; however, SEC and the industry have not reached consensus on the technical standards and specifications to be used by individual organizations in converting their systems or on implementation plans and time frames.

Market participants generally agreed that a single organization was needed to focus the industry's efforts. SIA officials told us that they would be willing to perform this role for the industry using the same committee structure and organization that they have used for the industry's Year 2000 preparations and, on April 16, 1998, agreed to do so.

Developing an industrywide consensus on standards and specifications needed for decimal trading involves determining how many decimal places each organization's systems should be prepared to recognize and how rounding of prices would be done. Officials at six securities firms and one information processing firm told us that they would not begin converting their systems until standards and specifications for decimal trading had been established. Obtaining consensus from a broad range of organizations affected by these standards will also be important. For example, officials from one securities firm told us that after a set of standards is proposed, a working group of systems experts should provide input before such standards are finalized. They said that although the specifications for decimal trading will not be that complex, ensuring that they are workable for all organizations will require review by technology officials throughout the industry.
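To illustrate the kind of specification at issue, the following minimal sketch (our own illustration, not an industry standard) shows one way a system could carry prices at four decimal places internally, consistent with the working group suggestion described below, while displaying and rounding to the two places used for trading:

# Sketch of a price representation satisfying a "display 2 places, carry 4
# places" specification. Storing prices as integer ten-thousandths of a
# dollar avoids binary floating-point rounding errors; the half-up rounding
# rule is one possible convention, not an adopted industry standard.

PRICE_SCALE = 10_000  # internal units: ten-thousandths of a dollar


def to_internal(price_str: str) -> int:
    """Parse a decimal price string into integer ten-thousandths."""
    dollars, _, frac = price_str.partition(".")
    frac = (frac + "0000")[:4]  # right-pad, then keep 4 decimal places
    return int(dollars) * PRICE_SCALE + int(frac)


def display(units: int) -> str:
    """Render an internal price at the 2-decimal trading increment."""
    cents, remainder = divmod(units, 100)  # 100 internal units per cent
    if remainder >= 50:                    # round half up to the next cent
        cents += 1
    return f"{cents // 100}.{cents % 100:02d}"


assert to_internal("25.1250") == 251_250  # an old 1/8th price point
assert display(251_250) == "25.13"        # rounded to the penny for trading

Scaled integers sidestep the rounding surprises that a naive floating-point representation of penny prices would introduce, which is one reason the number of decimal places and the rounding convention require industrywide agreement before firms begin converting.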
Developing an overall plan for decimal trading involves reaching consensus on how the transition will occur and what the implementation date will be. Officials of at least six organizations emphasized that determining the approach for implementing decimal trading was an important step. Some suggested that the approach might entail a phasing in of selected stocks; others suggested that all stocks and markets could convert at once. In addition, officials at many of the exchanges and securities firms we contacted emphasized the importance of establishing a target date for implementation. Six organizations indicated that they had delayed the start of any efforts to convert their systems for decimal trading, because such a date had not been set. Developing individual organization plans includes designating a project manager and identifying technical and management points of contact in core business areas, as suggested in our Year 2000 assessment guide.

SEC and the industry have begun to meet to discuss these issues. In January 1998, SEC requested that NYSE convene a small working group to propose a plan for the industry's conversion to decimal trading. On February 27, 1998, representatives from various exchanges began holding meetings to discuss the timing and standards for implementing decimal trading in U.S. markets. This group has developed preliminary standards for converting systems for decimal trading, including specifying prices with 2 decimal places. The group also suggested that system changes to accommodate decimals should be able to handle at least 4 places as a precautionary measure for future contingencies. In addition, the group has offered a potential timetable for implementing decimal trading that would call for initial testing among individual participants during January to May 2000, industrywide testing through July 2000, and implementation beginning September 2000. Further, the group has discussed a plan to revert to fractions from decimals if serious processing problems arise when decimal trading begins. The group presented the proposed plan to SEC for approval in April 1998. SEC officials told us that they have asked the group to adjust the time frames in an effort to have implementation of decimal trading begin by June 2000. However, officials at two firms we talked to said that even being ready by the third quarter of 2000 might not be possible. Although the members of this group are important to the implementation of decimal trading in the securities industry and have agreed on specifications and a testing timetable, additional work remains to achieve industrywide consensus on these elements.

Assessing and planning for the effects of decimal trading on investors, exchanges, securities firms, and the markets themselves could help ensure that implementation is successful. Converting to decimal trading is generally expected to result in lower spreads for stocks, although many market participants were skeptical that the savings for investors anticipated by some advocates would actually be achieved. Market participants also expressed concerns about the effect of decimal trading on certain aspects of market operations, such as processing capacity and market rules. Predicting the specific savings that may result from converting to decimal trading is difficult, because the conversion may also affect market variables, such as the number of shares offered to buy or sell, commissions, trading patterns, and individual investor behavior. However, to the extent that decimal trading reduces spreads, public investors potentially could save money on their trades.

Advocates of decimal trading have estimated that savings for investors could be considerable if the conversion results in lower minimum price change increments (tick size) and subsequently lower spreads. Estimates of the annual savings possible from a conversion to decimal trading in U.S. markets range from $300 million to $5 billion. One simply derived estimate used the 250 billion shares traded in 1996, adjusted for 100 billion shares traded that did not involve dealers, and estimated that U.S. investors would benefit by $1.5 billion for every 2 pennies that spreads decline. Another estimate was derived from the experience of TSE in its decimal conversion. According to information provided by TSE, spreads on the largest stocks declined 37 percent after the conversion to decimals. One academic researcher estimated the savings for investors from this change were about $216 million (Canadian dollars) each year. Projecting these results to U.S. markets, he estimated that decimals could save investors $2.25 billion each year on the New York and American Stock Exchanges. However, spreads are influenced by many factors, such as the liquidity of the stock or the investor demand for it. For some stocks, SEC officials anticipated that spreads may actually be wider than before, because the natural spread for the stock may be between two fixed minimum price increments. Furthermore, a securities firm official said that stock trades do not always occur at the minimum possible spread because of normal fluctuations in supply and demand.
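The first of these estimates can be reproduced with simple arithmetic, as the following sketch shows. The share volumes are the 1996 figures cited above; the assumption that investors capture half of any spread reduction on each share traded is ours, for illustration:

# Back-of-the-envelope reconstruction of the simply derived savings estimate
# cited above. Assumption (ours): investors capture half of any spread
# reduction on each share traded with a dealer.

total_shares = 250e9        # shares traded in U.S. markets in 1996
non_dealer_shares = 100e9   # shares traded without dealer involvement
spread_decline = 0.02       # a 2-penny decline in spreads

dealer_shares = total_shares - non_dealer_shares   # 150 billion shares
savings = dealer_shares * spread_decline / 2       # half-spread per share

print(f"Estimated annual investor savings: ${savings / 1e9:.1f} billion")
# Estimated annual investor savings: $1.5 billion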
The systems capacities of various market participants may be strained if decimal trading causes similar increases in processing and communication volumes, as the change to 1/16ths did. (See app. II.) Market participants told us that converting to decimal trading is likely to increase processing and communication volume, because such increases resulted when the tick size was reduced to 1/16ths in June 1997. Every exchange and large securities firm we contacted indicated that their systems experienced increased processing volumes following the reduction in the minimum trading increment from 1/8th to 1/16th. For instance, according to its officials, NYSE experienced as much as a 40-percent increase in message traffic following the conversion to 1/16ths. Officials attributed these increases to the doubling of the number of fractional increments of a dollar at which trades could be executed from 8 to 16, which produced more quotes and more trades of smaller size. They anticipate similar increases from a conversion to decimals, which could result in 100 such increments if trading is done in pennies.

Over the course of the last year, almost all of the various exchanges and market participants we contacted had experienced information processing problems, which most attributed to these increased processing and communication volumes. Although some organizations told us that they experienced problems right after the tick size reduction, the problems became most severe during October 1997, which saw record trading volumes on U.S. markets. That month, both NYSE and Nasdaq traded over 1 billion shares in 1 day. The system that provides quotations for the National Market System experienced queuing delays during this period. Officials responsible for the operation of this system reported that a new, higher capacity network became fully operational on January 2, 1998, and that, with two exceptions, they expected all data recipients to have migrated to the new network by the end of May 1998. The Nasdaq market also experienced problems in October, when one of the systems used to provide confirmation of trades went down for several hours.
Nasdaq officials reported that they made changes to correct these problems the same day, and no operational effects on trading resulted. Many of the securities firms we contacted also experienced processing-related problems in 1997. One representative said that his firm spent $10 million making its clients whole as a result of processing problems it had with its internal systems and those of the various markets during those high-volume trading days. As a result of these problems, market participants indicated that the industry will have to address capacity issues for trading both equities and options if the implementation of decimal trading is to be successful. Options exchange officials said that capacity concerns may be even more important for options trading because of the large quote traffic that options generate for exchanges, market participants, and vendors. Many of the organizations we contacted had taken, or planned to take, steps to increase their systems capacities. Because of the problems experienced after the reduction in tick size to 1/16ths, one exchange official indicated that converting to decimal pricing before an industrywide capacity study was made would be unwise. Two of the organizations that are responsible for managing major national market systems have recently commissioned an outside consultant to develop a comprehensive capacity planning process for those systems. SIA officials advised us that a similar study is expected to be commissioned for assessing the impact of a conversion to decimal trading later in 1998.

Converting to decimal trading could also affect the functioning of various market rules. Some of the first rules affected would be those that establish the minimum allowable tick size in the various markets. Exchange and Nasdaq rules that denominate the minimum tick on their markets in fractions (usually 1/16) would at least have to be converted to decimals. Also, the appropriate tick size may be less than the fractional equivalent of the existing minimum tick (0.0625), or the rules could specify no minimum and allow tick size to be set by competition.

Among the rules most affected by the smaller tick sizes that decimal trading could provide are those that stipulate order priority. Market participants expressed concerns that ticks approaching pennies could increase the prevalence of "order-jumping." Order-jumping occurs when a trader submits an order that improves the price by a small amount and thus obtains priority over any limit orders waiting to be executed. The investors whose limit orders then go unexecuted either do not trade or must resubmit their orders at less advantageous prices. As a result, the use of limit orders may be discouraged over time, which may reduce market liquidity and make markets less transparent. SEC and exchange officials told us that this issue will have to be assessed, and revised rules may be needed to mitigate its impact. For example, one way to protect investors that submit limit orders is for the exchanges and Nasdaq to establish rules that require professional traders wishing to trade ahead of their customers' orders to submit such orders at a higher increment than the minimum increment used for trading.

Other rules that market participants indicated could be affected by decimal trading are those requiring that trades by all exchanges or dealers be executed at the best prices prevailing across markets.
For example, officials at one exchange told us that if spreads are as low as a penny, requirements that trigger automatic executions at the best prices will have to be changed to prevent manipulation. This could occur when a trader posts a quote for, or trades, a small volume of stock in one market to influence the prices in another where he intends to simultaneously trade a larger volume of stock. Other participants noted that with penny ticks, conducting trades that affect the functioning of the short sale rule would be easier. Currently, trades conducted for the purpose of selling a stock short are allowed to be executed only if the last trade occurred at a higher price than the one prior (an uptick). With smaller ticks, manipulating the market to ensure that such a higher priced trade occurs before selling short would be less costly and easier to accomplish.

Some market participants suggested that decimal trading in a limited number of stocks initially be mandated and assessed before additional stocks are included and further tick size reductions are permitted. SEC officials told us that assessing market effects is always difficult, even during a phase-in period. They said that they have not endorsed a phased implementation approach, although such an approach may help ensure that any systems-related or technical issues are corrected before decimal trading for all stocks occurs. They added that if the industry requests phased implementation, the phase-in period should be short and specifically set, and not used to unnecessarily extend the process.

The continued health and smooth functioning of U.S. securities markets is vital to the nation's economy and depends on the industry making Year 2000 changes successfully. Attempting a conversion to decimal trading before Year 2000 changes are tested and implemented increases the risks that securities industry systems would fail and adversely affect markets and investors. Achieving the potential benefits of decimal trading for investors before 2000 does not appear worth the risk. SEC and the securities industry have been working on several elements that are necessary to help ensure the successful implementation of decimal trading as soon as possible after January 1, 2000. However, additional work remains to obtain industrywide consensus on the plan and targeted implementation date; the standards and specifications; and the schedule for internal, point-to-point, and industrywide testing. Obtaining final agreement on these elements requires detailed planning for all the entities in different industry segments, including the stock and options markets, supporting organizations, securities firms, and processing and market data dissemination firms.

Assessing and preparing for the potential effects of decimal trading on ongoing market operations would increase the likelihood that the conversion will be successful. Such effects might include increased strain on industry processing and communication capacity or reduced price ticks and spreads that may require modifications or additions to market rules. Preparing for these effects might involve phasing in certain numbers of stocks at specified minimum ticks, as some market participants suggest, or closely monitoring the effects of trading to be ready to quickly make necessary changes to maintain fair and orderly markets.

To help ensure a successful implementation of decimal trading in U.S.
equities markets as soon as possible after January 1, 2000, we recommend that the Chairman, SEC, take the following actions:

The Chairman should ensure that market participants develop a comprehensive plan for implementing decimal trading. Such a plan should establish interim milestones, including those associated with streetwide testing; set an implementation target date; and delineate technical standards and specifications that receive broad industry support. Although the Securities Industry Association has agreed to oversee and manage the project as it has done for Year 2000, SEC should monitor the plan's implementation.

The Chairman should also ensure that an assessment is conducted of the potential impact of decimal trading on (1) the industry's processing and communication capacity and (2) the functioning of market regulations and exchange rules so that any necessary changes can be made and a smooth transition to decimal trading can occur.

Many organizations had not begun converting to decimals, because standards and specifications and an implementation date had not been established. Most considered Year 2000 and EMU changes their top information technology priorities.

[Table: reported status of each contacted organization's effort to convert to decimal trading, with statuses ranging from "decimal ready now" and "started conversion 2 years ago" through "conducted inventory, budgeted, and have plan" and "waiting for specifications" to "have not begun process."]

The implementation of decimal pricing within the securities industry may lead to increased processing and communication volumes, because such increases resulted from the reduction in tick size to 1/16ths. All of the exchanges we contacted, along with representatives of organizations that provide information processing for the national market system, indicated that their systems experienced increased processing volumes and operated at greater capacity levels following the reduction in the minimum trading increment from 1/8th to 1/16th.
They also indicated that they experienced information processing and communications-related problems during the record trading volumes that were reached in October 1997. According to NYSE officials, that exchange experienced as much as a 40-percent increase in message traffic following the conversion to 1/16ths. They attributed these increases to the doubling of the number of potential price points from 8 to 16, which likely encouraged traders to submit larger numbers of smaller orders in an attempt to achieve the most favorable pricing. Additional message traffic was created by traders who increased the practice of cancelling and resubmitting orders as a way of attempting to ascertain the direction of price movements. Analysis of data provided to us by NYSE shows that the number of one type of message, price quotes, almost doubled, rising 92 percent from 1996 to 1997, while 1997 trading volume on the exchange increased just 27 percent from the prior year. On Nasdaq, which also experienced increases because of new SEC rules that introduced more quotes to the Nasdaq system, quotation volumes rose 84 percent as they increased from about 6.6 million messages in January 1997 to over 12.1 million in July 1997. The message volume increased further from there, peaking at over 20 million quotes in October but remaining as high as 16 million in December 1997. In contrast, although Nasdaq's trading volumes increased 19 percent in total from 1996, Nasdaq began and ended the year trading 14 billion shares a month, with volume peaking in October at 18 billion.

During the record volumes in October 1997, Nasdaq also experienced problems with one of its trading systems. On October 28, 1997, the day when Nasdaq achieved a record trading volume of about 1.4 billion shares, the system used to provide confirmation of trades was operational but unavailable for user inquiry for several hours. This problem did not stem from a lack of processing capacity but instead was due to a programming restriction that had limited the number of individual buys and sells to a number under 1 million a day. However, this number of transactions was exceeded at about 3 p.m. that afternoon, after which the system continued to process trades but was no longer available to traders for confirmation that their trades had been executed. Nasdaq has since made changes to its confirmation system to address these problems, including raising the programming restriction to 10 million a day. Nasdaq officials also said they have begun expending about $600 million to upgrade their communications network because of the increased processing demands resulting from the conversion to 1/16ths and the increasing trading volumes being experienced in the market.

The system that provides quotations for the National Market System also experienced significant information processing and communications problems following the reduction in tick size from 1/8ths to 1/16ths. For example, the communications network used to route price quotations in listed securities among markets and other data vendors experienced queuing problems on 29 occasions between June 1997 and the end of the year, including estimated delays of up to 1 minute at various times during the record trading days in October 1997. Officials responsible for the operation of this system reported that a new, higher capacity network became fully operational on January 2, 1998, and that, with two exceptions, they expected all data recipients to have migrated to the new network by the end of May 1998. These queuing problems were primarily attributed to significantly increased quotation traffic resulting from a combination of normal growth and two external factors: the new SEC rules in January 1997 and the reduction in the minimum price change increment from 1/8ths to 1/16ths in June 1997. Following the reduction in the price increment, the average daily message traffic of the quotation system increased by about 300,000 messages. The introduction of 1/16ths caused an overall reduction in the processing capacity of the communication network supporting the quotation system, because the use of 1/16ths required a long message format (94 bytes), whereas the use of 1/8ths allowed the use of a short message format (40 bytes).
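The capacity effect of the longer message format can be approximated with simple arithmetic. The byte sizes in the following sketch are those reported above; treating the network as a fixed byte-per-second budget is our simplifying assumption:

# Rough throughput arithmetic for the quotation network's message formats.
# For any fixed bandwidth, the achievable message rate scales as
# 1 / message size.

short_msg_bytes = 40   # message format sufficient under 1/8th pricing
long_msg_bytes = 94    # longer format required to carry 1/16th prices

capacity_ratio = short_msg_bytes / long_msg_bytes
print(f"The 1/16ths format carries about {capacity_ratio:.0%} "
      "of the prior message rate")
# about 43 percent, i.e., a reduction of roughly 57 percent in quote
# throughput per link, before any growth in the number of quotes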
Market participants and others indicated that the implementation of decimal trading in U.S. markets could have a wide range of effects on the markets themselves and on the participants in them. The following presents some of the projected effects described to us by regulators, exchange and securities firm officials, and market experts. We also discuss findings from various analyses and studies of U.S. and foreign markets that addressed issues relevant to decimal trading.

One effect that market participants expected from the implementation of decimal trading in U.S. markets was increased use of a trading strategy known as "order-jumping." This occurs when a trader submits an order that improves the price by a small amount and thus obtains priority over any limit orders waiting to be executed. Officials at various exchanges and securities firms expressed concerns that if minimum ticks or spreads decline to pennies, the use of this strategy will become more commonplace, because the risk of loss associated with it would generally be limited to the level of the minimum tick (a bound illustrated in the sketch at the end of this section). This is because order-jumping traders can usually reverse their positions quickly by submitting an order to be executed against the very order they jumped in front of, thus incurring a loss of only the size of the tick, the increment by which they originally achieved priority. The ultimate effect of increased use of this strategy is not clear. Although the investor whose order interacts with orders that only slightly improve the prevailing price is better off, any investors whose limit orders then go unexecuted or have to be cancelled and resubmitted at less advantageous prices are worse off. As a result, some of the benefits of decimal trading may be offset by losses to investors whose limit orders lost priority, and this may discourage the use of limit orders over time.

Market participants also indicated that a conversion to decimal trading could affect the overall quality of U.S. markets. Numerous exchange and securities firm officials indicated that decimal trading was likely to reduce the amount of liquidity in the markets, although various analyses were inconclusive as to whether markets that reduced their tick sizes experienced overall declines in market liquidity. The studies we reviewed of U.S. and foreign markets that reduced their tick sizes generally confirmed that fewer shares were available at the best prices after such reductions than before. However, whether overall liquidity was reduced is unclear.
According to statistics provided to us by TSE, after it reduced its minimum tick size from 1/8th to $0.05, the number of shares quoted for purchase or sale at the best prices declined over 60 percent for the top 35 stocks and by at least 33 percent for the top 300 stocks. However, TSE officials reported that approximately the same volume of shares is offered as before, but the volume is just spread over more price levels. Another effect that market participants discussed was that decimal trading could reduce the number of securities firms willing to make markets in stocks if it reduces profitability. An official from a medium-sized securities firm told us that his firm has already reduced the number of Nasdaq stocks that it makes markets in as a result of lower profitability. He indicated that investors could be negatively affected as further narrowing spreads will lower the rewards but not the risks to dealers. In his opinion, tick sizes of a penny would dramatically affect the economics of market making, and this could make it more difficult for smaller emerging companies to access the capital markets if the returns to securities firms for making markets in such stocks do not match the risks. At least five of the securities firms we contacted had recently reduced the number of listings in which they made markets, including one large securities firm whose officials told us that they had reduced the number of stocks for which they made markets from about 850 to 550. NASD officials, although not providing exact statistics on the number of market makers, indicated that some firms had reduced the number of stocks for which they made markets, but other firms had increased their market-making activities, and thus no large net impact had resulted. However, a securities firm official noted that when a smaller firm begins making markets in the stocks dropped by a large firm, the costs to investors are not likely to be as low. The impacts of a conversion to decimal trading on overall securities firm profitability were not clear and may vary across the activities of the firms. As noted above, the profits of firms that make markets in Nasdaq stocks would likely be further reduced by any additional narrowing of spreads brought about by decimal trading. Officials at two medium-sized securities firms indicated that their firms’ market-making activities are no longer operated for the purposes of producing profits from such trading. Instead, the activities are maintained as part of providing services to customers that also use these securities firms for corporate finance and other purposes. Decimal trading’s potential impact on profits, however, may not be negative for all dealers. For example, one study of TSE’s conversion to decimals and reduction in tick size found no measurable change in gross trading revenues for member securities firms. According to officials in NASD’s Economic Research Department, this suggests that decimal trading on TSE led to no net benefit to public investors because those investors that submit limit orders have lost at the expense of those that submit orders to buy at the prevailing market price. Furthermore, officials at three organizations told us that securities firms acting as specialists on the floors of NYSE, the American Stock Exchange, and other exchanges may actually experience increased profits if a move to decimal trading brings smaller tick sizes. 
This is because these firms would be able to participate in more trades without violating rules requiring that customer orders receive priority over the specialist's own trading. According to information reported by NYSE, specialist firm profits were a record $268 million in 1997, which was an increase of 33 percent from the prior year.

Various market participants also projected that decimal trading would increase market volatility. However, the impact of any further tick size reductions arising from decimal trading on overall market volatility is not clear, because a smaller tick size is likely to lead to more frequent, but smaller, price changes. The Investment Technology Group's study of the U.S. market's move to 1/16ths found that volatility as measured by price changes from trade to trade had actually declined by almost 20 percent. We did not identify other studies that attempted to show whether the overall level of market volatility has changed over the last year.

Market participants also indicated that the implementation of decimal trading could affect the functioning of various market rules. Currently, trades conducted for the purpose of selling a stock short are allowed to be executed only if the last trade occurred at a higher price than the one prior (an uptick). With smaller ticks, manipulating the market to ensure that such a higher priced trade occurs before selling short would be less costly and easier to accomplish. Various rules also currently exist that require trades to be executed at the best prices prevailing across markets. For example, officials at one exchange told us that if spreads are as low as a penny, requirements that trigger automatic executions at the best prices will have to be changed to prevent manipulation. This could occur when traders post quotes for, or trade, a small volume of stock in one market to influence the prices in another where they intend to simultaneously trade a larger volume of stock.
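Both the order-jumping bound discussed earlier and the cheapening of uptick manipulation reduce to the same arithmetic: the cost of claiming price priority is roughly one minimum tick per share. A minimal sketch (the 1,000-share order size is a hypothetical figure) makes the scaling concrete.

```python
# Worst-case cost of stepping ahead of a standing limit order: roughly
# one minimum tick per share, because the order-jumper can usually
# unwind against the very order it stepped in front of. The
# 1,000-share order size is a hypothetical figure for illustration.

from fractions import Fraction

SHARES = 1_000  # hypothetical order size

ticks = {
    "1/8th dollar": Fraction(1, 8),
    "1/16th dollar": Fraction(1, 16),
    "one cent": Fraction(1, 100),
}

for name, tick in ticks.items():
    max_loss = float(tick) * SHARES
    print(f"{name:>13}: at most ${max_loss:,.2f} at risk on {SHARES:,} shares")

# 1/8th -> $125.00, 1/16th -> $62.50, one cent -> $10.00: the smaller
# the tick, the cheaper it becomes to claim priority (or to print an
# uptick), which is why both practices may grow under penny increments.
```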
GAO discussed the results of its assessment of the securities industry's readiness to trade stock using decimal prices. GAO noted that: (1) in 1997, the House Subcommittee on Finance and Hazardous Materials held hearings on proposed legislation that would have directed the Securities and Exchange Commission (SEC) to require that securities be traded using dollars and cents instead of the traditional fractions within 1 year of enactment of the legislation; (2) after industry representatives indicated that they were committed to converting to decimals, Congress took no further action on the legislation; (3) industry progress since the hearings has generally been limited; (4) officials of most of the organizations GAO contacted estimated that the cost to convert their systems for decimal trading would be much less than the cost of other information technology efforts, such as the year 2000 conversion; (5) they also estimated that it would take less than 6 months to convert to decimals, but they did not expect to complete the conversion until after year 2000 changes have been tested and implemented; (6) an industry study showed that the securities industry was dedicating most of its available information technology resources and time to readying its systems for the impending date change in 2000, the introduction of a single currency in Europe in January 1999, and other information technology initiatives; (7) in particular, industry officials said the time required to test and resolve any year 2000 problems leaves little time for conducting the industrywide testing necessary for a conversion to decimal trading; (8) GAO's work reviewing the year 2000 efforts of numerous federal agencies and other entities has generally found that organizations are avoiding the simultaneous implementation and testing of multiple major systems changes to mitigate the risk of inadvertent malfunctions; (9) ensuring that securities industry systems are ready for the year 2000 is too important to the continued functioning of the industry to risk failure by attempting to implement decimal trading before the year 2000 effort is completed; and (10) however, GAO is recommending several actions that are needed to ensure that decimal trading is implemented as soon as possible after January 1, 2000.
While federal agencies are beginning to recognize the need to adapt to climate change, there is a general lack of strategic coordination across agencies, and most efforts to adapt to potential climate change impacts are preliminary. However, some states and localities have begun to make progress on adaptation independently and through partnerships with other entities, such as academic institutions. The subjects of our site visits in the United States—New York City; King County, Washington; and Maryland—have all taken steps to plan for climate change and have begun to implement adaptive measures in sectors such as natural resource management and infrastructure. Their on-the-ground experiences can help inform the federal approach to adaptation, which is now primarily focused on assessing projected climate impacts and exploring adaptation options. In addition, certain nations have taken action to adapt to climate change. Our detailed examination of the United Kingdom provides an example of a country where central and local government entities are working together to address climate change impacts.

Although there is no coordinated national approach to adaptation, several federal agencies report that they have begun to take action with current and planned adaptation activities. These activities are largely ad hoc and fall into several categories, including (1) information for decision making, (2) federal land and natural resource management, (3) infrastructure design and operation, (4) public health research, (5) national security preparation, (6) international assistance to developing countries, and (7) governmentwide adaptation strategies. We provide information on selected federal efforts to adapt to climate change, submitted to us by federal agencies, in a supplement to this report (see GAO-10-114SP).

Information for decision making: A range of preliminary adaptation-related activities are reported to be under way at different agencies, including efforts to provide relevant climate information to help decision makers plan for future climate impacts. For example, two programs managed by the National Oceanic and Atmospheric Administration (NOAA) help policymakers and managers obtain the information they need to adapt to a changing climate. NOAA's Regional Integrated Sciences and Assessments (RISA) program supports climate change research to meet the needs of decision makers and policy planners at the national, regional, and local levels. Similarly, NOAA's Sectoral Applications Research Program is designed to help decision makers in different sectors, such as coastal resource managers, use climate information to respond to and plan for climate variability and change, among other goals. Other agencies—including the National Science Foundation, the Department of the Interior (Interior), the Environmental Protection Agency (EPA), the National Aeronautics and Space Administration (NASA), and the Department of Energy—also manage programs to provide climate information to decision makers. For example, the National Science Foundation supports the scientific research needed to help authorities and the public plan adaptation activities and address any challenges that arise.
Similarly, Interior’s newly formed Energy and Climate Change Task Force is working to ensure that climate change impact data collection and analysis are better integrated and disseminated, that data gaps are identified and filled, and that the translation of science into adaptive management techniques is geared to the needs of land, water, and wildlife managers as they develop adaptation strategies in response to climate change-induced impacts on landscapes. Another example of information sharing is EPA’s Climate Ready Estuaries program, which provides a toolkit to coastal communities and participants in its National Estuary Program on how to monitor climate change and where to find data. In addition, NASA’s Applied Sciences Program is working in 31 states and with a number of federal agencies to help officials use NASA’s climate data to make adaptation decisions. For example, NASA forecasts stream temperatures for NOAA managers responsible for managing chinook salmon populations in the Sacramento River and predicts water flow regimes and subsequent fire risk in Yosemite National Park. DOE’s Integrated Assessment Research Program supports research on models and tools for integrated analysis of both the drivers and consequences of climate change. DOE’s supercomputing resources provide the capability to assess impacts and vulnerabilities to temperature change, anticipate extreme events, and predict risk from climate change effects (e.g., water availability) on a regional and local basis to better inform decision makers. Federal land and natural resource management: Several federal agencies have reported beginning to consider measures that would strengthen the resilience of natural resources in the face of climate change. For example, on September 14, 2009, Interior issued an order designed to address the impacts of climate change on the nation’s water, land, and other natural and cultural resources. The Interior order, among other things, designated eight regional Climate Change Response Centers. According to Interior, these centers will synthesize existing climate change impact data and management strategies, help resource managers put them into action on the ground, and engage the public through education initiatives. Similarly, several federal agencies recently released draft reports required by Presidential Executive Order that describe strategies for protecting and restoring the Chesapeake Bay, including addressing the impacts of climate change on the bay. In addition, the U.S. Forest Service reported that it devotes about $9 million to adaptation research and has developed a strategic framework that recognizes the need to enhance the capacity of forests and grasslands to adapt. The Chief of the Forest Service recently testified that dealing with climate change risks and uncertainties will need to be a more prominent part of the Forest Service’s management decision processes. Certain agencies have also identified specific adaptation strategies and tools for natural resource managers. For example, Interior provided a number of adaptation-related policy options for land managers in reports produced for its Climate Change Task Force, a past effort that has since been expanded upon to reflect new priorities. Similarly, a recent U.S. Climate Change Science Program report provided a preliminary review of adaptation options for climate-sensitive ecosystems and resources on federally owned and managed lands. 
In addition, the Department of Defense's Legacy Resource Management Program is working with other agencies to develop a guidance manual that will summarize available natural resource vulnerability assessment tools. In some instances, federal agencies have begun to help implement adaptation actions. A recent Congressional Research Service presentation highlighted two case studies on federal lands in which federal agencies assisted with adaptation efforts. The first is a habitat restoration project supported by the U.S. Fish and Wildlife Service (FWS) to adapt to sea level rise in the Albemarle Peninsula, North Carolina. The second focuses on increasing landscape diversity and managing biodiversity in Washington's Olympic National Forest, the site of a Forest Service Pacific Northwest Research Station. The project involved work with the Federal Highway Administration to protect watersheds and roads. In addition, the Department of Energy reported that it has assessed major water availability issues related to energy production and use, such as electrical generation and fuels production, and has identified approaches that could reduce freshwater use in the energy sector, as well as opportunities for further research and development to address questions that decision makers will need to resolve to effectively manage energy and water availability issues.

Infrastructure design and operation: A number of federal agencies are beginning to recognize that they must account for climate change impacts when building and repairing man-made infrastructure, since such impacts have implications beyond the natural environment. Many adaptation efforts related to infrastructure are at the planning stages to date. For example, the U.S. Army Corps of Engineers' adaptation initiatives include leading a team of water managers to evaluate how climate change considerations can be incorporated into activities related to water resources. These managers are also participating in an interagency group (the Climate Change and Water Working Group), which held workshops in California in spring 2007. At these workshops, water managers from federal (U.S. Geological Survey (USGS), Bureau of Reclamation, NOAA), state, local, and private agencies and organizations recommended more flexible reservoir operations, better use of forecasts, and more monitoring of real-time conditions in the watersheds. A draft report of long-term needs identified by the team was undergoing agency review in August and September 2009. In addition, EPA recently issued a guide entitled Smart Growth for Coastal and Waterfront Communities to help communities address challenges such as potential sea level rise and other climate-related hazards. Within the U.S. Department of Transportation (DOT), the Federal Highway Administration also formed a multidisciplinary internal working group to coordinate infrastructure policy and program activities, specifically to address climate change effects on transportation. Both the U.S. Army Corps of Engineers and DOT are reviewing the impacts of sea level rise on infrastructure. DOT found that a 2-foot sea level rise would affect 64 percent of the Gulf Coast's port facilities, while a 4-foot rise would affect nearly three-quarters of port facilities; a simplified sketch of this kind of exposure screening follows this discussion. In addition, the Federal Emergency Management Agency (FEMA), part of the U.S. Department of Homeland Security, is currently conducting a study on the impact of climate change on the National Flood Insurance Program, as we recommended in a 2007 GAO report.
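Exposure figures like DOT's come, in essence, from comparing facility elevations with assumed rise scenarios. The sketch below shows the structure of such a screen; the facility names and elevations are hypothetical and are not DOT's Gulf Coast data.

```python
# Minimal exposure screen: count the facilities whose elevation falls
# at or below an assumed sea level rise. The facility names and
# elevations are hypothetical; they are not DOT's Gulf Coast data.

facilities_ft = {   # facility -> elevation above current sea level, feet
    "Terminal A": 1.5,
    "Terminal B": 3.0,
    "Wharf C": 2.2,
    "Dock D": 5.5,
    "Pier E": 3.8,
}

def share_affected(elevations: dict, rise_ft: float) -> float:
    """Fraction of facilities at or below the assumed rise."""
    affected = sum(1 for elev in elevations.values() if elev <= rise_ft)
    return affected / len(elevations)

for rise in (2.0, 4.0):
    share = share_affected(facilities_ft, rise)
    print(f"{rise:.0f}-foot rise: {share:.0%} of facilities affected")
# 2-foot rise: 20%; 4-foot rise: 80% (hypothetical data)
```

A real screen of this kind would substitute surveyed elevations and add storm surge on top of the assumed rise, but the counting logic is the same.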
The Department of Energy is also working to protect critical infrastructure—such as the national laboratories and the Strategic Petroleum Reserve—by using climate impact assessments and developing guidance for management decisions that account for climate change. Public health research: Federal agencies responsible for public health matters are starting to support modeling and research efforts to assess climate change impacts on their programs and issue areas. Currently, the Centers for Disease Control and Prevention’s (CDC) Climate Change program is engaged in a number of adaptation initiatives that address various populations’ vulnerability to the adverse health effects of heat waves. For example, CDC helped develop a Web-based modeling tool to assist local and regional governments to prepare for heat waves and an extreme heat media toolkit for cities. In addition, the National Institutes of Health (NIH) formed a working group on Climate Change and Health, which aims to identify research needs and priorities and involve the biomedical research community in discussions of the health effects of climate change. Recently, NIH developed an initiative called the NIH Challenge Grants in Health and Science Research, which supports research on predictive climate change models and facilitates public health planning. Of particular interest to NIH are studies that quantify the current impacts of climate on a variety of communicable or noncommunicable diseases or studies that project the impacts of different climate and socioeconomic scenarios on health. EPA is also taking steps to ensure that public health needs are met in the context of climate change. For example, EPA helped produce an analysis that examined potential impacts of climate change on human society, opportunities for adaptation, and associated recommendations for addressing data gaps and research goals. In addition, EPA is working with agencies such as CDC, NIH, and NOAA to support the public health communities’ efforts to develop strategies for adapting to climate change. National security preparation: Federal agencies are beginning to study the potential consequences of climate change on national security. For example, the Department of Defense’s ongoing Quadrennial Defense Review is examining the capabilities of the armed forces to respond to the consequences of climate change—in particular, preparedness for natural disasters from extreme weather events, as is required by Section 951 of the National Defense Authorization Act for fiscal year 2008. This act also requires the department to develop guidance for military planners to assess the risk of projected climate change, update defense plans based on these assessments, and develop the capabilities needed to reduce future impacts. In October 2008, the Air Force participated in a Colloquium on National Security Implications of Climate Change sponsored by the U.S. Joint Forces Command. In addition, the Navy recently sponsored a Naval Studies Board study on the National Security Implications of Climate Change on U.S. Naval forces (Navy, Marine Corps, and Coast Guard), to be completed in late 2010. This study is intended to help the Navy develop future robust climate change adaptation strategies. International assistance to developing countries: Some federal agencies are supporting preliminary adaptation planning efforts internationally. For example, the U.S. 
Agency for International Development (USAID) funds climate change activities related to agriculture, water, forest, and coastal zone management in partner developing countries. To inform such activities, USAID produced two documents, an adaptation guidance manual and a coastal zone adaptation manual, which provide climate change tools and other information to planners in the developing world. In addition, USAID works with NASA to provide developing countries with climate change data to help support adaptation activities. For example, the two agencies use SERVIR, a high-tech regional satellite visualization and monitoring system for Central America, to provide a climate change scenario database, climate change maps indicating impacts on Central America’s biodiversity, a fire and smoke mapping and warning system, red tide alerts, and weather alerts. The U.S. Department of State’s and NOAA’s climate efforts also sustain adaptation initiatives worldwide. NOAA is supporting USAID programs in Asia, Latin America, and Africa by using a science-based approach to enhance governments’ abilities to understand, anticipate, and manage climate risk. In addition, Interior’s International Technical Assistance Program, funded through interagency agreements with USAID and the U.S. Department of State, provides training and technical assistance to developing countries. Governmentwide adaptation strategies: Currently, no single entity is coordinating climate change adaptation efforts across the federal government and there is a general lack of strategic coordination. However, several federal entities are beginning to develop governmentwide strategies to adapt to climate change. For example, the President’s Council on Environmental Quality (CEQ) is leading a new initiative to coordinate the federal response to climate change in conjunction with the Office of Science and Technology Policy, NOAA, and other agencies. Similarly, the U.S. Global Change Research Program (USGCRP), which coordinates and integrates federal research on climate change, has developed a series of “building blocks” that outline options for future climate change work, including science to inform adaptation. The adaptation building block includes support and guidance for federal, regional, and local efforts to prepare for and respond to climate change, including characterizing the need for adaptation and developing, implementing, and evaluating adaptation approaches. Many government authorities at the state and local levels have not yet begun to adapt to climate change. According to a recent NRC report, the response of governments at all levels, businesses and industries, and civil society is only starting, and much is still to be learned about the institutional, technological, and economic shifts that have begun. Some states have not yet started to consider mitigation or adaptation; others have developed plans but have not yet begun to implement them. However, certain governments are beginning to plan for the effects of climate change and to implement climate change adaptation measures. For example, California recently issued a draft climate adaptation strategy, which directs the state government to prepare for rising sea levels, increased wildfires, and other expected changes. A general review of state and local government adaptation planning efforts is available in two recent reports issued by nongovernment research groups. We visited three U.S. 
sites—New York City; King County, Washington; and the state of Maryland—where government officials have begun to plan for and respond to climate change impacts. The three locations are all addressing climate change adaptation to various extents. New York City is in the planning phases for its citywide efforts, although individual departments have begun to implement specific actions, such as purchasing land in New York City's watershed to improve the quality of its water supply. King County, Washington, has, among other things, completed and begun to implement a comprehensive climate change plan, which includes an adaptation component. Maryland has released the first phase of its adaptation strategy, which is focused on sea level rise and coastal storms, reflecting sectors of immediate concern.

Our analysis of these sites suggests three major factors have led these governments to act. First, natural disasters such as floods, heat waves, droughts, or hurricanes raised public awareness of the costs of potential climate change impacts. Second, leaders in all three sites used legislation, executive orders, local ordinances, or action plans to focus attention and resources on climate change adaptation. Finally, each of the governments had access to relevant site-specific information to provide a basis for planning and management efforts. This site-specific information arose from partnerships that decision makers at all three sites formed with local universities and other government and nongovernment entities. The following summaries describe the key factors that motivated these governments to act, the policies and laws that guide adaptation activities at each location, the programs and initiatives that are in place to address climate effects, the sources of site-specific information, and any partnerships that have assisted with adaptation activities.

New York City's adaptation efforts stemmed from a growing recognition of the vulnerability of the city's infrastructure to natural disasters, such as the severe flooding in 2007 that led to widespread subway closures. The development of PlaNYC—a plan to accommodate a projected population growth of 1 million people, reduce citywide carbon emissions by 30 percent, and make New York City a greener, more sustainable city by 2030—also pushed city officials to think about the future, including the need for climate change adaptation. New York City's extensive coastline and dense urban infrastructure make it vulnerable to sea level rise, flooding, and other extreme weather, including heat waves, which could become more common as a result of climate change. City officials took several steps to formalize a response to climate change. In 2008, the Mayor convened the New York City Panel on Climate Change (NPCC) to provide localized climate change projections and decision tools. The Mayor also invited public agencies and private companies to be part of the New York City Climate Change Adaptation Task Force, a public-private group charged with assessing climate effects on critical infrastructure and developing adaptation strategies to reduce these risks. The Office of Long-Term Planning and Sustainability, established by a local law in 2008, provides oversight of the city's adaptation efforts, which are part of PlaNYC. In addition to citywide efforts, a number of municipal and regional agencies have begun to address climate change adaptation in their operations.
To date, New York City's adaptation efforts typically have been implemented as facilities are upgraded or as funding becomes available. For example, the city's Department of Environmental Protection (DEP), which manages water and wastewater infrastructure, has begun to address flood risks to its wastewater treatment facilities. These and other efforts are described in DEP's 2008 Climate Change Program Assessment and Action Plan. Many of New York City's wastewater treatment plants, such as Tallman Island (see fig. 1), are vulnerable to sea level rise and flooding from storm surges because they are located in the floodplain next to the waterbodies into which they discharge. In response to this threat, DEP is, in the course of scheduled renovations, raising sensitive electrical equipment, such as pumps and motors, to higher levels to protect them from flood damage.

Other municipal departments are implementing climate change adaptation measures as well. For example, the Department of Parks and Recreation launched a pilot project in its Five Borough Technical Services Facility to experiment with different types of green roofs—vegetated plots on rooftops that absorb rainwater and moderate the effects of heat waves (see fig. 2). According to an official at the Department of Parks and Recreation, the department plans to install green roofs in some of its recreation facilities in the next few years, since these facilities will be replacing their roofs. Green roofs are part of a suite of measures the city is exploring to control stormwater at the source (the location where the rain falls), rather than pipe it elsewhere. This approach can help reduce the need for additional infrastructure investments in preparation for more intense rainstorms, investments that can be very costly and that are not always feasible in the space available under the city streets.

New York City's adaptation efforts have benefited from officials' access to site-specific information, starting with the publication of a 2001 report for USGCRP, which provided a scientific assessment of climate change effects in the New York City metropolitan region. More recently, the city, with the financial support of the Rockefeller Foundation, created NPCC. According to its co-chairs, NPCC is charged with completing several decision-support documents, which will provide decision makers with information about local climate effects. In addition, the Mayor convened the New York City Climate Change Adaptation Task Force to prepare a risk-based assessment of how climate change would affect the communication, energy, water and waste, transportation, and policy sectors, as well as the urban ecosystem and parks, and to prioritize potential response strategies. Members of the task force, several of whom represent private industries, explained that they agreed to participate in the task force because the Mayor made this issue a priority. They noted that events such as Hurricane Katrina in 2005; the power outage in August 2003, which affected New York City as well as other locations in the United States and Canada; and the 2007 subway flooding raised their awareness about the effects of climate change on their operations.

New York City partners with other state and local governments to share knowledge and implement climate change adaptation efforts. It is a charter member of the C40, a coalition of large cities around the world committed to addressing climate change. City agencies also share information with counterparts in other locations about specific concerns.
For example, DEP shares information about addressing water-related climate change effects with the state of California and the Water Utility Climate Alliance, a national coalition of water and wastewater utilities. DEP coordinates with other state and local governments to address climate change effects on its watershed, which is located outside of city limits. Similarly, transportation agencies that serve New York City, such as the Metropolitan Transit Authority and New Jersey Transit, cross local and state boundaries and require coordination on a regional scale, which New York City addresses through its multijurisdictional task force. City officials and members of NPCC stated that a coherent federal response would further facilitate the development of common objectives across local and state jurisdictions. According to officials from the King County Department of Natural Resources and Parks (DNRP), the county took steps to adapt to climate change because its leadership was highly aware of climate impacts on the county and championed the need to take action. The county commissioned an internal study in 2005 that included each department’s projection of its operations in 2050, which focused attention on the need to prepare for future climate changes. The county also sponsored a conference in 2005 that brought together scientists, local and state officials, the private sector, and the public to discuss the impacts of climate change. This conference served to educate the public and officials and spur action. Officials from DNRP noted that recent weather events increased the urgency of certain adaptive actions. For example, in November 2006, the county experienced severe winter storms that caused a series of levees to crack. The levees had long needed repair, but the storm damage helped increase support for the establishment of a countywide flood control zone district, funded by a dedicated property tax. The flood control zone district will use the funds, in part, to upgrade flood protection facilities, which will increase the county’s resilience to future flooding. In addition to more severe winter storms, the county expects that climate change will lead to sea level rise; reduced snowpack; and summertime extreme weather such as heat waves and drought, which can lead to power shortages because hydropower is an important source of power in the region. The county’s first formal step toward adaptation was a climate change plan developed in 2007. The county executive also issued several executive orders that call for, among other things, the evaluation of climate impacts in the State Environmental Policy Act reviews conducted by county departments and the consideration of global warming adaptation in county operations, such as transportation, waste and wastewater infrastructure, and land use planning. For example, King County officials told us that during the construction of the Brightwater wastewater treatment plant, DNRP’s Wastewater Treatment Division added a pipeline that could convey approximately 7 million gallons per day of reclaimed water to industrial and agricultural users upon completion in 2011. They also said that additional reclaimed water could be made available in the future as the need arises. The division is also addressing the effects of sea level rise by, for example, increasing the elevation of vulnerable facilities during design and installing flaps to prevent backflow into its pipelines. 
Additionally, in 2008, the county incorporated consideration of climate change into the revision of its Comprehensive Plan, which guides land use decisions throughout the county. King County officials told us that each county department convened internal teams that identify climate change initiatives and report to the King County Executive Action Group on Climate Change on their progress. For example, the county’s Department of Transportation Road Services Division started a Climate Change Team in 2008, which identified several initiatives in response to projections for more intense storms, including investigating new approaches to stormwater treatment. Specifically, the Road Services Division is piloting a roadside rain garden, which captures and filters rainwater using vegetation and certain types of soil, to determine whether more of such installations could improve the onsite management of stormwater runoff, as compared to a traditional engineering approach, which would pipe the water to a pond or holding vault and then discharge it to an offsite waterbody (see fig. 3). Alongside the rain garden, a permeable concrete sidewalk will absorb additional rain that would normally flow off a traditional impervious sidewalk into adjacent property. The rain garden and permeable sidewalk are considered examples of “low-impact development,” which are expected to help the county adapt to increased rainfall by reducing peak surface water flows from road surfaces by about 33 percent. The Road Services Division is also implementing other measures that could improve its response to storms, such as installing larger culverts, improving its ability to detect hazardous road conditions (for example, due to flooding), and communicating those conditions to maintenance staff and the general public. County officials receive information on climate change effects from a number of sources. The University of Washington Climate Impacts Group (CIG), funded by NOAA’s RISA program, has had a long-standing relationship with county officials and works closely with them to provide regionally specific climate change data and modeling, such as a 2009 assessment of climate impacts in Washington, as well as decision-making tools. For example, the CIG Web site provides a Climate Change Streamflow Scenario Tool, which allows water planners in the Columbia River basin to compare historical records with climate change scenarios. Similarly, according to its faculty, the Washington State University Extension Office works with the county and CIG to provide climate change information to the agricultural and forestry sectors, both of which will be increasingly affected by insect infestation due to increases in temperatures. The university’s Extension Office also provides direct technical assistance to landowners affected by these impacts. King County officials, according to the director of DNRP, also share information about climate change adaptation with other localities through several partnership efforts, including the Center for Clean Air Policy Urban Leaders Adaptation Initiative. The Secretary of the Maryland Department of Natural Resources (DNR) told us that Maryland began to work on climate change adaptation because of the state’s vulnerability to coastal flooding due to sea level rise and severe storms. The Maryland coastline is particularly vulnerable due to a combination of global sea level rise and local land subsidence, or sinking, among other factors. 
It has already experienced a sea level rise of about 1 foot in the last 100 years, which led to the disappearance of 13 Chesapeake Bay islands. According to a recent state report, a 2- to 3-foot sea level rise could submerge thousands of acres of tidal wetlands and low-lying lands, as well as Smith Island in the Chesapeake Bay. These ongoing concerns, along with widespread flooding caused by Hurricane Isabel in 2003, have increased awareness of climate change effects in the state.

Maryland officials have taken a number of steps to formalize their response to climate change effects. An executive order in 2007 established the Maryland Commission on Climate Change, which released the Maryland Climate Action Plan in 2008. As part of this effort, DNR chaired an Adaptation and Response Working Group, which issued a report on sea level rise and coastal storms. The 2008 Maryland Climate Action Plan calls for future adaptation strategy development to cover other sectors such as agriculture and human health. Maryland also enacted several legislative measures that address coastal concerns, including the Living Shoreline Protection Act of 2008, which generally requires the use of nonstructural shoreline stabilization measures instead of "hard" structures such as bulkheads and retaining walls (see fig. 4). According to a Maryland official, as sea level rises there will be a greater need for shore protection. Living shorelines provide such protection, while also maintaining coastal processes and providing aquatic habitat. The Chesapeake and Atlantic Coastal Bays Critical Area Protection law was also amended to, among other things, require the state to update the maps used to determine the boundary of the critical areas at least once every 12 years. Previously, the critical areas were based on a map drawn in 1972 that did not reflect changes caused by sea level rise or other coastal erosion processes.

According to officials from DNR, the department is modifying several existing programs to ensure that the state is taking the effects of climate change into account. For example, an official from DNR told us that it is incorporating climate change into its ranking criteria for state land conservation. Specifically, this official said that DNR plans to prioritize coastal habitat for potential acquisition according to its suitability for marsh migration, among other factors. Additionally, Maryland is providing guidance to coastal counties to assist them with incorporating the effects of climate change into their planning documents. For example, DNR funded guidance documents for three coastal counties (Dorchester, Somerset, and Worcester) on how to address sea level rise and other coastal hazards in their local ordinances and planning efforts. Furthermore, in spring 2009, DNR officials participated in a public Somerset County sea level rise workshop designed to raise the awareness of county residents. Officials discussed what sea level rise projections could mean for the county, including the inundation of some of its coastal infrastructure and salt marsh habitat (see fig. 5), and described some of the state initiatives to address these effects. Finally, officials with the DNR Monitoring and Non-Tidal Assessment Division told us they are considering expanding their monitoring of sentinel sites—pristine streams where changing conditions can help detect localized impacts of climate change.

Maryland draws on local universities, federal agencies, and others to access information relevant to climate change.
For example, in 2008, scientists from the University of Maryland chaired and participated in the Scientific and Technical Working Group of the Maryland Commission on Climate Change. Faculty from the University of Maryland also provide technical information to the state government and legislature on an ongoing basis. Maryland receives grants and additional technical assistance from the federal government and collaborates with federal agencies and local universities to collect and disseminate data relevant to climate change adaptation. Specifically, Maryland used local, state, and federal resources to map its coastline using Light Detection and Ranging technology and has made this information, as well as a number of tools that can be used by the public and decision makers, readily available in the Maryland Shorelines Online Web site. For example, an interactive mapping application called Shoreline Changes Online allows users to access historic shoreline data to determine erosion trends. Limited adaptation efforts are also taking root in other countries around the world. In 2007, the Intergovernmental Panel on Climate Change’s Fourth Assessment Report found that some adaptation measures are being implemented in both developing and developed countries, but that many of these measures are in the preliminary stages. As in the case of the state and local efforts described earlier, some of these adaptation efforts have been triggered by the recognition that current weather extremes and seasonal changes will become more frequent in the future. For example, recognizing the hazards of rising temperatures, efforts are under way in Nepal to drain the expanding Tsho Rolpa glacial lake to reduce flood risk. Similarly, in response to reduced snow cover and glacial retreat, the winter tourism industry in the European Alpine region has implemented a number of measures, such as building reservoirs to support artificial snowmaking. A number of countries have begun to assess their vulnerability to climate change impacts and formulate national responses. For example, Canada issued a report in 2008 that discusses the current and future risks and opportunities that climate change presents, primarily from a regional perspective. Australia recently issued guidance to local governments about expected climate change projections, impacts, and potential responses. In addition, under the United Nations Framework Convention on Climate Change, least-developed countries can receive funding to develop National Adaptation Programmes of Action (NAPA)—38 NAPAs had been completed as of October 2008. The NAPAs communicate the country’s priority activities addressing the urgent and immediate needs and concerns relating to adaptation to the adverse effects of climate change. In order to provide an in-depth example of a climate change adaptation effort outside of the United States, we selected the United Kingdom as a case study to better understand some of the actions that government officials can take to adapt to climate change. We selected the United Kingdom because it has initiated a coordinated climate change adaptation response at the national, regional, and local levels. Over the past decade, the issuance of prominent reports and the fallout from major weather events created awareness among government officials of the need for the United Kingdom to adapt to inevitable changes to the nation’s climate. 
For example, in 2002, the London Climate Change Partnership, a stakeholder-led group coordinated by the Greater London Authority, issued a report called London's Warming, which detailed the expected impacts of climate change and the key challenges to addressing it. In addition, the 2006 Stern Review of the economics of climate change helped accelerate the national government's efforts to adapt. These and other reports show that the United Kingdom could experience a variety of climate change effects in the future, including dry summers, wet winters, coastal erosion, and sea level rise. In fact, the United Kingdom is already experiencing severe weather events. For example, in 2006, a dry period brought about water restrictions in London. The following year, large-scale flooding in the United Kingdom highlighted the need to respond to climate change and led to the Pitt Review, which examined resilience to flooding in the United Kingdom. In addition, the nation's insurance sector, which currently offers comprehensive flood insurance coverage, has raised concerns about the growing flood risk and asked for government action.

In response to these concerns, the United Kingdom enacted national climate change legislation in 2008. The law requires the British Secretary of State for Environment, Food and Rural Affairs to report periodically to Parliament with a risk assessment of the current and predicted impacts of climate change and to propose programs and policies for climate change adaptation. The law also authorizes the national government to require certain public authorities, such as water companies, to report on their assessment of the current and predicted impact of climate change in relation to the authority's functions, as well as their proposals and policies for adapting to climate change. According to officials of the Department for Environment, Food and Rural Affairs (DEFRA), the government department responsible for leading action on adaptation, an independent expert subcommittee of the Committee on Climate Change is to provide technical advice and oversee these efforts. The United Kingdom is also working with the European Union to incorporate climate change into its decisions and policies.

In the United Kingdom, different levels of government report working together to ensure that climate change considerations are incorporated into decision making. For example, the Government Office for London chairs the national government's Local and Regional Adaptation Partnership Board, which aims to facilitate climate change adaptation at local and regional levels by highlighting best practices and encouraging information sharing among local and regional officials. According to DEFRA officials, the primary role of the national government is to provide information, raise awareness, and encourage others to take action, not to dictate how to adapt. In response to the United Kingdom's 2008 Climate Change Act, DEFRA officials said they are preparing a national risk assessment and conducting economic analyses to quantify the costs and benefits of adaptive actions. DEFRA officials said that these steps are to assist adaptation efforts undertaken by the national government, local government officials, and the private sector. Adaptation activities are driven in part by the use of national performance measures, which affect local funding, and by national government programs, according to DEFRA officials.
The national government recently introduced a national adaptation indicator, which measures how well local governments are adapting to climate change risk. Performance measured by this and other indicators is the basis for national grants to local governments. Individual government agencies are also developing and implementing their own plans to address climate change effects. For example, the Environment Agency, which is responsible for environmental protection in England and Wales, as well as flood defense and water resource management, has initiatives in place to reduce water use to increase resilience to drought. It is also addressing flood risk, most notably with the Thames Barrier, a series of flood gates that protect London from North Sea storms (see fig. 6).

The United Kingdom's climate change initiatives are built around locally relevant information generated centrally by two primary sources. The United Kingdom Climate Impacts Programme (UKCIP), a primarily publicly funded program housed in Oxford University, generates stakeholder-centered climate change decision-making tools and facilitates responses to climate change. UKCIP works with national, regional, and local users of climate data to increase awareness and encourage action. For example, Hampshire County, in southern England, used climate scenarios generated by UKCIP to complete a test of the county's sensitivity to weather and other emergency scenarios. The Met Office Hadley Centre, a government-funded climate research center, generates climate science information and develops models. According to a United Kingdom official, the Met Office Hadley Centre generated the bulk of the science for the UK Climate Projections 2009, while UKCIP, among others, provided user guidance and training to facilitate the use of these data.

Regional and international partnerships have also played a significant role in providing guidance to further climate change adaptation efforts in the United Kingdom. For example, Government Office for London officials told us that the Three Regions Climate Change Group (which includes the East of England, South East of England, and London) has set up a group to promote retrofitting of existing homes. The group produced a report, which provided a checklist for developers, case studies, a good practices guide, and a breakdown of the costs involved. On an international scale, Greater London Authority officials stated that they are working with cities such as Tokyo, Toronto, and New York City to share knowledge about climate change adaptation. In addition, a Hampshire County Council official told us about the county's participation in the European Spatial Planning—Adapting to Climate Events project, which provided policy guidance and decision-making tools to governments from several countries on incorporating adaptation into planning decisions.

The challenges faced by federal, state, and local officials in their efforts to adapt fell into three categories, based on our analysis of questionnaire results, site visits, and available studies. First, available attention and resources are focused on more immediate needs, making it difficult for adaptation efforts to compete for limited funds. Second, insufficient site-specific data, such as local projections of expected changes, makes it hard to predict the impacts of climate change and thus hard for officials to justify the current costs of adaptation efforts for potentially less certain future benefits.
Third, adaptation efforts are constrained by a lack of clear roles and responsibilities among federal, state, and local agencies. Competing priorities limit the ability of officials to respond to the impacts of climate change, based on our analysis of Web-based questionnaire results, site visits, and available studies. We asked federal, state, and local officials to rate specific challenges related to awareness and priorities as part of our questionnaire. Table 2 presents the percentage of federal, state, and local respondents who rated these challenges as very or extremely challenging in our questionnaire. Appendix III includes a more detailed summary of federal, state, and local officials’ responses to the questionnaire. The highest-rated challenge identified by respondents was an overall lack of funding for adaptation efforts. This problem is compounded by competition from more immediate concerns. Lack of funding: The government officials who responded to our questionnaire identified the lack of funding for adaptation efforts as both the top challenge related to awareness and priorities and the top overall challenge explored in our questionnaire. Several respondents wrote that lack of funding limited their ability to identify and respond to the impacts of climate change, with one noting, for example, that “we have the tools, but we just need the funding and leadership to act.” A state official similarly said that “we need a large and dedicated funding source for adaptation. It’s going to take 5 to 10 years of funding to get a body of information that will help planning in the long run. We need to start doing that planning and research now.” Several studies also suggested that it will be difficult, if not impossible, for any agency to approach the tasks associated with adaptation without permanent, dedicated funding. For example, a recent federal report on adaptation options for climate-sensitive ecosystems and resources stated that a lack of sufficient resources may pose a significant barrier to adaptation efforts. Officials also cited lack of funding as a challenge during our site visits. For example, King County officials said that they do not have resources budgeted directly for addressing climate change. Instead, the county tries to meet its adaptation goals by shifting staff and reprioritizing goals. The county officials said it was difficult to take action without dedicated funding because some adaptation options are perceived to be very expensive, and that if available funding cannot support the consideration of adaptation options, then the old ways of doing business would remain the norm. Competing priorities: Respondents’ concerns over an overall lack of funding for adaptation efforts were further substantiated, and perhaps explained, by their ratings of challenges related to the priority of adaptation relative to other concerns. Specifically, about 71 percent (128 of 180) of the respondents rated the challenge “non-adaptation activities are higher priorities” as very or extremely challenging. The responses of federal, state, and local respondents differed for this challenge. Specifically, about 79 percent (37 of 47) of state officials and nearly 76 percent (44 of 58) of local officials who responded to the question rated “non-adaptation activities are higher priorities” as very or extremely challenging, compared with about 61 percent (44 of 72) of the responding federal officials.
Several federal, state, and local officials noted in their narrative comments in our questionnaire how difficult it is to convince managers of the need to plan for long-term adaptation when they are responsible for more urgent concerns that have short-term decision-making time frames. One federal official explained that “it all comes down to resource prioritization. Election and budget cycles complicate long-term planning such as adaptation will require. Without clear top-down leadership setting this as a priority, projects with benefits beyond the budget cycle tend to get raided to pay current-year bills to deliver results in this political cycle.” Several other officials who responded to our questionnaire expressed similar sentiments. A recent NRC report similarly concluded that, in some cases, decision makers do not prioritize adaptation because they do not recognize the link to climate change in the day-to-day decisions that they make. Our August 2007 report on climate change on federal lands shows how climate change impacts compete with more immediate priorities for the attention of decision makers. This report found that resource management agencies did not, at that time, make climate change a priority, nor did their strategic plans specifically address climate change. Resource managers explained that they had a wide range of responsibilities and that without their management designating climate change as a priority, they focused first on near-term priorities. Our questionnaire results and site visits demonstrate that public awareness can play an important role in the prioritization of adaptation efforts. About 61 percent (113 of 184) of the officials who responded to our questionnaire rated “lack of public awareness or knowledge of adaptation” as either very or extremely challenging. The need to adapt to climate change is a complicated issue to communicate to the public because the impacts vary by location and may occur well into the future. For example, officials in Maryland told us that, while the public may be aware that climate change will affect the polar ice cap, people do not realize that it will also affect Maryland. New York City officials said that it is easier to engage the public once climate change effects are translated into specific concerns, such as subway flooding. They said the term climate change adaptation can seem too abstract to the public. As summarized in table 3 and corroborated by our site visits and available studies, a lack of site-specific information—including information about the future benefits of adaptation activities—limits the ability of officials to respond to the impacts of climate change. See appendix III for a more detailed summary of federal, state, and local officials’ responses to our Web-based questionnaire. These challenges generally fit into two main categories: (1) the difficulty of justifying the current costs of adaptation with limited information about future benefits and (2) the difficulty of translating climate data—such as projected temperature and precipitation changes—into information that officials need to make decisions. Justifying current costs with limited information about future benefits: Respondents rated “justifying the current costs of adaptation efforts for potentially less certain future benefits” as the greatest challenge related to information and as the second greatest of all the challenges we asked about.
They rated the “size and complexity of future climate change impacts” as the second greatest challenge related to information. These concerns are not new. In fact, a 1993 report on climate change adaptation by the Congressional Office of Technology Assessment posed the following question within its overall discussion of the issue: “why adopt a policy today to adapt to a climate change effect that may not occur, for which there is significant uncertainty about impacts, and for which benefits of the anticipatory measure may not be seen for decades?” Several officials shared similar reactions in written responses to our questionnaire. For example, one local official asked, “How do we justify added expenses in a period of limited resources when the benefits are not clear?” While the costs of policies to mitigate and adapt to climate change may be considerable, it is difficult to estimate the costs of inaction—costs that could be much greater, according to a recent NRC report. This report cites the long time horizon associated with climate change, coupled with deep uncertainties associated with forecasts and projections, among other issues, as aspects of climate change that are challenging for decision making. Several officials who responded to our questionnaire noted similar concerns. For example, one federal official stated that decision makers needed to confront “the reality that the future will not echo the past and that we will forever be managing under future uncertainty.” Of particular importance in adaptation are planning decisions involving physical infrastructure projects, which require large capital investments and which, by virtue of their anticipated lifespan, will have to be resilient to changes in climate for many decades. The long lead time and long life of large infrastructure investments require such decisions to be made well before climate change effects are discernible. For example, the United Kingdom Environment Agency’s Thames 2100 Plan, which was released for consultation in April 2009, maps out the maintenance and operational needs of the Thames Barrier until 2070, at which point major changes will be required. Since constructing flood gates is a long-term process (the current barrier was finished 30 years after officials first identified a need for it), officials said they need the information now, even if the threat will not materialize until later. Translating climate data into site-specific information: The process of providing useful information to officials making decisions about adaptation can be summarized in several steps. First, data from global-scale models must be “downscaled” to provide climate information at a geographic scale relevant to decision makers. About 74 percent (133 of 179) of the officials who responded to our questionnaire rated “availability of climate information at relevant scale (i.e., downscaled regional and local information)” as very or extremely challenging. In addition, according to one federal respondent, “until we better understand what the impacts of climate change will be at spatial (and temporal) scales below what the General Circulation Models predict for the global scale, it will be difficult to identify specific adaptation strategies that respond to specific impacts.” Our August 2007 report on climate change on federal lands demonstrated that resource managers did not have sufficient site-specific information to plan for and manage the effects of climate change on the federal resources they oversee.
In particular, the managers lacked computational models for local projections of expected changes. For example, at that time, officials at the Florida Keys National Marine Sanctuary said that they lacked adequate modeling and scientific information to enable managers to predict change on a small scale, such as that occurring within the sanctuary. Without such models, the managers’ options were limited to reacting to already-observed effects. Second, climate information must be translated into impacts at the local level, such as increased stream flow. About 75 percent (136 of 182) of the respondents rated “translating available climate information (e.g., projected temperature, precipitation) into impacts at the local level (e.g., increased stream flow)” as very or extremely challenging. Some respondents and officials interviewed during our site visits said that it is challenging to link predicted temperature and precipitation changes to specific impacts. For example, one federal respondent said that “we often lack fundamental information on how ecological systems/species respond to non-climate change related anthropogenic stresses, let alone how they will respond to climate change.” Such predictions may not easily or directly match the information needs that could inform management decisions. For example, Maryland officials told us they do not have information linking climate model outputs, such as temperature and precipitation changes, to biological impacts, such as changes to tidal marshes. Similarly, King County officials said they are not sure how to translate climate change information into effects on salmon recovery efforts. Specifically, they said that there is incomplete information about how climate change may affect stream temperatures, stream flows, and other factors important to salmon recovery. However, multiple respondents said that it was not necessary to have specific, detailed, downscaled modeling to manage for adaptation in the short term. For example, one federal respondent said that although modeling projections will get better over time, there will always be elements of uncertainty in how systems and species will react to climate change. Interestingly, federal, state, and local respondents perceived the challenges posed by site-specific information needs differently. About 85 percent (60 of 71) of the federal officials who responded to the question rated “translating available climate information into impacts at the local level” as very or extremely challenging, compared to around 75 percent (35 of 47) of the state officials and around 68 percent (40 of 59) of the local officials who responded. Third, local impacts must be translated into costs and benefits, since this information is required for many decision-making processes. Almost 70 percent (126 of 180) of the respondents to our questionnaire rated “understanding the costs and benefits of adaptation efforts” as very or extremely challenging. As noted by one local government respondent, it is important to understand the costs and benefits of adaptation efforts so they can be evaluated relative to other priorities. In addition, a federal respondent said that tradeoffs between costs and benefits are an important component of making decisions under uncertainty. Fourth, decision makers need baseline monitoring data to evaluate adaptation actions over time.
Nearly 62 percent (113 of 181) of the respondents to our questionnaire rated the “lack of baseline monitoring data to enable evaluation of adaptation actions (i.e., inability to detect change)” as very or extremely challenging, one of the lower ratings for this category of challenges. As summarized by a recent NRC report, officials will need site-specific and relevant baselines of environmental, social, and economic information against which past and current decisions can be monitored, assessed, and changed. Future decision-making success will be judged on how quickly and effectively numerous ongoing decisions can be adjusted to changing circumstances. For example, according to Maryland officials, the state lacks baseline data on certain key Chesapeake Bay species such as blue crab and striped bass, so it will be difficult to determine how climate change will affect them or if proposed adaptation measures were successful. Similarly, our August 2007 report on climate change on federal lands showed that resource managers generally lacked detailed inventories and monitoring systems to provide them with an adequate baseline understanding of the plant and animal species that existed on the resources they manage. Without such information, it was difficult for managers to determine whether observed changes were within the normal range of variability. A lack of clear roles and responsibilities for addressing adaptation across all levels of government limits adaptation efforts, based on our analysis of federal, state, and local officials’ responses to our Web-based questionnaire, site visits, and relevant studies. Table 4 presents respondents’ views on how challenging different aspects of the structure and operation of the federal government are to adaptation efforts. See appendix III for a more detailed summary of federal, state, and local officials’ responses to our Web-based questionnaire. These challenges are summarized in two general categories: (1) lack of clear roles and responsibilities and (2) federal activities that constrain adaptation efforts. Lack of clear roles and responsibilities: “A lack of clear roles and responsibilities for addressing adaptation across all levels of government (i.e., adaptation is everyone’s problem but nobody’s direct responsibility)” was identified by respondents as the greatest challenge related to the structure and operation of the federal government. Several respondents elaborated on their rating. For example, according to one state official, “there is a power struggle between agencies and levels of government rather than a lack of clear roles. Everyone wants to take the lead rather than working together in a collaborative and cohesive way.” One local official said he “can’t emphasize enough how the lack of coordination between agencies at the federal (and state) level severely complicates our abilities at the local level.” Several respondents also noted that there is no element within the federal government charged with facilitating a collaborative response. Our questionnaire results show that local and state respondents consider the lack of clear roles and responsibilities to be a greater challenge than do federal respondents. Specifically, about 80 percent (48 of 60) of local officials and about 67 percent (31 of 46) of state officials who responded to the question rated the lack of clear roles and responsibilities as either very or extremely challenging, compared with about 61 percent (42 of 69) of the responding federal officials. 
This lack of coordination and “institutional fragmentation” are serious challenges to adaptation efforts because clear roles are necessary for a large-scale response to climate change. As stated by one local government respondent, agencies “have numerous, overlapping jurisdictions and authorities, many of which have different (sometimes competing) mandates. If left to plan independently, they’ll either do no adaptation planning or, if they do, likely come up with very different (and potentially conflicting) adaptation priorities.” A recent NRC report comes to similar conclusions, noting that collaboration among agencies can be impeded by different enabling laws, opposing missions, or incompatible budgetary rules. Such barriers—whether formalized or implicit—can lead to disconnects, conflicts, and turf battles rather than productive cooperation, according to this report. About 52 percent (92 of 176) of the respondents to our questionnaire rated the “lack of federal guidance or policies on how to make decisions related to adaptation” as very or extremely challenging. Their views echo our August 2007 report, which noted that federal resource managers were constrained by limited guidance about whether or how to address climate change and, therefore, were uncertain about what actions, if any, they should take. In general, resource managers from all of the agencies we reviewed for that report said that they needed specific guidance to incorporate climate change into their management actions and planning efforts. For example, officials from several federal land and water resource management agencies said that guidance would help resolve differences in their agencies about how to interpret broad resource management authorities with respect to climate change and give them an imperative to take action. A recent federal report on adaptation options for climate-sensitive ecosystems and resources reinforced these points. It noted that, as resource managers become aware of climate change and the challenges it poses, a major limitation is lack of guidance on what steps to take, especially guidance that is commensurate with agency cultures and the practical experiences that managers have accumulated from years of dealing with other stresses, such as droughts and fires. Our questionnaire results indicate that local government respondents consider the lack of federal guidance to be a greater challenge than state or federal respondents do. Specifically, about 65 percent (39 of 60) of local officials who responded to the question rated the “lack of federal guidance or policies on how to make decisions related to adaptation” as either very or extremely challenging, compared to about 41 percent (19 of 46) of state officials and nearly 49 percent (33 of 67) of the federal officials who responded. Federal activities that constrain adaptation efforts: Another challenge related to the structure and operation of the federal government is the existence of federal policies, programs, or practices that hinder adaptation efforts. While not the top challenge in the category, “existing federal policies, programs, or practices that hinder adaptation efforts”—which was rated as very or extremely challenging by about 43 percent (64 of 150) of the officials who responded to our questionnaire—is an important issue, as indicated by a wealth of related written comments submitted by respondents, comments from officials at our site visits, and a number of related studies.
Our work shows how, at least in some instances, federal programs may limit adaptation efforts. Our 2007 climate change-related report on FEMA’s National Flood Insurance Program and the U.S. Department of Agriculture’s (USDA) Federal Crop Insurance Corporation, which insures crops against drought or other weather disasters, contrasted the experience of public and private insurers. We found that many major private insurers were incorporating some near-term elements of climate change into their risk management practices. In addition, we found that some private insurers were approaching climate change at a strategic level by publishing reports outlining the potential industrywide impacts and strategies to proactively address the issue. In contrast, our report noted that the agencies responsible for the nation’s key federal insurance programs had done little to develop the kind of information needed to understand their programs’ long-term exposure to climate change for a variety of reasons. As a FEMA official explained in that report, the National Flood Insurance Program is designed to assess and insure against current—not future—risks. Unlike the private sector, neither this program nor the Federal Crop Insurance Corporation had analyzed the potential impacts of an increase in the frequency or severity of weather-related events on their operations. At our site visit, Maryland officials told us that FEMA’s outdated delineation of floodplains, as well as its failure to consider changes in floodplain boundaries due to sea level rise, is allowing development in areas of Maryland that are vulnerable to sea level rise because local governments rely on FEMA’s maps for planning purposes. Both FEMA and USDA have taken recent steps to address these concerns and have committed to study these issues further and report to Congress, with USDA estimating completion by December 31, 2009. Officials who responded to our questionnaire also identified several federal laws that hinder climate change efforts. A state official noted that many federal laws, such as the Endangered Species Act, the Clean Water Act, and the Clean Air Act, were passed before the effects of climate change were recognized. A federal official stated that federal environmental laws may need to be amended to provide greater authority for agencies to practice adaptive management. The official noted that federal laws promoting development may also warrant re-examination to the extent they provide incentives that run counter to prudent land and resource planning in the climate change context. One federal respondent stated that federal laws, regulations, and policies assume that long-term climate is stable and that species, ecosystems, and water resources can be managed to maintain the status quo or to restore them to prior conditions. This official observed that these objectives may no longer be achievable as climate change intensifies in the coming decades. A state official similarly noted that because of the effects of climate change, maintenance of the resource management status quo in any given area may no longer be possible. Part of the problem may lie in the inherent tension between the order of legal frameworks and the relative chaos of natural systems, which one legal commentator explained as follows: “Lawyers like rules. We like enforceable rules. We want our rules to be optimal, tidy, and timeless….
Collaborative ecosystem management, by contrast, is often messy, elaborate, cumbersome, ad hoc, and defiantly unconventional.” Several officials who responded to our questionnaire expressed similar concerns related to climate change adaptation. For example, one federal official stated that existing laws “were built for the status quo, but we now must re-engineer the entire legal framework to deal with the ongoing, perpetual, and rapid change. A systems view is essential in order to manage change optimally.” Potential federal actions for addressing challenges to adaptation efforts fall into three areas, based on our analysis of questionnaire results, site visits, and available studies: (1) federal training and education initiatives that could increase awareness among government officials and the public about the impacts of climate change and available adaptation strategies; (2) actions to provide and interpret site-specific information that could help officials understand the impacts of climate change at a scale that would enable them to respond; and (3) steps Congress and federal agencies could take to encourage adaptation by setting priorities and re-evaluating programs that hinder adaptation efforts. Federal training and education initiatives would assist adaptation efforts, based on our analysis of our Web-based questionnaire, site visits, and relevant studies. Table 5 presents potential federal government actions related to awareness and priorities as rated by federal, state, and local officials who responded to our questionnaire. See appendix III for a more detailed summary of federal, state, and local officials’ responses to our Web-based questionnaire. We present these potential federal actions in three general categories: (1) training programs that could help government officials to develop more effective and better coordinated adaptation programs; (2) development of specific policy options for government officials; and (3) public education efforts to increase the public’s understanding of climate change issues and the need to begin investing in preparatory measures. Training for government officials: Training efforts could help officials collaborate and share insights for developing and implementing adaptation initiatives. Respondents rated the “development of regional or local educational workshops for relevant officials that are tailored to their responsibilities” as the most useful potential federal government action related to awareness and priorities. According to one federal official, “it is clear that training and communication may be the two biggest hurdles we face. We have the capabilities to adapt and to forecast scenarios of change and potential impacts of alternative adaptation options. We lack the will to exercise this capacity. The lack of that will is traceable to ignorance, sometimes willfully maintained.” This respondent calls for “a massive educational process…designed and implemented all the way from the top-end strategic thinkers down to the ranks of tactical implementers of change and adaptation options.” Training on how to make decisions under uncertainty would be particularly useful for frontline actors, such as city and county governments. For example, Maryland held an interactive summit on building “coast-smart communities,” which brought together federal, state, and local officials involved with planning decisions in coastal areas.
The summit employed role-playing to introduce participants to critical issues faced by coastal communities as a result of climate change. In addition, New York City DEP officials noted that their membership in the Water Utility Climate Alliance provided them with an important way to exchange information with water managers from across the nation. Several respondents said that the federal government could play an important role in training officials at all levels of government. For example, one state official said that “because so many of us are only in the early stages of becoming aware of this issue, I think that a well organized training where many people would be learning the same thing and in the same way is important.” However, a different state official questioned whether federal training would be effective for state and local officials, explaining that federal officials may not have enough knowledge about specific state and local challenges. The official thought that a better option may be to hold regional conferences with diverse groups of federal, state, and local officials so that those who are not up to speed can observe and learn from those who are. Interestingly, about 84 percent (38 of 45) of the state officials and nearly 75 percent (53 of 71) of the federal officials who responded to the question rated the “development of regional or local educational workshops for relevant officials that are tailored to their responsibilities” as very or extremely useful, compared to about 67 percent (42 of 63) of the local officials that responded. Development of lists of policy options for government officials: The development of lists of “no regrets” actions—actions in which the benefits exceed the costs under all future climate scenarios—and other potential adaptation policy options could inform officials about efforts that make sense to pursue today and are “worth doing anyway.” The Intergovernmental Panel on Climate Change defines a “no regrets” policy as one that would generate net social and economic benefits irrespective of whether or not anthropogenic climate change occurs. Such policies could include energy conservation and efficiency programs or the construction of green roofs in urban areas to absorb rainwater and moderate the effects of heat waves. About 73 percent (133 of 181) of the officials who responded to our questionnaire rated the “development of lists of ‘no regrets’ actions (i.e., actions in which the benefits exceed the costs under all future climate scenarios)” as either very or extremely useful. The costs of no regrets strategies may be easier to defend, and proposing such strategies could be a way to initiate discussions of additional adaptation efforts. Likewise, about 71 percent (129 of 181) of respondents rated the “development of a list of potential climate change adaptation policy options” as either very or extremely useful. However, several respondents questioned whether national lists of adaptation options would be useful, noting that adaptation is inherently local or regional in nature. For example, one federal official said that “it is unclear that it would be possible to develop a list of actions that truly is no regrets for all scenarios, all places, and all interested parties.” This view suggests that adaptation options—“no regrets” or otherwise—may vary based on the climate impacts observed or projected for different geographic areas. 
As stated by one local official, “a national list would need to collect options from all regions across many sectors to be useful.” Regarding the prioritization of potential adaptation policy options, about 62 percent (113 of 183) of the respondents rated the “prioritization of potential climate change adaptation options” as very or extremely useful, the lowest-rated potential action related to awareness and priorities. Several respondents were adamant that prioritization should occur at the local level because of the variability of local impacts, and others said that federal agencies should assist such efforts, but not direct them. According to one state official, federal efforts “should recognize and meet the needs of states and local governments. They should not…dictate policy.” Interestingly, local officials who responded to our questionnaire rated prioritization of policy options as more useful than federal or state officials did. Specifically, about 75 percent (47 of 63) of the local officials who responded to the question said that federal prioritization of potential climate change adaptation options would be very or extremely useful, compared to nearly 57 percent (40 of 70) and about 51 percent (24 of 47) of federal and state officials, respectively. Public education: About 70 percent (129 of 184) of the respondents rated the “creation of a campaign to educate the public about climate change adaptation” as very or extremely useful. A variety of federal, state, and local programs are trying to fill this educational void, at least in areas of the country that are actively addressing adaptation issues. For example, the Chesapeake Bay National Estuarine Research Reserve (partially funded by NOAA) provides education and training on climate change to the public and local officials in Maryland. During a recent workshop, Maryland state officials provided local officials and the public in Somerset County with information on the effects of sea level rise. The workshop highlighted the need to incorporate information about sea level rise in the county’s land use plans, given that sea level rise is expected to inundate a significant part of the county. In addition, the University of Washington’s Climate Impacts Group (CIG)—a program funded under NOAA’s Regional Integrated Sciences and Assessment program—has been interacting with the public about climate change issues, including adaptation, for over 10 years, according to officials we interviewed as part of our site visit to King County, Washington. Considerable local media coverage of environmental issues has also helped raise public awareness in King County. Federal actions to provide and interpret site-specific information would help address challenges associated with adaptation efforts, based on our analysis of our Web-based questionnaire, site visits, and relevant studies. Table 6 presents potential federal government actions related to information as rated by federal, state, and local officials who responded to our questionnaire. See appendix III for a more detailed summary of federal, state, and local officials’ responses to our Web-based questionnaire.
We discuss these potential federal actions below in three general categories: (1) the development of regional, state, and local climate change impact and vulnerability assessments; (2) the development of processes and tools to access, interpret, and apply climate information; and (3) the creation of a federal service to consolidate and deliver climate information to decision makers to inform adaptation efforts. Developing impact and vulnerability assessments: Respondents rated the “development of state and local climate change impact and vulnerability assessments” as the most useful action the federal government could take related to information. The development of regional assessments was also rated as similarly useful by respondents. Such assessments allow officials to build adaptation strategies based on the best available knowledge about regional or local changes and how those changes may affect natural and human systems. Nearly 94 percent (43 of 46) of the state officials and about 83 percent (52 of 63) of the local officials who responded to the question rated the development of state and local climate change impact and vulnerability assessments as either very or extremely useful, compared to about 69 percent (49 of 71) of federal officials. Officials at all of the sites we visited reported relying on impact and vulnerability assessments to drive policy development and focus on the most urgent adaptation needs. For example, King County officials told us that regional climate modeling information provided by CIG was used to conduct a vulnerability assessment of wastewater treatment facilities in the county. In addition, Maryland officials said that the state’s coastal adaptation initiative relied on localized impact and vulnerability information provided by the Maryland Commission on Climate Change’s Scientific and Technical Working Group, a working group consisting of scientists and other relevant stakeholders. Development of processes and tools to help officials use information: About 80 percent (148 of 185) of respondents rated the “development of processes and tools to help access, interpret, and apply available climate information” as very or extremely useful. Even with available regional and local climate data, officials will need tools to interpret what the data mean for decision making. For example, CIG officials told us of the strong need for Web-based decision-making tools to translate climate impacts into information relevant for decision makers. King County’s Department of Natural Resources and Parks has developed a tool that uses data generated by CIG to help wastewater facilities model flooding due to sea level rise and storms. United Kingdom officials noted that UKCIP provides similar tools to assist decision makers there. The identification and sharing of best practices from other jurisdictions could also help meet the information needs of decision makers. Around 80 percent (126 of 157) of respondents rated the “identification and sharing of best practices” as very or extremely useful. Best practices refer to the processes, practices, and systems identified in organizations that performed exceptionally well and are widely recognized as improving performance and efficiency in specific areas. In a range of prior work, we have found that successfully identifying and applying best practices can reduce expenses and improve organizational efficiency.
Several officials who responded to our questionnaire said that learning the best practices of others could be useful in efforts to develop adaptation programs. Federal climate service: About 61 percent (107 of 176) of respondents rated the “creation of a federal service to consolidate and deliver climate information to decision makers to inform adaptation efforts” as very or extremely useful. According to two pending bills in Congress that would establish a National Climate Service within NOAA, its purpose would be to advance understanding of climate variability and change at the global, national, and regional levels and support the development of adaptation and response plans by federal agencies and state, local, and tribal governments. Respondents offered a range of potential strengths and weaknesses for such a service. Several said that a National Climate Service would help consolidate information and provide a single information resource for local officials, and others said that it would be an improvement over the current ad hoc system. A climate service would avoid duplication and establish an agreed-upon set of climate information with uniform methodologies, benchmarks, and metrics for decision making, according to some officials. According to one federal official, consolidating scientific, modeling, and analytical expertise and capacity could increase efficiency. Some officials similarly noted that with such consolidation of information, individual agencies, states, and local governments would not have to spend money obtaining climate data for their adaptation efforts. Others said that it would be advantageous to work from one source of information instead of different sources of varying quality. Importantly, some officials said that a National Climate Service would demonstrate a federal commitment to adaptation and provide a credible voice and guidance to decision makers. Other respondents, however, were less enthusiastic. Some voiced skepticism about whether it was feasible to consolidate climate information, and others said that such a system would be too rigid and might get bogged down in lengthy review processes. Furthermore, certain officials said building such capacity may not be the most effective place to focus federal efforts because the information needs of decision makers vary so much by jurisdiction. Several officials noted that climate change is an issue that requires a multidisciplinary response and a single federal service may not be able to supply all of the necessary expertise. For example, one federal official stated that the information needs of Bureau of Reclamation water managers are quite different from the needs of Bureau of Land Management rangeland managers, which are different from the needs of all other resource management agencies and programs. The official said that it seems highly unlikely that a single federal service could effectively identify and address the diverse needs of multiple agencies. Several respondents also said that having one preeminent source for climate change information and modeling could stifle contrary ideas and alternative viewpoints. Finally, several officials who responded to our questionnaire were concerned that a National Climate Service could divert attention and resources from current adaptation efforts by creating duplicative processes rather than making use of existing structures.
A recent NRC report recommends that the federal government’s adaptation efforts should be undertaken through a new integrated interagency initiative with both service and research elements, but that such an initiative should not be centralized in a single agency. Doing so, according to this report, would disrupt existing relationships between agencies and their constituencies and formalize a separation between the emerging science of climate response and fundamental research on climate and the associated biological, social, and economic phenomena. Furthermore, the report states that a National Climate Service located in a single agency and modeled on the weather service would by itself be less than fully effective for meeting the national needs for climate-related decision support. The NRC report also notes that such a climate service would not be user-driven and so would likely fall short in providing needed information, identifying and meeting critical needs for research for and on decision support, and adapting adequately to changing information needs. Federal actions to clarify the roles and responsibilities for government agencies could encourage adaptation efforts, based on our analysis of questionnaire results, site visits, and available studies. Table 7 presents potential federal actions related to the structure and operation of the federal government, as rated by the federal, state, and local officials who responded to our Web-based questionnaire. See appendix III for a more detailed summary of federal, state, and local officials’ responses to our Web-based questionnaire. As discussed below, these potential federal actions can be grouped into three areas: (1) new national adaptation initiatives, (2) review of programs that hinder adaptation efforts, and (3) guidance for how to incorporate adaptation into existing decision-making processes. New national adaptation initiatives: Our questionnaire results identified the “development of a national adaptation fund to provide a consistent funding stream for adaptation activities” as the most useful federal action related to the structure and operation of the federal government. This result is not surprising, given that lack of funding was identified as the greatest challenge to adaptation efforts. One local official said that “funding for local governments is absolutely required. Local budgets are tight and require external stimulus for any hope of adaptation strategies to be implemented.” Several state respondents noted that none of the other potential policy options are maximally useful unless there is also consistent funding available to implement them. Overall, about 98 percent (45 of 46) of state officials and nearly 88 percent (56 of 64) of the local officials who responded to the question rated the development of a national adaptation fund to provide a consistent funding stream for adaptation activities as very or extremely useful, compared to about 71 percent (47 of 66) of federal officials. About 71 percent (129 of 181) of the officials who responded to our questionnaire rated the “development of a national adaptation strategy that defines federal government priorities and responsibilities” as very or extremely useful. 
As noted by a federal official who responded to our questionnaire, the cost of responding to a changing climate will be paid one way or another—either through ad hoc responses to emergencies or through a coordinated effort at the federal level guided by the best foresight and planning afforded by the current science. According to this official, a strategic approach may cost less than reactive policies in the long term and could be more effective. Officials we spoke with at our site visits and officials who responded to our questionnaire said that a coordinated federal response would also demonstrate a federal commitment to adaptation. About 59 percent (107 of 181) of respondents rated the “development of a climate change extension service to help share and explain available information” as very or extremely useful. A climate change extension service could operate in the same way as USDA’s Cooperative State Research, Education, and Extension Service, with land grant universities and networks of local or regional offices staffed by experts providing useful, practical, and research-based information to agricultural producers, among others. Such a service could be responsible for educating private citizens, city planners, and others at the local level whose responsibilities are climate sensitive. For example, Maryland Forest Service officials noted that the Maryland Cooperative Extension Service provides training and information on the significance of climate change. Several respondents cautioned that whatever is done at the federal level should be consistently and adequately funded. About 54 percent (89 of 166) of respondents rated the “creation of a centralized federal government structure to coordinate adaptation funding” as very or extremely useful. While some cautioned that such a structure could limit the flexibility of existing federal, state, and local programs, others said that there was a need for more coordinated funding. Support for the idea, however, varied by level of government. Specifically, about 73 percent (41 of 56) of the local and almost 55 percent (23 of 42) of the state officials who responded to this question rated the “creation of a centralized federal government structure to coordinate adaptation funding” as either very or extremely useful, compared to only about 35 percent (23 of 65) of the federal respondents. Reviewing programs that hinder adaptation: About 68 percent (122 of 180) of the respondents said it would be very or extremely useful to systematically review the kinds of programs, policies, and practices discussed earlier in this report that may hinder adaptation efforts. Nearly 75 percent (46 of 61) of the local officials and about 70 percent (32 of 46) of the state officials who responded to the question rated the “review of existing programs to identify and modify policies and practices that hinder adaptation efforts” as very or extremely useful, compared to about 59 percent (41 of 70) of federal officials. One state official urged a review of both programs and laws, stating that “entrenched practices must be adapted to new realities.” Our May 2008 report on the economics of climate change also identified actions that could assist officials in their efforts to adapt. Some of the economists surveyed for that report suggested reforming insurance subsidy programs in areas vulnerable to natural disasters like hurricanes or flooding.
Several noted that a clear federal role exists for certain sectors, such as water resource management, which could require additional resources for infrastructure development, research, and federal land management. Federal, state, and local respondents also pointed to a number of federal laws as assisting adaptation efforts. For example, multiple officials cited the Global Change Research Act of 1990, which established a federal interagency research program to assist the United States and the world to understand, assess, predict, and respond to human-induced and natural processes of global change. Officials from the New York City Panel on Climate Change credited the 2001 Metro East Coast report issued for USGCRP with increasing awareness of regional climate change effects, which led to a local government response. Multiple officials also said that the National Environmental Policy Act could assist adaptation efforts by incorporating climate change adaptation into the assessment process. According to CEQ officials, the federal government could provide adaptation information under the National Environmental Policy Act provision that directs all federal agencies to make available to states, counties, municipalities, and others advice and information useful in restoring, maintaining, and enhancing the quality of the environment. According to certain officials, the Coastal Zone Management Act, which is administered by NOAA, could encourage adaptation to climate change at the state and local levels by allowing states and territories to develop specific coastal climate change plans or strategies. The state of Maryland is already using Coastal Zone Management Act programs to assess and respond to the risk of sea level rise and coastal hazards. Guidance on how to consider adaptation in existing processes: Nearly 66 percent (118 of 180) of respondents rated the “issuance of guidance, policies, or procedures on how to incorporate adaptation into existing policy and management processes” as very or extremely useful. A federal respondent added that adapting to climate change means integrating adaptation strategies into the programs that are already ongoing and relying on the networks and institutions that already exist. These sentiments were echoed in a recent report, which suggested that the experience of deliberately incorporating climate adaptation into projects can be very helpful in developing a more systematic approach to adaptation planning and can serve as a kind of project-based policy development. Furthermore, this report notes that leading programs integrate climate change adaptation into overarching policy documents such as official plans or policies. In the same vein, King County officials told us they work to “routinize” consideration of climate change in planning decisions and have incorporated climate change into the county’s comprehensive plan. This plan, among other things, states that “King County should consider projected impacts of climate change, including more severe winter flooding, when updating disaster preparedness, levee investment, and land use plans, as well as development regulations.” Several respondents cautioned that federal guidance related to adaptation should be flexible enough to allow state and local governments to adapt their own approaches. Climate change is a complex, interdisciplinary issue with the potential to affect every sector and level of government operations. Strategic planning is a way to respond to this governmentwide problem on a governmentwide scale.
Our past work on crosscutting issues suggests that governmentwide strategic planning can integrate activities that span a wide array of federal, state, and local entities. Strategic planning can also provide a comprehensive framework for considering organizational changes, making resource decisions, and holding officials accountable for achieving real and sustainable results. As this report and others demonstrate, some communities and federal lands are already seeing the effects of climate change, and governments are beginning to respond. However, as this report also illustrates, the federal government’s emerging adaptation activities are carried out in an ad hoc manner and are not well coordinated across federal agencies, let alone state and local governments. Officials who responded to our questionnaire at all levels of government said that they face a range of challenges when considering adaptation efforts, including competing priorities, lack of site-specific data, and lack of clear roles and responsibilities. These officials also identified a number of potential federal actions that they thought could help address these challenges. Multiple federal agencies, as well as state and local governments, will have to work together to address these challenges and implement new initiatives. Yet, our past work on collaboration among federal agencies suggests that they will face a range of barriers in doing so. Agency missions may not be mutually reinforcing or may even conflict with each other, making consensus on strategies and priorities difficult. Incompatible procedures, processes, data, and computer systems also hinder collaboration. The resulting patchwork of programs and actions can waste scarce funds and limit the overall effectiveness of the federal effort. In addition, many federal programs were designed decades ago to address earlier challenges, informed by the conditions, technologies, management models, and organizational structures of past eras. Based on our prior work, key practices that can help agencies enhance and sustain their collaborative efforts include defining and articulating a common outcome; agreeing on roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; identifying and addressing needs by leveraging resources; and developing mechanisms to monitor, evaluate, and report on results. As we have previously reported, perhaps the single most important element of successful management improvement initiatives is the demonstrated commitment of top leaders to change. Top leadership involvement and clear lines of accountability are critical to overcoming natural resistance to change, marshalling needed resources, and building and maintaining the commitment to new ways of doing business. A key question for decision makers in both Congress and the administration is whether to start adapting now or to wait until the effects of climate change are more obvious and widespread. Given the complexity and potential magnitude of climate change and the lead time needed to adapt, preparing for these impacts now may reduce the need for far more costly steps in the decades to come. Adaptation, however, will require making policy and management decisions that cut across traditional sectors, issues, and jurisdictional boundaries. It will mean developing new approaches to match new realities.
Old ways of doing business—such as making decisions based on the assumed continuation of past climate conditions—will not work in a world affected by climate change. Certain state and local authorities on the “front lines” of early adaptation efforts understand this new reality and are beginning to take action. Our analysis of these efforts, responses to our questionnaire, and available studies revealed that federal, state, and local officials face numerous challenges when considering adaptation efforts. To be effective, federal efforts to address these challenges must be coordinated and directed toward a common goal. We recommend that the appropriate entities within the Executive Office of the President, such as the Council on Environmental Quality (CEQ) and the Office of Science and Technology Policy, in consultation with relevant federal agencies, state and local governments, and key congressional committees of jurisdiction, develop a national strategic plan that will guide the nation’s efforts to adapt to a changing climate. The plan should, among other things, (1) define federal priorities related to adaptation; (2) clarify roles, responsibilities, and working relationships among federal, state, and local governments; (3) identify mechanisms to increase the capacity of federal, state, and local agencies to incorporate information about current and potential climate change impacts into government decision making; (4) address how resources will be made available to implement the plan; and (5) build on and integrate ongoing federal planning efforts related to adaptation. We provided a draft of this report to CEQ, within the Executive Office of the President, for review and comment. CEQ circulated the report to the climate change adaptation interagency committee—including representatives from more than 12 agencies—for review and comment. In written comments, CEQ’s Deputy Associate Director for Climate Change Adaptation generally agreed with the recommendations of the report, noting that leadership and coordination are necessary within the federal government to ensure an effective and appropriate adaptation response and that such coordination would help to catalyze regional, state, and local activities. These comments are reproduced in appendix IV. CEQ also provided technical comments, which we incorporated, as appropriate. With regard to the report’s findings, the Deputy Associate Director stated that CEQ had three main areas of concern. First, CEQ expressed concern that the relative inexperience of the federal government on adaptation combined with the methodology used in this report may produce misleading results. Specifically, the Deputy Associate Director stated that the report documents the relatively low level of activity within the federal government on adaptation, suggesting that most federal government respondents must be relatively inexperienced with adaptation issues. The Deputy Associate Director further stated that this relative federal inexperience may call some of our findings into question, citing as an example that the variability and local nature of adaptation make a federally produced list of “no regrets” actions very difficult to develop and possibly of limited utility. CEQ noted that, while the questionnaire results are an accurate reflection of the respondents’ thinking, they do not necessarily provide the best roadmap for federal government action.
We do not agree with the characterization of federal officials as less experienced with adaptation issues than their state and local counterparts. As noted in the report scope and methodology (see app. I), we administered a Web-based questionnaire to a nonprobability sample of 274 federal, state, and local officials who were identified by their organizations to be knowledgeable about climate change adaptation. The officials who responded represent a diverse array of disciplines, including planners, scientists, and public health professionals. In general, the information we collected with the questionnaire suggests that the federal, state, and local officials who responded spend similar amounts of time on adaptation-related issues. We found that, in several instances, the state and local officials who were knowledgeable about adaptation worked very closely with their federal counterparts. Furthermore, regarding CEQ’s specific example of federally produced “no regrets” lists, as we point out in this report, we agree that adaptation actions need to reflect local realities. However, questionnaire results were never intended to provide a roadmap specifically for federal activities but instead to describe the views of federal, state, and local officials on the potential federal actions (previously cited in available literature) that would be most useful to them. This information could be helpful when developing a strategy, but was not intended to be the strategy. We acknowledge that efforts to pursue these actions would often be collaborative, involving state and local entities.

Second, CEQ expressed concern that the report confuses the issue of cost-benefit analysis and scientific uncertainty, noting that the report identifies “justifying current costs with limited information about future benefits” as a challenge to adaptation policy, although the discussion of this challenge focuses on the scientific uncertainty inherent in climate projections as the main stumbling block for cost-benefit analysis. The Deputy Associate Director also noted that this section of the report did not include other challenges identified in the questionnaire, such as “understanding costs and benefits” of adaptive actions, or the challenge of prioritizing adaptation against other near-term actions, and that cost-benefit analysis is a concern separate from scientific uncertainty. Although we recognize CEQ’s concern about this section of the report, we note that the report describes the link between scientific uncertainty and cost-benefit analysis and that the report describes many challenges other than scientific uncertainty. Uncertainty, scientific or otherwise, is generally incorporated into cost-benefit analysis as a best practice. We also note that the challenges and potential federal actions described in this report are closely related. As described in the subsequent section, for example, local impacts must be translated into costs and benefits, since this information is required for many decision-making processes. Almost 70 percent (126 of 180) of the respondents to our questionnaire rated “understanding the costs and benefits of adaptation efforts” as very or extremely challenging.

Finally, CEQ expressed concern that the report does not focus enough on implementation challenges, stating that the report does not analyze the primary barriers or challenges to implementation, or make any recommendations on implementing adaptation. 
The Deputy Associate Director acknowledged that planning is critical, but that it does not guarantee implementation and that implementation challenges are neither discussed nor developed in the report. We agree that planning does not guarantee implementation and note that many of the challenges explored in this report relate to implementation. However, wide-scale implementation of adaptive actions before deriving a reasoned plan strikes us as “putting the cart before the horse.” Without adequate planning at the federal level to chart a roadmap that, among other things, defines a common outcome and sets roles and responsibilities, it will be more difficult for multiple federal agencies, as well as state and local governments, to work together to devise, much less execute, an implementation strategy.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chair of the Council on Environmental Quality and other interested parties. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Our review (1) determines what actions, if any, federal, state, local, and international authorities are taking to adapt to a changing climate; (2) identifies the challenges, if any, that federal, state, and local officials reported facing in their efforts to adapt; and (3) identifies actions that Congress and federal agencies could take to help address these challenges. We also provide information about our prior work on responding to similarly complex, interdisciplinary issues.

To determine the actions federal authorities are taking to adapt to climate change, we obtained summaries of current and planned adaptation-related efforts from a broad range of federal agencies. Full summaries from federal agencies are provided in a supplement to this report (see GAO-10-114SP). We obtained these summaries from the federal agencies with assistance from the U.S. Global Change Research Program (USGCRP), formerly the United States Climate Change Science Program. USGCRP coordinates and integrates federal research on changes in the global environment and their implications for society. USGCRP collected submissions from 12 of the 13 departments and agencies that participate in its program (see app. II for more details). We also obtained a summary of adaptation-related efforts from the Federal Emergency Management Agency, part of the U.S. Department of Homeland Security, as a follow-up to prior GAO work on climate change and the Federal Emergency Management Agency’s National Flood Insurance Program. Because the U.S. Department of Homeland Security is not part of USGCRP, we solicited its submission directly. Because we wanted to include current federal activities that the agencies themselves consider to be related to adaptation, we did not modify the content of these summaries, except to remove references to specific individuals. We also did not independently confirm the information in the summaries. 
In addition, because the request for summaries was made to a select group of federal agencies, the activities compiled in this report should not be considered a comprehensive list of all recent and ongoing climate change adaptation efforts across the federal government.

In addition to gathering summaries, we also conducted an Internet search to identify other federal, state, or local organizations that are taking action to adapt to a changing climate. This search also helped to identify challenges agencies face in their efforts to adapt, as well as actions the federal government could take, which are relevant to our second and third objectives. We searched the Web sites of relevant organizations and agencies, such as the Intergovernmental Panel on Climate Change, the Pew Center on Global Climate Change, the Coastal States Organization, and federal agencies such as the Environmental Protection Agency and the National Oceanic and Atmospheric Administration. We also conducted Internet searches using relevant key words, such as “climate change” and “climate change adaptation.” We reviewed publicly available English-language documents related to adaptation efforts in the United States and other countries that we identified through our search.

To address our three objectives, we also conducted 13 open-ended interviews with a select group of organizations and agencies that are engaged in climate change adaptation activities. We selected them based on their level of involvement in the issue of climate change adaptation, as determined by (1) previous GAO work; (2) scoping interviews (a “snowball” technique); and (3) our search of the background literature. We attempted to speak with organizations that are working on climate change adaptation, as well as those that represent sectors affected by it. We generally focused on organizations and sectors that are working on this issue on a national level (rather than just in one city or region) and that have also worked closely with state and local officials. The organizations included the National Association of Clean Water Agencies; the H. John Heinz III Center for Science, Economics, and the Environment; ICLEI—Local Governments for Sustainability; and the Nature Conservancy, among others. In addition, we spoke with two academics who had a long-standing involvement with climate change issues at the national and international levels to gather additional background information on the issue. Because we spoke with a select group of organizations and individuals, we cannot generalize our results to those we did not interview. In addition to asking our interviewees about the actions they are taking to address adaptation, we also asked them to identify other relevant reports or studies we should include in our work and other agencies or organizations that are engaged in adaptation activities (part of our “snowball” technique). We also asked what actions they thought the federal government and Congress could take to help in their efforts.

To determine the actions federal, state, local, and international authorities are taking to adapt to a changing climate, we also visited four sites where government officials are taking actions to adapt. We chose these sites because they were frequently mentioned in the background literature and scoping interviews as examples of locations that are implementing climate change adaptation and which may offer particularly useful insights into the types of actions governments can take to plan for climate change impacts. 
These sites are neither comprehensive nor representative of all state and local climate change adaptation efforts. They include New York City; King County, Washington; the state of Maryland; and the United Kingdom, focusing on the London region. We included an international site visit to examine how other countries are starting to adapt, and we specifically selected the United Kingdom because its climate change adaptation efforts were mentioned frequently in the background literature and scoping interviews and because it had already begun to implement these efforts at the national, regional, and local levels. During our site visits, we gathered information through interviews with officials and stakeholders, observation of adaptation efforts, and review of related documents. We also followed up with officials after our visits to gather additional information.

To describe the challenges that federal, state, and local officials face in their efforts to adapt and the potential actions that Congress and federal agencies could take to help address these challenges, we administered a Web-based questionnaire to a nonprobability sample of 274 federal, state, and local officials who were identified by their organizations to be knowledgeable about adaptation. To identify relevant potential respondents, we worked with organizations that represent federal, state, and local officials. Specifically, we worked with organizations such as USGCRP (federal), the National Association of Clean Air Agencies (state), and the Conference of Mayors (local), among others, and asked them to identify officials who are knowledgeable about climate change adaptation. These officials were generally identified through their involvement in climate change working groups within these organizations, which indicated a level of interest in and knowledge of the issue. Each organization then contacted its officials to describe the purpose of our questionnaire and to ask if they would participate. The names and e-mail addresses of those who agreed were then provided to GAO. The federal, state, and local officials who responded represent a diverse array of disciplines, including planners, scientists, and public health professionals; however, their responses cannot be generalized to officials who did not complete our questionnaire.

To develop the questionnaire, we compiled information from background literature and interviews we conducted with relevant organizations and officials. Using this information, we developed lists of challenges and potential actions the federal government could take to address them. Using closed-ended questions, we asked respondents to rate several challenges and actions on 5-point Likert scales (the closed-ended questions are reproduced in app. III). We also included open-ended questions to give respondents an opportunity to tell us about challenges and potential federal actions that we did not ask about. Lastly, we included additional open-ended questions to gather opinions on a small number of related topics.

Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any questionnaire may introduce errors, commonly known as nonsampling errors. For example, difficulties in interpreting a particular question, the sources of information available to respondents, or errors in analyzing data can introduce unwanted variability in the results. We took steps to minimize such nonsampling errors. 
For example, social science survey specialists designed the questionnaire in collaboration with GAO staff who had subject matter expertise. Then, we sent a draft of the questionnaire to several federal, state, and local organizations for comment. In addition, we pretested it with local, state, and federal officials to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, and (4) the questionnaire was comprehensive and unbiased. Based on these steps, we made necessary corrections and edits before it was administered. When we analyzed the data, an independent analyst checked all computer programs. Since this was a Web-based instrument, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database and minimizing error.

We developed and administered a Web-based questionnaire accessible through a secure server. When we completed the final questionnaire, including content and form, we sent an e-mail announcement of the questionnaire to our nonprobability sample of 274 federal, state, and local officials on May 13, 2009. They were notified that the questionnaire was available online and were given unique passwords and usernames on May 28, 2009. We sent follow-up e-mail messages on June 4, June 8, and June 12, 2009, to those who had not yet responded. Then we contacted the remaining nonrespondents by telephone to encourage them to complete the questionnaire online, starting on June 24, 2009. The questionnaire was available online until July 10, 2009. Questionnaires were completed by 187 officials, for a response rate of about 68 percent. The response rate by level of government is about 82 percent for federal officials (72 out of 88), about 90 percent for state officials (47 out of 52), and about 50 percent (65 out of 131) for local officials.

We presented our questionnaire results in six tables in our report, which show the relative rankings of the challenges and potential actions listed in our questionnaire based on the percentage of respondents that rated them very or extremely challenging (for challenges) or very or extremely useful (for potential actions). Both the challenges and potential actions are organized into groups related to the following: (1) awareness and priorities, (2) information, and (3) the structure and operation of the federal government. Tables showing more detailed summaries of federal, state, and local officials’ responses to the questionnaire are included in appendix III.

We conducted this performance audit from September 2008 to October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We obtained information from 13 selected federal departments and agencies on their current and planned climate change adaptation efforts. We present this information in a supplement to this report to provide a more complete picture of the activities that federal agencies consider to be related to climate change adaptation than has been available publicly (see GAO-10-114SP). We obtained this information directly from the agencies participating in the U.S. 
Global Change Research Program. Importantly, we did not modify the content of the agency submissions (except to remove references to named individuals) or assess their validity. In addition, because this information represents the efforts of a selected group of federal agencies, the agency activities compiled in the supplement should not be considered a comprehensive list of all recent and ongoing climate change adaptation efforts across the federal government. Any questions about the information presented in the supplement should be directed to the agencies themselves.

Appendix III: Summary of Federal, State, and Local Officials’ Responses to Web-Based Questionnaire (the detailed response tables are not reproduced here)

In addition to the contact named above, Steve Elstein (Assistant Director), Charles Bausell, Keya Chateauneuf, Cindy Gilbert, William Jenkins, Richard Johnson, Kirsten Lauber, Ty Mitchell, Benjamin Shouse, Jeanette Soares, Ruth Solomon, Kiki Theodoropoulos, and Joseph Thompson made key contributions to this report. Camille Adebayo, Holly Dye, Anne Johnson, Carol Kolarik, Jessica Lemke, Micah McMillan, Leah Probst, Jena Sinkfield, and Cynthia Taylor also made important contributions.
Changes in the climate attributable to increased concentrations of greenhouse gases may have significant impacts in the United States and the world. For example, climate change could threaten coastal areas with rising sea levels. Greenhouse gases already in the atmosphere will continue altering the climate system into the future, regardless of emissions control efforts. Therefore, adaptation—defined as adjustments to natural or human systems in response to actual or expected climate change—is an important part of the response to climate change. GAO was asked to examine (1) what actions federal, state, local, and international authorities are taking to adapt to a changing climate; (2) the challenges that federal, state, and local officials face in their efforts to adapt; and (3) actions that Congress and federal agencies could take to help address these challenges. We also discuss our prior work on similarly complex, interdisciplinary issues. This report is based on analysis of studies, site visits to areas pursuing adaptation efforts, and responses to a Web-based questionnaire sent to federal, state, and local officials.

While available information indicates that many governments have not yet begun to adapt to climate change, some federal, state, local, and international authorities have started to act. For example, the U.S. National Oceanic and Atmospheric Administration’s Regional Integrated Sciences and Assessments program supports research to meet the adaptation-related information needs of local decision makers. In another example, the state of Maryland’s strategy for reducing vulnerability to climate change focuses on protecting habitat and infrastructure from future risks associated with sea level rise and coastal storms. Other GAO discussions with officials from New York City; King County, Washington; and the United Kingdom show how some governments have started to adapt to current and projected impacts in their jurisdictions.

The challenges faced by federal, state, and local officials in their efforts to adapt fell into three categories, based on GAO’s analysis of questionnaire results, site visits, and available studies. First, competing priorities make it difficult to pursue adaptation efforts when there may be more immediate needs for attention and resources. For example, about 71 percent (128 of 180) of the officials who responded to our questionnaire rated “non-adaptation activities are higher priorities” as very or extremely challenging. Second, a lack of site-specific data, such as local projections of expected changes, can reduce the ability of officials to manage the effects of climate change. For example, King County officials noted that they are not sure how to translate climate data into effects on salmon recovery. Third, adaptation efforts are constrained by a lack of clear roles and responsibilities among federal, state, and local agencies. Of particular note, about 70 percent (124 of 178) of the respondents rated the “lack of clear roles and responsibilities for addressing adaptation across all levels of government” as very or extremely challenging.

GAO’s analysis also found that potential federal actions for addressing challenges to adaptation efforts fell into three areas. First, training and education efforts could increase awareness among government officials and the public about the impacts of climate change and available adaptation strategies. 
Second, actions to provide and interpret site-specific information would help officials understand the impacts of climate change at a scale that would enable them to respond. For instance, about 80 percent (147 of 183) of the respondents rated the “development of state and local climate change impact and vulnerability assessments” as very or extremely useful. Third, Congress and federal agencies could encourage adaptation by clarifying roles and responsibilities. About 71 percent (129 of 181) of the respondents rated the development of a national adaptation strategy as very or extremely useful. Climate change is a complex, interdisciplinary issue with the potential to affect every sector and level of government operations. Our past work on crosscutting issues suggests that governmentwide strategic planning—with the commitment of top leaders—can integrate activities that span a wide array of federal, state, and local entities.
The first federal effort to publicly display comprehensive data on federal awards was USAspending.gov. Among other things, the Federal Funding Accountability and Transparency Act of 2006 (FFATA) required OMB to establish a free, publicly accessible website containing data on federal awards no later than January 1, 2008. In addition, OMB was required to include data on subawards by January 1, 2009. The act specified a number of required data fields, including the recipient’s name, funding agency, amount of award, and a descriptive title. The act also authorized OMB to issue guidance and instructions to federal agencies for reporting award information and requires agencies to comply with that guidance. OMB launched USAspending.gov to meet the act’s requirements, relying primarily on federal sources of information.

In 2010, we reported on compliance with FFATA’s requirements. In that report we presented several key findings, including:

● OMB had satisfied six of the act’s nine requirements we reviewed and partially satisfied another, but did not satisfy two requirements (see appendix II for details). For example, OMB established the publicly searchable website—USAspending.gov—in December 2007. The site included the required data elements and search capabilities, and OMB guidance required periodic updates from agencies consistent with the act’s requirement for timeliness. OMB partially satisfied the act’s requirement to establish a pilot to test the collection of subaward data. Although it started pilots at two agencies, they were initiated after the date provided for in the act. Also, OMB had not satisfied the provision requiring the inclusion of subaward data on the website by January 2009 or the provision regarding periodic reporting to Congress.

● Although USAspending.gov included required grant information from 29 agencies for fiscal year 2008, it did not include grant information from 15 programs at 9 other agencies. The unlisted awards were made by large agencies, including the Department of the Treasury and General Services Administration, and smaller agencies such as the U.S. Election Assistance Commission and Japan-U.S. Friendship Commission. We reported that incomplete reporting by agencies was due in part to OMB not implementing a process to identify agencies that did not report required award information and stated that without such a process, it risked continued data gaps that limited the usefulness of the site.

● In a sample of 100 awards from USAspending.gov that we reviewed, each had at least one data error in a required field, consisting of either a blank data field, an inconsistency between the USAspending.gov data and agency records, or a lack of sufficient agency information to determine consistency. In 73 of the sampled awards, 6 or more of the 17 required data fields exhibited an error. Agency officials attributed the lack of sufficient information, in part, to procedures and systems that did not include documenting all of the data required by FFATA. For those awards where we had enough information to judge sufficiency, the data field with the most inconsistencies was the award title, which often lacked necessary specificity. This weakness was attributed, in part, to the lack of specific guidance from OMB and to the lack of tools to identify incomplete reports. We reported that until OMB addressed these issues, the ability of the public to find requested information and of OMB to correct errors would be limited. 
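To illustrate the kind of completeness check these findings imply, the following is a minimal sketch in Python of how one might flag blank required fields in an award record before submission. The field names are a hypothetical subset of FFATA's required elements, not the actual USAspending.gov schema.

    # Hypothetical subset of the required data fields; FFATA specifies 17.
    REQUIRED_FIELDS = ["recipient_name", "funding_agency", "award_amount", "award_title"]

    def blank_required_fields(award: dict) -> list:
        """Return the required fields that are missing or blank in an award record."""
        return [field for field in REQUIRED_FIELDS
                if not str(award.get(field, "")).strip()]

    # Example record with a blank title, the field we found most error-prone.
    award = {"recipient_name": "Example Corp", "funding_agency": "GSA",
             "award_amount": 125000, "award_title": ""}
    print(blank_required_fields(award))  # prints: ['award_title']

A check like this catches only blank fields; detecting inconsistencies with agency records, the other error type we found, requires comparing each submission against an authoritative agency source.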
To address these findings, we made several recommendations to the Director of OMB. For example, we recommended that OMB revise its guidance to agencies to clarify that award titles should describe the purpose of each award and how agencies should validate and document their submitted data. We also recommended that OMB develop and implement a plan to collect and report subaward data, as well as a procedure to regularly ensure that agencies report required award information. OMB generally agreed with our findings and recommendations.

Since we last evaluated FFATA compliance, OMB has taken steps to improve USAspending.gov and the quality of its data through increased agency-level accountability and government-wide improvements. First, in OMB’s 2009 Open Government Directive, agencies were directed to designate a high-level senior official to be accountable for the quality of, and internal controls over, federal spending information disseminated on public websites. A list of the agency-designated officials appears on USAspending.gov. Subsequently, in an April 2010 memorandum to senior accountable officials, OMB required agencies to establish a data quality framework for federal spending information, including a governance structure, risk assessments, control activities, and a monitoring program. Agencies were directed to submit plans for addressing these requirements to OMB. To address government-wide weaknesses, OMB issued guidance to agencies on improving the quality of data in the Federal Procurement Data System, a contract database that is one of the main sources of USAspending.gov data. In addition, OMB’s April 2010 memo established a deadline for the agency collection of subaward data and announced technical improvements to USAspending.gov, including a move to a cloud computing environment, and a control board to coordinate policies and systems that support the collection and presentation of federal spending data. One result of these efforts is the current availability of subaward data on USAspending.gov.

Agencies have also reported taking steps to improve their USAspending.gov data. For example, automated tools have been developed through interagency electronic government initiatives that are expected to improve the quality of data on grants and cooperative agreements by making it easier for agencies to regularly report their awards. Additionally, individual agencies reported efforts to improve data quality in open government plans released earlier this year. For example, the Department of Commerce established a formal process to ensure that all grant offices are reporting awards in a timely manner, and the General Services Administration developed an "Information and Data Quality Handbook" that contains a framework for consistent data management. Agencies also reported ongoing efforts to improve data quality. For example, the Department of Homeland Security plans to improve the accuracy and timeliness of data posted on USAspending.gov by promulgating best practices, and the Department of Transportation is working with its components to develop memorandums of understanding to ensure they meet quality assurance reporting guidelines.

While the steps discussed above could contribute to improvements in the quality of spending data, their impact is not yet known because OMB’s recent reporting on data quality and user feedback has been limited. Specifically:

● Previously available information on the timeliness and completeness of agency-submitted data is no longer provided on USAspending.gov. 
We previously reported that OMB maintained a page at USAspending.gov that addressed the completeness of the agency-submitted data by field. That information is no longer available on the site. In its April 2010 memo, OMB discussed the creation of a dashboard on USAspending.gov to track the timeliness, completeness, and accuracy of agencies’ reported data. After establishing a baseline, these data were to be updated quarterly. However, the USAspending.gov site does not currently include such a dashboard.

● OMB has produced only one of the required annual reports to Congress that were to include data on usage of the site and public feedback on its utility. In July 2010, OMB reported that USAspending.gov had been used extensively by the public, and that it had adopted or planned improvements based on user feedback. However, OMB has not produced any subsequent reports, as required by FFATA.

On July 13, 2012, officials with OMB’s Office of Federal Financial Management told us that OMB no longer plans to rely on a public dashboard to improve data quality. Instead, the officials said, OMB is pursuing several other efforts, including ensuring the implementation of the data quality framework established through its prior guidance and identifying best practices for improving data quality. As we initiate work to address your recent request on spending transparency, we will reassess the quality of data on USAspending.gov, including the extent to which agencies report award data, the accuracy of the data that are reported, and the quality assurance processes used by agencies.

As Congress and the administration crafted the American Recovery and Reinvestment Act of 2009 (Recovery Act), they built into it provisions to increase transparency and accountability over spending. It required recipients of Recovery Act funds, including grants, contracts, or loans, to submit quarterly reports with information on each project or activity, including the amount and use of funds and an estimate of the jobs created or retained. Similar to FFATA, the Recovery Act called for the establishment of a website through which the public could gain easy access to this information. Initial establishment of the website was to take place 30 days after the Recovery Act’s enactment. The Recovery.gov site was launched in 2009 to fulfill these requirements, and a second site—http://www.FederalReporting.gov—was established for recipients to report their data. Recipients first reported in early October 2009 on the period from February through September 2009, and reporting has continued for each quarter since then.

The transparency envisioned under the Recovery Act for tracking spending and results was unprecedented for the federal government. Tracking billions of dollars disbursed to thousands of recipients promised to be an enormous effort. Further, the need to get a system developed and operating quickly added to the challenge, as did the fact that the public would be able to access the system and form its own views as to the system’s transparency. The system also needed to be operational quickly for a variety of programs, across which even the basic question of what constituted a project could differ. Given this daunting task, OMB and the Recovery Board implemented an iterative process involving many stakeholders that can provide insight into challenges and solutions for establishing procedures to increase spending transparency. 
As part of our oversight of the Recovery Act and in response to a mandate to comment quarterly on recipient reporting, we issued a number of reports addressing procedures related to recipient reporting and the quality of data on Recovery.gov, and we made several recommendations for improvements. Initially, we reported that a range of significant reporting and data quality issues needed to be addressed; our later reports, however, documented progress, identified further refinements needed, and noted progress in making them. Our recommendations included that OMB clarify the definition of full-time equivalent (FTE) jobs and encourage federal agencies to provide or improve program-specific guidance for recipients. In general, OMB and agencies acted upon our recipient reporting-related recommendations and implemented changes in guidance and procedures.

Throughout the development of guidance and the early months of implementing recipient reporting, OMB and the Recovery Board used several opportunities for two-way communication with recipients. Early on, OMB and Recovery Board officials held weekly conference calls with state and local representatives to hear comments and suggestions from them and share decisions made. State and local governments, with their difficult fiscal situations, were concerned about being able to meet the added reporting requirements, and the tight timeframes made this particularly difficult. Federal officials heard the concerns and made changes to their plans and related guidance in response. For example, initial guidance in February 2009 began to lay out information that would be reported on Recovery.gov and steps needed to meet reporting requirements, such as including recipient reporting requirements in grant awards and contracts. In response to requests for more clarity, OMB, with input from an array of stakeholders, issued more guidance in June 2009. The June guidance clarified requirements on reporting jobs, such as which recipients were required to report and how to calculate jobs created and retained. In December 2009, responding to concerns regarding the calculation of FTEs, including some we expressed, OMB issued further changes that simplified the jobs-reporting guidance.

Recipients of Recovery Act funds needed to quickly learn reporting requirements and develop procedures for meeting them. This was particularly difficult for entities that had not previously received federal funding and were not familiar with federal reporting requirements. Outreach from OMB and the Recovery Board, including conference calls, webinars, and websites, along with guidance, was instrumental in bringing recipients up to speed. In addition, agencies provided information and training on reporting for their specific programs through conference calls and webinars. States, as the prime recipients in many cases, ensured that their own agencies and departments and their subrecipients were informed as well by using various means of communication, including conference calls, webinars, and websites. Finally, the Recovery Board also maintained a help desk during the reporting period. Even so, given the uncertainties and ongoing development of the new systems, there were instances of systems going down and data rejections that frustrated recipients. Some extensions were allowed, and provisions were made for recipients to report and make adjustments to the data, except for FTEs, after reporting closed. 
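To make the simplified jobs calculation concrete, OMB's December 2009 guidance based quarterly FTE reporting on the hours worked and funded by the Recovery Act divided by the hours in a full-time schedule for the quarter. The sketch below assumes a 520-hour full-time quarter (40 hours for 13 weeks) purely for illustration; actual full-time schedules vary by employer.

    def quarterly_fte(recovery_funded_hours: float,
                      full_time_quarter_hours: float = 520.0) -> float:
        """FTEs for a quarter: hours worked and funded by the Recovery Act,
        divided by the hours in a full-time schedule for that quarter."""
        return recovery_funded_hours / full_time_quarter_hours

    # Example: two half-time employees funded all quarter count as one FTE.
    print(quarterly_fte(260.0 + 260.0))  # prints: 1.0

Under this approach, a recipient need only total the funded hours each quarter rather than judge whether each job was "created" or "retained," which is what simplified the reporting.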
After we reported that initially there were significant reporting and quality issues, OMB issued guidance to federal agencies that incorporated lessons learned from the first reporting period and addressed recommendations we had made. Specifically, in December 2009, OMB required agencies to identify significant errors, particularly in award amounts, FTEs, federal award numbers, and recipient names. OMB also provided guidance on identifying instances where recipients did not report. As a result, federal agencies that awarded Recovery Act funds to states generally developed internal policies and procedures for reviewing data quality, as OMB required. At the ground level, agencies addressed recipients’ quarterly reporting when performing their oversight of Recovery Act recipients. Further, agencies also reviewed data centrally and performed tests of reasonableness on recipient data by program. OMB also required agencies to provide lists each quarter of those recipients who did not report. Our discussions with agencies indicated that agencies worked with these recipients to identify reasons they did not report. Lists of those who did not report each quarter continue to be available on Recovery.gov.

In our work evaluating recipient reporting under specific programs, we found that agencies put considerable effort into ensuring accuracy and completeness; however, while the public transparency of Recovery Act spending improved, the agencies themselves often gained little new information from the recipient-reported data. Agency officials told us they already had much of the information; their own systems provided information on award amounts, funds disbursed, and, to varying degrees, progress being made by grant recipients. However, officials at one agency, the Department of Education, noted that the information obtained through recipient reporting did provide them a useful indication of jobs funded for education programs under the Recovery Act, information they otherwise did not have.

Our work also identified some concerns with ensuring that descriptions of awarded projects were adequately detailed in the information that recipients reported. Data collected for Recovery.gov included narrative information that provided the public with details such as the overall purpose of the award and expected results. We found, for example, that an estimated 25 percent of the descriptions of selected infrastructure-related awards met our transparency criteria of having sufficiently clear and complete information on the award’s purpose, scope, and nature of activities; location; cost; outcomes; and status of work. Another 68 percent partially met the criteria, and an estimated 7 percent provided little or none of this information. In its September 2010 guidance, OMB added a requirement for agencies to review the narrative fields of recipient reports to better ensure transparency.

Our analysis of the quality of recipient-reported data showed that recipients made errors in reporting award identification numbers, amounts of awards, and other data that agencies already had, and that if those items had been pre-populated for recipients, errors might have been reduced. The award identification number was a particularly key data element, since it was part of the mechanism to link awards across quarters, yet recipient errors as small as leaving out a hyphen could produce records that could not be linked (a sketch of this kind of identifier matching appears below). Agencies identified other errors, such as incorrect award amounts, by comparing data recipients reported with data they had. 
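As an illustration of the linkage problem, the following is a minimal sketch in Python of the kind of identifier normalization that could precede matching recipient-reported award numbers against agency records. The award-number formats shown are hypothetical, not FederalReporting.gov's actual scheme.

    import re

    def normalize_award_id(raw: str) -> str:
        """Canonicalize an award ID so trivial formatting differences,
        such as a dropped hyphen or a stray space, do not break matching."""
        return re.sub(r"[^A-Z0-9]", "", raw.upper())

    # Hypothetical recipient entries; all match the agency record once normalized.
    reported = ["DE-EE0001234", "DEEE0001234", "de-ee 0001234"]
    agency_record = "DE-EE0001234"
    print(all(normalize_award_id(r) == normalize_award_id(agency_record)
              for r in reported))  # prints: True

Normalization of this sort reduces false mismatches, but as noted above, pre-populating the identifier from agency data avoids the error at the source.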
Performing those comparisons and following up with recipients to get them to fix the errors was time-consuming. The Recovery Board eventually enabled recipients to “copy forward” information reported in previous rounds and modify it as needed, which helped prevent some errors. However, some agency officials suggested that pre-populating these fields with agency data before recipients began their reporting would have reduced the number of errors.

In addition, our work indicated that recipients sometimes were required to report similar information into both agency reporting systems and FederalReporting.gov. In some cases, agencies required more data to manage their programs than recipient reports required and Recovery.gov made available. For example, Environmental Protection Agency officials said that they needed project details that were not available in Recovery.gov data for their Recovery Act water programs. Similarly, the Department of Transportation preferred using its own data because they were more detailed and were reported monthly—more frequently than the Recovery.gov data. While the time constraints of implementing Recovery Act reporting made it difficult to consolidate data collection and prevent collecting similar data from recipients more than once, had more planning time been available to solve this issue, the burden on recipients might have been reduced.

There are initiatives under way in Congress and the administration that look to build on these two transparency efforts now in place. For example, legislation has been passed in the House of Representatives and introduced in the Senate to improve the accountability and transparency of federal spending. In addition, in June 2011 the President issued an executive order establishing the Government Accountability and Transparency Board to provide strategic direction for, among other things, enhancing the transparency of federal spending. There are lessons from the implementation of both USAspending.gov and Recovery.gov that can be applied to these new initiatives. Foremost, consideration needs to be given to what objectives are to be achieved and in what priority. As we have seen with both existing systems, success hinges upon ensuring the data’s completeness and accuracy. Because it is resource-intensive to ensure all data are reported and correct, it is imperative to limit the data collected to only those essential elements. Clear objectives are helpful in guiding such focus. In addition, the input of federal agencies, recipients, and subrecipients should be considered early in the development of both the system and its procedures. Also, as the system is implemented, communicating impending changes as soon as possible allows for better planning. Finally, as a system rolls out, recipients will need help to learn how to fulfill their reporting responsibilities.

Further related to the issue of involving all stakeholders is the need to recognize the increased reporting and oversight effort required of recipients and federal agencies, and to identify approaches that minimize that effort. For example, pre-populating data from federal agencies to reduce the need for recipients to input those data could help with accuracy, although agencies likely will need to continue to play a key role in checking data quality.

- - - - -

In conclusion, there have been great strides in increasing the transparency of federal awards since 2006. 
The USAspending.gov and Recovery.gov websites offer the public a wealth of information on how federal funds are spent. However, it is important that ongoing efforts to improve the data provided to the public continue to evolve. We believe having a strategic vision, ensuring data quality, allowing for input of ideas, helping those who have to report, and minimizing reporting burdens can improve the chances of success.

Chairman Lieberman, Ranking Member Collins, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have. For further information regarding this testimony, please contact Stanley J. Czerwinski at (202) 512-6808 or czerwinskis@gao.gov or David A. Powner at (202) 512-9286 or pownerd@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Carol Patey and James R. Sweetman, Jr., Assistant Directors; Lee McCracken; and Kevin Walsh.

Recovery Act: As Initial Implementation Unfolds in States and Localities, Continued Attention to Accountability Issues Is Essential. GAO-09-580. Washington, D.C.: April 23, 2009.
Recovery Act: States’ and Localities’ Current and Planned Uses of Funds While Facing Fiscal Stresses. GAO-09-829. Washington, D.C.: July 8, 2009.
Recovery Act: Funds Continue to Provide Fiscal Relief to States and Localities, While Accountability and Reporting Challenges Need to Be Fully Addressed. GAO-09-1016. Washington, D.C.: September 23, 2009.
Recovery Act: Recipient Reported Jobs Data Provide Some Insight into Use of Recovery Act Funding, but Data Quality and Reporting Issues Need Attention. GAO-10-223. Washington, D.C.: November 19, 2009.
Recovery Act: Status of States’ and Localities’ Use of Funds and Efforts to Ensure Accountability. GAO-10-231. Washington, D.C.: December 10, 2009.
Recovery Act: One Year Later, States’ and Localities’ Uses of Funds and Opportunities to Strengthen Accountability. GAO-10-437. Washington, D.C.: March 3, 2010.
Recovery Act: States’ and Localities’ Uses of Funds and Actions Needed to Address Implementation Challenges and Bolster Accountability. GAO-10-604. Washington, D.C.: May 26, 2010.
Recovery Act: Increasing the Public’s Understanding of What Funds Are Being Spent on and What Outcomes Are Expected. GAO-10-581. Washington, D.C.: May 27, 2010.
Recovery Act: States Could Provide More Information on Education Programs to Enhance the Public’s Understanding of Fund Use. GAO-10-807. Washington, D.C.: July 30, 2010.
Recovery Act: Opportunities to Improve Management and Strengthen Accountability over States’ and Localities’ Uses of Funds. GAO-10-999. Washington, D.C.: September 20, 2010.
Participants in SBA’s Microloan Program Could Provide Additional Information to Enhance the Public’s Understanding of Recovery Act Fund Uses and Expected Outcomes. GAO-10-1032R. Washington, D.C.: September 29, 2010.
Recovery Act: Opportunities Exist to Increase the Public’s Understanding of Recipient Reporting on HUD Programs. GAO-10-966. Washington, D.C.: September 30, 2010.
Recovery Act: Head Start Grantees Expand Services, but More Consistent Communication Could Improve Accountability and Decisions about Spending. GAO-11-166. Washington, D.C.: December 15, 2010.
Recovery Act: Energy Efficiency and Conservation Block Grant Recipients Face Challenges Meeting Legislative and Program Goals and Requirements. GAO-11-379. Washington, D.C.: April 7, 2011.
Recovery Act: Funding Used for Transportation Infrastructure Projects, but Some Requirements Proved Challenging. GAO-11-600. Washington, D.C.: June 29, 2011.
Recovery Act: Funds Supported Many Water Projects, and Federal and State Monitoring Shows Few Compliance Problems. GAO-11-608. Washington, D.C.: June 29, 2011.
Recovery Act Education Programs: Funding Retained Teachers, but Education Could More Consistently Communicate Stabilization Monitoring Issues. GAO-11-804. Washington, D.C.: September 22, 2011.
Recovery Act: Progress and Challenges in Spending Weatherization Funds. GAO-12-195. Washington, D.C.: December 16, 2011.
Recovery Act: Housing Programs Met Spending Milestones, but Asset Management Information Needs Evaluation. GAO-12-634. Washington, D.C.: June 18, 2012.

GAO’s assessment of compliance with FFATA requirements:
● Met: OMB launched USAspending.gov, a free, publicly available website, in December 2007.
● Met: The site captured information on all required data elements, such as the entity receiving the award and the award amounts.
● Met: The site allowed searches of data by all required data elements and provided totals for awards made as well as downloadable data.
● Met: The site included data for federal awards made in fiscal year 2007 and later, as well as limited data from previous years.
● Met: To facilitate timeliness of data available on the website, OMB guidance required agencies to submit award data on the 5th and 20th of each month.
● Met: The site included a contact form for public comments and suggestions.
● Partially met: OMB commissioned two pilot programs for collecting subaward data, one at the General Services Administration that ran from April 2008 to December 2008, and one at the Department of Health and Human Services that ran from October 2008 to November 2008. Both pilots were begun after the July 2007 date specified in the act.
● Not met (requirement to include subaward data no later than January 1, 2009; an 18-month extension can be granted for subaward recipients that receive federal funds through state, local, or tribal governments if OMB determines that compliance would impose an undue burden on the subaward recipient): Subaward data (e.g., subcontracts and subgrants) were not yet available for searching on USAspending.gov. FFATA allows OMB to extend the deadline by 18 months for some subaward recipients. However, according to OMB, there was no official extension in place for reporting subaward data at this time. In addition, as of November 2009, OMB had not developed a specific plan for collecting and reporting subaward data.
● Not met: OMB had not submitted the required annual report to Congress containing (1) data on the usage of and public feedback on the site, (2) an assessment of the reporting burden on award recipients, and (3) an explanation of any extension of the subaward deadline. According to OMB officials, it was gathering the necessary information and planned to issue a report in 2010.
It is important to ensure the transparency of information detailing how the federal government spends more than $1 trillion annually in the form of contracts, grants, loans, and other awards. Toward this end, the government has multiple initiatives under way to increase such transparency, including publicly accessible websites providing information on federal spending, such as http://www.USAspending.gov and http://www.Recovery.gov. While these efforts have increased the amount of information available, challenges have been identified to better ensure the quality of data on these sites. GAO was asked to provide a statement addressing (1) the status of efforts to improve the quality of publicly available data on government awards and expenditures and (2) lessons that can be learned from the operation of Recovery.gov that can contribute to other spending transparency efforts. In preparing this statement, GAO relied on its previous work in these areas, as well as discussions with OMB officials and officials from the Recovery Accountability and Transparency Board.

The Office of Management and Budget (OMB) and other federal agencies have taken steps to improve federal spending data available on USAspending.gov. This effort to publicly display comprehensive data arose from the Federal Funding Accountability and Transparency Act of 2006, which required OMB to establish a free, publicly accessible website containing data on federal awards and subawards. OMB launched USAspending.gov in December 2007 to meet these requirements. As GAO reported in 2010, while OMB had satisfied most of the requirements associated with the act, such as establishing the site with required data elements and search capability, it had only partially satisfied the requirement to establish a pilot program to test the collection and display of subaward data and had not met the requirements to include subaward data by January 2009 or to report to Congress on the site’s usage. Also, GAO found that, in a sample of 100 awards on USAspending.gov, each award had at least one data error and that USAspending.gov did not include information on grants from programs at 9 agencies for fiscal year 2008. Subsequently, OMB and agencies have taken steps to improve the site and the quality of its data through increased agency-level accountability and government-wide improvements. These efforts include directing agencies to appoint a senior-level official to be accountable for the quality of federal spending information disseminated on public websites, and increasing the use of automated tools. However, OMB has not yet implemented plans to create a data quality dashboard on USAspending.gov and has produced only one of the required annual reports to Congress on usage of the site.

OMB, the Recovery Accountability and Transparency Board, federal agencies, and funding recipients addressed several challenges in managing reporting under the American Recovery and Reinvestment Act of 2009. Recovery.gov was established in 2009 to provide public access to information on Recovery Act spending. Specifically, it was to provide timely information on projects or activities funded by federal grants, contracts, or loans provided to recipients, such as state or local governments. The transparency envisioned by the act was unprecedented for the federal government, and GAO identified a number of lessons learned from the operation of Recovery.gov:

● OMB and the Recovery Board used two-way communication with recipients to refine and clarify guidance. 
● Training and other assistance was provided to recipients to clarify reporting requirements and address early system problems.

● After early reporting and quality issues were identified, OMB required agencies to ensure data accuracy and completeness.

● Recipients made errors in reporting data, but these could be reduced through pre-populating data fields and other refinements to the reporting process.

Recent legislative proposals and a newly created executive branch board aim to expand and improve upon the transparency of federal spending. The challenges and lessons learned from implementing the existing reporting tools should help inform current and future efforts. In particular, attention should be given to stakeholder involvement, the effort required for reporting and oversight, and the need for clear objectives and priorities. GAO previously made several recommendations to improve these transparency efforts, including that OMB clarify guidance on reporting award data and develop a procedure to ensure agencies report required information. While GAO is not making new recommendations at this time, it underscores the importance of fully implementing its prior recommendations.
SSA projects that its current data center will not be adequate to support the demands of its growing workload. In fiscal year 2008, SSA’s benefit programs provided a combined total of approximately $650 billion to nearly 55 million beneficiaries. According to the agency, the number of beneficiaries is estimated to increase substantially over the next decade. In addition, SSA’s systems contain large volumes of medical information, which is used in processing disability claims. About 15 million people are receiving federal disability payments, and SSA has been contending with backlogs in processing disability claims. According to SSA officials, the agency plans to use a large portion of the $1 billion in funding that it was allocated by the Recovery Act primarily to help build a large-scale data center and to develop new software to reduce the backlog of disability claims. The act provides $500 million from the stimulus package for data center expenses, of which $350 million is slated for the building infrastructure and part of the remaining funding for IT-related upgrades. This is not the entire projected cost: SSA has indicated that it needs a total of about $800 million to fund a new IT infrastructure, including the new data center—the physical building, power and cooling infrastructure, IT hardware, and systems applications.

The Recovery Act’s goals, among other things, include creating or saving more than 3.5 million jobs over the next two years and encouraging renewable energy and energy conservation. According to the Office of Management and Budget (OMB), the act’s requirements include unprecedented levels of transparency, oversight, and accountability for various aspects of Recovery Act planning and implementation. These requirements are intended to ensure, among other things, that

● funds are awarded and distributed in a prompt, fair, and reasonable manner;
● the recipients and uses of all funds are transparent to the public, and the public benefits of these funds are reported clearly, accurately, and in a timely manner;
● funds are used for authorized purposes and instances of fraud, waste, error, and abuse are mitigated;
● projects funded under the act avoid unnecessary delays and cost overruns; and
● program goals are achieved, including specific program outcomes and improved results on broader economic indicators.

An effort as central to SSA’s ability to carry out its mission as its planned new data center requires effective IT management. As our research and experience at federal agencies have shown, institutionalizing a set of interrelated IT management capabilities is key to an agency’s success in modernizing its IT systems. These capabilities include, but are not limited to,

● strategic planning to describe an organization’s goals, the strategies it will use to achieve desired results, and performance measures;
● developing and using an agencywide enterprise architecture, or modernization blueprint, to guide and constrain IT investments;
● establishing and following a portfolio-based approach to investment management; and
● implementing information security management that ensures the integrity and availability of information.

The Congress has recognized in legislation the importance of these and other IT management controls, and OMB has issued guidance. We have observed that without these types of capabilities, organizations increase the risk that system modernization projects will (1) experience cost, schedule, and performance shortfalls and (2) lead to systems that are redundant and overlap. 
They also risk not achieving such aims as increased interoperability and effective information sharing. As a result, technology may not effectively and efficiently support agency mission performance and help realize strategic mission outcomes and goals. All these management capabilities have particular relevance to the data center initiative.

● IT strategic planning. A foundation for effective modernization, strategic planning is vital to create an agency’s IT vision or roadmap and help align its information resources with its business strategies and investment decisions. An IT strategic plan, which might include the mission of the agency, key business processes, IT challenges, and guiding principles, is important to enable an agency to consider the resources, including human, infrastructure, and funding, that are needed to manage, support, and pay for projects. For example, a strategic plan that identifies interdependencies within and across modernization projects helps ensure that these are understood and managed, so that projects—and thus system solutions—are effectively integrated. Given that the new data center is to form the backbone of SSA’s automated operations, it is important that the agency identify goals, resources, and dependencies in the context of its strategic vision.

● Enterprise architecture. An enterprise architecture consists of models that describe (in both business and technology terms) how an entity operates today and how it intends to operate in the future; it also includes a plan for transitioning to this future state. More specifically, it describes the enterprise in logical terms (such as interrelated business processes and business rules, information needs and flows, and work locations and users) as well as in technical terms (such as hardware, software, data, communications, and security attributes and performance standards). It provides these perspectives both for the enterprise’s current environment and for its target environment, as well as a transition plan for moving from one to the other. In short, it is a blueprint for organizational change. Using an enterprise architecture is important to help avoid developing operations and systems that are duplicative, not well integrated, unnecessarily costly to maintain and interface, and ineffective in supporting mission goals. Like an IT strategic plan (with which an enterprise architecture should be closely aligned), an enterprise architecture is an important tool to help SSA ensure that its data center initiative is successful. Using an enterprise architecture will help the agency ensure that the planning and implementation of the initiative take full account of the business and technology environment in which the data center and its systems are to operate.

● IT investment management. An agency should establish and follow a portfolio-based approach to investment management in which IT investments are selected, controlled, and monitored from an agencywide perspective. In this way, investment decisions are linked to an organization’s strategic objectives and business plans. Such an approach helps ensure that agencies allocate their resources effectively. In 2008, we evaluated SSA’s investment management approach and found that it was largely consistent with leading investment management practices. SSA had established most practices needed to manage its projects as investments; however, it had not applied its process to all of its investments.
For example, SSA had not applied its investment management process to a major portion of its IT budget. We recommended that for full accountability, SSA should manage its full IT development and acquisitions budget through its investment management board. We also made several recommendations for improving the evaluation of completed projects, including the use of quantitative measures of project success.

Going forward, ensuring that best practices in investment management are applied to the data center initiative will help the agency effectively use funds appropriated under the Recovery Act. For example, projects funded under the act are to avoid unnecessary delays and cost overruns and are to achieve specific program outcomes and improved results on broader economic indicators. Robust investment management controls are important tools for achieving these goals. For example, developing accurate cost estimates—an important aspect of investment management—helps an agency evaluate resource requirements and increases the probability of program success. We have issued a cost estimating guide that provides best practices that agencies can use for developing and managing program cost estimates that are comprehensive, well-documented, accurate, and credible, and that provide management with a sound basis for establishing a baseline to formulate budgets and measure program performance. The guide also covers the use of earned value management (EVM), a technique for comparing the value of work accomplished in a given period with the value of the work expected. EVM metrics can alert program managers to potential problems sooner than tracking expenditures alone.

Finally, the Recovery Act emphasizes the importance of energy efficiency and green building projects. Applying rigorous investment management controls to the planning and implementation of the data center design will help SSA determine the optimal approach to aligning its initiative with these goals. Because of the large power requirements and the heat generated by the equipment housed in data centers, efficient power and cooling are major concerns, particularly in light of evolving technology and increasing demand for information. To optimize their power and cooling requirements, agencies need to quantify cooling requirements and model these into data center designs. Such considerations affect the choice of locations for a new data center, facility requirements, and even floor space designs. Ways to improve energy efficiencies in data center facilities could include such cost-effective practices as reducing the need for artificial light by maximizing the use of natural light and insulating buildings more efficiently. For example, installing green (planted) roofs can insulate facilities and at the same time absorb carbon dioxide.

● Information security. For any organization that depends on information systems and computer networks to carry out its mission or business, information security is a critical consideration. It is especially important for government agencies like SSA, where maintaining the public’s trust is essential. Information security covers a wide range of controls, including general controls that apply across information systems (such as access controls and contingency planning) and business process application-specific controls to ensure the completeness, accuracy, validity, confidentiality, and availability of data.
For the data center initiative, security planning and management will be important from the earliest stages of the project through the whole life cycle. In today’s environment, in which security threats are both domestic and international, operational and physical security is required to sustain the safety and reliability of the data center’s services on a day-to-day basis. An agency needs to have well-established security policies and practices in place and provide for periodic assessments to ensure that the information and the facility are protected. Organizations must design and implement controls to detect and prevent unauthorized access to computer resources (e.g., data, programs, equipment, and facilities), thereby protecting them from unauthorized disclosure, modification, and loss. Specific access controls could include means to verify personnel identification and authorization.

Further, because a data center is the backbone of an organization’s operations and service delivery, continuity of operations is a key concern. Data centers need to be designed with the ability to efficiently provide consistent processing of operations. Even slight disruptions in power can adversely affect service delivery. Data centers are vulnerable to a variety of service disruptions, including accidental file deletions, network failures, systems malfunctions, and disasters. In the design of a data center, continuity of operations needs to be addressed at every level—including applications, systems, and businesses. An agency needs to articulate, in a well-defined plan, how it will process, retrieve, and protect electronically maintained information in the event of minor interruptions or a full-blown disaster. Disaster recovery plans should address all aspects of the recovery, including where to move personnel and how to maintain the business operations. Agency leaders need to prioritize business recovery procedures and to highlight the potential issues in such areas as application availability, data retention, speed of recovery, and network availability.

In summary, given the projected increase in beneficiaries and the exceptional volume of medical data processed, these IT management capabilities will be imperative for SSA to follow as it pursues the complex data center initiative.

Mr. Chairman, this completes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. If you should have any questions about this statement, please contact me at (202) 512-6304 or by e-mail at melvinv@gao.gov. Other individuals who made key contributions to this statement are Barbara Collier, Christie Motley, and Melissa Schermerhorn.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) provides resources to the Social Security Administration (SSA) to help replace its National Computer Center. This data center, which is 30 years old, houses the backbone of the agency's automated operations, which are critical to providing benefits to nearly 55 million people, issuing Social Security cards, and maintaining earnings records. The act makes $500 million available to SSA for the replacement of its National Computer Center and associated information technology (IT) costs. In this testimony, GAO was asked to comment on key IT management capabilities that will be important to the success of SSA's data center initiative. To do so, GAO relied on previously published products, including frameworks that it has developed for analyzing IT management areas. GAO has not performed a detailed examination of SSA's plans for this initiative, so it is not commenting on the agency's progress or making recommendations.

For an effort as central to SSA's mission as its planned new data center, effective practices in key IT management areas are essential. For example: (1) Effective strategic planning helps an agency set priorities and decide how best to coordinate activities to achieve its goals. For example, a strategic plan identifying interdependencies among modernization project activities helps ensure that these are understood and managed, so that projects—and thus system solutions—are effectively integrated. Given that the new data center is to form the backbone of SSA's automated operations, it is important that the agency identify goals, resources, and dependencies in the context of its strategic vision. (2) An agency's enterprise architecture describes both its operations and the technology used to carry them out. A blueprint for organizational change, an architecture is defined in models that describe (in business and technology terms) an entity's current operation and planned future operation, as well as a plan for transitioning from one to the other. An enterprise architecture can help optimize SSA's data center initiative by ensuring that its planning and implementation take full account of the business and technology environment. (3) For IT investment management, an agency should follow a portfolio-based approach in which investments are selected, controlled, and monitored from an agencywide perspective. By helping to allocate resources effectively, robust investment management processes can help SSA meet the accountability requirements and align with the goals of the Recovery Act. For example, projects funded under the act are to avoid unnecessary delays and cost overruns and are to achieve specific program outcomes. Investment management is aimed at precisely such goals: for example, accurate cost estimating (an important aspect of investment management) provides a sound basis for establishing a baseline to formulate budgets and measure program performance. Further, the act emphasizes energy efficiency—also a major concern for data centers, which have high power and cooling requirements. Investment management tools are important for evaluating the most cost-effective approaches to energy efficiency. (4) Finally, information security should be considered throughout the planning, development, and implementation of the data center.
Security is vital for any organization that depends on information systems and networks to carry out its mission—especially for government agencies like SSA, where maintaining the public's trust is essential. One part of information security management is contingency and continuity of operations planning—vital for a data center that is to be the backbone of SSA's operations and service delivery. Data centers are vulnerable to a variety of service disruptions, including accidental file deletions, network failures, systems malfunctions, and disasters. Accordingly, it is necessary to define plans governing how information will be processed, retrieved, and protected in the event of minor interruptions or a full-blown disaster. These capabilities will be important in helping to ensure that SSA's data center effort is successful and effectively uses Recovery Act funds.
The Department of the Navy (DON) is a major component of DOD, consisting of two uniformed services: the Navy and the Marine Corps. The Marine Corps’ primary mission is to serve as a “total force in readiness” by responding quickly to a wide spectrum of responsibilities, such as attacks from sea to land in support of naval operations, air combat, and security of naval bases. As the only service that operates in three dimensions—in the air, on land, and at sea—the Marine Corps must be equipped to provide rapid and precise logistics support to operating forces in any environment. The Marine Corps’ many dispersed organizational components rely heavily on IT to perform their respective mission-critical operations and related business functions, such as logistics and financial management. For fiscal year 2008, the Marine Corps budget for IT business systems is about $1.3 billion, of which $746 million (57 percent) is for operations and maintenance of existing systems and $553 million (43 percent) is for systems development and modernization. Of the approximately 904 systems in DON’s current inventory, the Marine Corps accounts for 81, or about 9 percent, of the total.

GCSS-MC is one such system investment. According to DOD, it is intended to address the Marine Corps’ long-standing problem of stove-piped logistics systems that collectively provide limited data visibility and access, are unable to present a common, integrated logistics picture in support of the warfighter, and do not provide important decision support tools. In September 2003, the Marine Corps initiated GCSS-MC to (1) deliver integrated functionality across the logistics areas (e.g., supply and maintenance), (2) provide timely and complete logistics information to authorized users for decision making, and (3) provide access to logistics information and applications regardless of location. The system is intended to function in three operational environments—deployed operations (i.e., in theater of war or exercise environment on land or at sea), in-transit, and in garrison. When GCSS-MC is fully implemented, it is to support about 33,000 users located around the world.

GCSS-MC is being developed in a series of large and complex increments using commercially available enterprise resource planning (ERP) software and hardware components. The first increment is currently the only funded portion of the program and is to provide a range of asset management capabilities, including planning inventory requirements to support current and future operations; requesting and tracking the status of products (e.g., supplies and personnel) and services (e.g., maintenance and engineering); allocating resources (e.g., inventory, warehouse capacity, and personnel) to support unit demands for specific products; and scheduling maintenance resources (e.g., manpower, equipment, and supplies) for specific assets, such as vehicles. Additionally, the first increment is to replace four legacy systems scheduled for retirement in 2010. Table 1 describes these four systems. Future increments are to provide additional functionality (e.g., transportation and wholesale inventory management), enhance existing functionality, and potentially replace up to 44 additional legacy systems. The program office estimates the total life cycle cost for the first increment to be about $442 million, including $169 million for acquisition and $273 million for operations and maintenance.
The total life cycle cost of the entire program has not yet been determined because future increments are currently in the planning stages and have not been defined. As of April 2008, the program office reported that approximately $125 million had been spent on the first increment.

To manage the acquisition and deployment of GCSS-MC, the Marine Corps established a program management office within the Program Executive Office for Executive Information Systems. The program office is led by the Program Manager, who is responsible for managing the program’s scope and funding and ensuring that the program meets its objectives. To accomplish this, the program office is responsible for key acquisition management controls, such as architectural alignment, economic justification, EVM, requirements management, risk management, and system quality measurement. In addition, various DOD and DON organizations share program oversight and review activities relative to these and other acquisition management controls. A listing of key entities and their roles and responsibilities is in table 2.

The program reports that the first increment of GCSS-MC is currently in the system development and demonstration phase of the defense acquisition system (DAS). The DAS consists of five key program life cycle phases and three related milestone decision points. These five phases and related milestones are described below, along with a summary of key program activities completed or planned for each phase:

1. Concept refinement: The purpose of this phase is to refine the initial system solution (concept) and create a strategy for acquiring the investment solution. During this phase, the program office defined the acquisition strategy and analyzed alternative solutions. The first increment completed this phase on July 23, 2004, which was 1 month later than planned, and the MDA approved a Milestone A decision to move to the next phase.

2. Technology development: The purpose of this phase is to determine the appropriate set of technologies to be integrated into the investment solution by iteratively assessing the viability of various technologies while simultaneously refining user requirements. During this phase, the program office selected Oracle’s E-Business Suite as the commercial off-the-shelf ERP software. In addition, the program office awarded Accenture the system integration contract to, among other things, configure the software, establish system interfaces, and implement the new system. This system integration contract was divided into two phases—Part 1 for the planning, analysis, and conceptual design of the solution and Part 2 for detailed design, build, test, and deployment of the solution. The program office did not exercise the option for Part 2 of the Accenture contract and shortly thereafter established a new program baseline in June 2006. In November 2006, it awarded a time-and-materials system integration contract valued at $28.4 million for solution design to Oracle. The first increment completed this phase on June 8, 2007, which was 25 months later than planned due in part to contractual performance shortfalls, and the MDA approved a Milestone B decision to move to the next phase.

3. System development and demonstration: The purpose of this phase is to develop the system and demonstrate through developer testing that the system can function in its target environment.
During this phase, the program office extended the solution design contract and increased funding to $67.5 million due, in part, to delays in completing the detailed design activities. As a result, the program office has not yet awarded the next contract (which includes both firm-fixed-price and time-and-materials task orders) for build and testing activities, originally planned for July 2007. Instead, it entered what it termed a “transition period” to complete detailed design activities. According to the program’s baseline, the MDA is expected to approve a Milestone C decision to move to the next phase in October 2008. However, program officials stated that Milestone C is now scheduled for April 2009, which is 35 months later than originally planned.

4. Production and deployment: The purpose of this phase is to achieve an operational capability that satisfies the mission needs, as verified through independent operational test and evaluation, and implement the system at all applicable locations. The program office plans to award a separate firm-fixed-price plus award fee contract for these activities with estimated costs yet to be determined.

5. Operations and support: The purpose of this phase is to operationally sustain the system in the most cost-effective manner over its life cycle. The details of this phase have not yet been defined.

Overall, GCSS-MC was originally planned to reach full operational capability (FOC) in fiscal year 2007 at an estimated cost of about $126 million over a 7-year life cycle. This cost estimate was later revised in 2005 to about $249 million over a 13-year life cycle. However, the program now expects to reach FOC in fiscal year 2010 at a cost of about $442 million over a 12-year life cycle. Figures 1 and 2 show the program’s current status against original milestones and original, revised, and current cost estimates.

Acquisition best practices are tried and proven methods, processes, techniques, and activities that organizations define and use to minimize program risks and maximize the chances of a program’s success. Using best practices can result in better outcomes, including cost savings, improved service and product quality, and a better return on investment. For example, two software engineering analyses of nearly 200 systems acquisition projects indicate that teams using systems acquisition best practices produced cost savings of at least 11 percent over similar projects conducted by teams that did not employ the kind of rigor and discipline embedded in these practices. In addition, our research shows that best practices are a significant factor in successful acquisition outcomes and increase the likelihood that programs and projects will be executed within cost and schedule estimates. We and others have identified and promoted the use of a number of best practices associated with acquiring IT systems. See table 3 for a description of several of these activities.

We have previously reported that DOD has not effectively managed a number of business system investments. Among other things, our reviews of individual system investments have identified weaknesses in such areas as architectural alignment and informed investment decision making, which are also the focus areas of the Fiscal Year 2005 National Defense Authorization Act business system provisions.
Our reviews have also identified weaknesses in other system acquisition and investment management areas—such as EVM, economic justification, requirements management, risk management, and test management. Most recently, for example, we reported that the Army’s approach to investing about $5 billion over the next several years in its General Fund Enterprise Business System, Global Combat Support System-Army Field/Tactical, and Logistics Modernization Program did not include alignment with Army enterprise architecture or use a portfolio-based business system investment review process. Moreover, we reported that the Army did not have reliable analyses, such as economic analyses, to support its management of these programs. We concluded that until the Army adopts a business system investment management approach that provides for reviewing groups of systems and making enterprise decisions on how these groups will collectively interoperate to provide a desired capability, it runs the risk of investing significant resources in business systems that do not provide the desired functionality and efficiency. Accordingly, we made recommendations aimed at improving the department’s efforts to achieve total asset visibility and enhancing its efforts to improve its control and accountability over business system investments. The department agreed with our recommendations. We also reported that DON had not, among other things, economically justified its ongoing and planned investment in the Naval Tactical Command Support System (NTCSS) and had not invested in NTCSS within the context of a well-defined DOD or DON enterprise architecture. In addition, we reported that DON had not effectively performed key measurement, reporting, budgeting, and oversight activities and had not adequately conducted requirements management and testing activities. We concluded that, without this information, DON could not determine whether NTCSS, as defined, and as being developed, is the right solution to meet its strategic business and technological needs. Accordingly, we recommended that the department develop the analytical basis to determine if continued investment in the NTCSS represents prudent use of limited resources and to strengthen management of the program, conditional upon a decision to proceed with further investment in the program. The department largely agreed with these recommendations. In addition, we reported that the Army had not defined and developed its Transportation Coordinators’ Automated Information for Movements System II (TC-AIMS II)—a joint services system with the goal of helping to manage the movement of forces and equipment within the United States and abroad—in the context of a DOD enterprise architecture. We also reported that the Army had not economically justified the program on the basis of reliable estimates of life cycle costs and benefits and had not effectively implemented risk management. As a result, we concluded that the Army did not know if its investment in TC-AIMS II, as planned, is warranted or represents a prudent use of limited DOD resources. Accordingly, we recommended that DOD, among other things, develop the analytical basis needed to determine if continued investment in TC-AIMS II, as planned, represents prudent use of limited defense resources. In response, the department largely agreed with our recommendations and has since reduced the program’s scope by canceling planned investments. 
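For reference, the roughly $193 million in cost growth cited in the next section follows directly from the program baselines described above (a derivation from those figures, not a separately reported number):

\[ \$442\text{M (current estimate)} - \$249\text{M (2005 revised estimate)} \approx \$193\text{M}, \]

with full operational capability slipping from fiscal year 2007 to fiscal year 2010.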
DOD IT-related acquisition policies and guidance, along with other relevant guidance, provide an acquisition management control framework within which to manage business system programs like GCSS-MC. Effective implementation of this framework can minimize program risks and better ensure that system investments are defined in a way to optimally support mission operations and performance, as well as deliver promised system capabilities and benefits on time and within budget. Thus far, GCSS-MC has not been managed in accordance with key aspects of this framework, which has already contributed to more than 3 years in program schedule delays and about $193 million in cost increases. These IT acquisition management control weaknesses include compliance with DOD’s federated BEA not being sufficiently assessed; expected costs not being reliably estimated; earned value management not being adequately implemented; system requirements not always being effectively managed, although this has recently improved; key program risks not being effectively managed; and key system quality measures not being used. The reasons that these key practices have not been sufficiently executed include limitations in the applicable DOD guidance and tools, as well as relevant data not being collected, each of which is described in the applicable sections of this report. By not effectively implementing these key IT acquisition management controls, the program has already experienced sizeable schedule and cost increases, and it is at increased risk of (1) not being defined in a way that best meets corporate mission needs and enhances performance and (2) costing more and taking longer than necessary to complete.

DOD and federal guidance recognize the importance of investing in business systems within the context of an enterprise architecture. Moreover, the 2005 National Defense Authorization Act requires that defense business systems be compliant with DOD’s federated BEA. Our research and experience in reviewing federal agencies show that not making investments within the context of a well-defined enterprise architecture often results in systems that are duplicative, are not well integrated, are unnecessarily costly to interface and maintain, and do not optimally support mission outcomes. To its credit, the program office has followed DOD’s BEA compliance guidance. However, this guidance does not adequately provide for addressing all relevant aspects of BEA compliance. Moreover, DON’s enterprise architecture, which is a major component of DOD’s federated BEA, as well as key aspects of DOD’s corporate BEA, have yet to be sufficiently defined to permit thorough compliance determinations. In addition, current policies and guidance do not require DON investments to comply with the DON enterprise architecture. This means that the department does not have a sufficient basis for knowing if GCSS-MC has been defined to optimize DON and DOD business operations. Each of these architecture alignment limitations is discussed below.

The program’s compliance assessments did not include all relevant architecture products. In particular, the program did not assess compliance with the BEA’s technical standards profile, which outlines, for example, the standards governing how systems physically communicate with other systems and how they secure data from unauthorized access. This is particularly important because systems like GCSS-MC need to employ common standards in order to effectively and efficiently share information with other systems.
A case in point is GCSS-MC and the Navy Enterprise Resource Planning program. Specifically, GCSS-MC has identified 13 technical standards that are not in the BEA technical standards profile, and Navy Enterprise Resource Planning has identified 25 technical standards that are not in the profile. Of these, some relate to networking protocols, which could limit information sharing between these and other systems.

In addition, the program office did not assess compliance with the BEA products that describe system characteristics. This is important because doing so would create a body of information about programs that could be used to identify common system components and services that could potentially be shared by the programs, thus avoiding wasteful duplication. For example, our analysis of GCSS-MC program documentation shows that it contains such system functions as receiving goods, taking physical inventories, and returning goods, which are also system functions cited by the Navy Enterprise Resource Planning program. However, because compliance with the BEA system products was not assessed, the extent to which these functions are potentially duplicative was not considered.

Furthermore, the program office did not assess compliance with BEA system products that describe data exchanges among systems. As we previously reported, establishing and using standard system interfaces is a critical enabler to sharing data. For example, GCSS-MC program documentation indicates that it is to exchange order and status data with other systems. However, the program office has not fully developed its architecture product describing these exchanges and thus does not have the basis for understanding how its approach to exchanging information differs from that of other systems that it is to interface with. Compliance against each of these BEA products was not assessed because DOD’s compliance guidance does not provide for doing so and, according to BTA and program officials, some BEA and program-level architecture products are not sufficiently defined. According to these officials, BTA plans to continue to define these products as the BEA evolves.

The compliance assessment also was not used to identify potential areas of duplication across programs, which DOD has stated is an explicit goal of its federated BEA and associated investment review and decision-making processes. More specifically, even though the compliance guidance provides for assessing programs’ compliance with the BEA product that defines DOD operational activities, and GCSS-MC was assessed for compliance with this product, the results were not used to identify programs that support the same operational activities and related business processes. Given that the federated BEA is intended to identify and avoid duplication not only within DOD components, but also between DOD components, it is important that such commonality be addressed. For example, program-level architecture products for GCSS-MC and Navy Enterprise Resource Planning, as well as two Air Force programs (Defense Enterprise Accounting and Management System-Air Force and the Air Force Expeditionary Combat Support System), show that each supports at least six of the same BEA operational activities (e.g., conducting physical inventory and delivering property and services), and three of these four programs support at least 18 additional operational activities (e.g., performing budgeting and managing receipt and acceptance). As a result, these programs may be investing in duplicative functionality.
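The kind of commonality analysis at issue here is mechanically straightforward once each program’s supported operational activities are captured. The sketch below is a hypothetical illustration (the activity lists are invented, and the program labels abbreviate the programs named above); it is not DOD’s assessment tool:

from itertools import combinations

# Hypothetical BEA operational activities supported by each program.
programs = {
    "GCSS-MC": {"conduct physical inventory", "deliver property and services",
                "manage receipt and acceptance", "perform budgeting"},
    "Navy ERP": {"conduct physical inventory", "deliver property and services",
                 "perform budgeting", "manage financial assets"},
    "DEAMS-AF": {"perform budgeting", "manage receipt and acceptance",
                 "deliver property and services"},
    "ECSS": {"conduct physical inventory", "deliver property and services",
             "manage receipt and acceptance"},
}

# Pairwise set intersection flags candidate overlaps for closer review.
for (name_a, acts_a), (name_b, acts_b) in combinations(programs.items(), 2):
    shared = acts_a & acts_b
    if shared:
        print(f"{name_a} / {name_b}: {len(shared)} shared -> {sorted(shared)}")

An overlap flagged this way is only a candidate for duplication; determining whether the programs actually implement redundant functionality still requires review of the underlying system products.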
Reasons the compliance assessment was not used in this way were that the compliance guidance does not provide for such analyses to be conducted and that programs have not been granted access rights to use this functionality.

The program’s compliance assessment also did not address compliance with the DON enterprise architecture, which is one of the largest components of the federated BEA. This is particularly important given that DOD’s approach to fully satisfying the architecture requirements of the 2005 National Defense Authorization Act is to develop and use a federated architecture in which component architectures are to provide the additional details needed to supplement the thin layer of corporate policies, rules, and standards included in the corporate BEA. As we recently reported, the DON enterprise architecture is not mature because, among other things, it is missing a sufficient description of its current and future environments in terms of business and information/data. However, certain aspects of an architecture nevertheless exist and, according to DON, these aspects will be leveraged in its efforts to develop a complete enterprise architecture. For example, the FORCEnet architecture documents DON’s technical infrastructure. Therefore, opportunities exist for DON to assess its programs in relation to these architecture products and to understand where its programs are exposed to risks because products do not exist, are not mature, or are at odds with other DON programs. According to DOD officials, compliance with the DON architecture was not assessed because DOD compliance policy is limited to compliance with the corporate BEA, and the DON enterprise architecture has yet to be sufficiently developed.

Further, the program’s compliance assessment was not validated by DOD or DON investment oversight and decision-making authorities. More specifically, neither the DOD IRBs nor the DBSMC, nor the BTA in supporting both of these investment oversight and decision-making authorities, reviewed the program’s assessments. According to BTA officials, under DOD’s tiered approach to investment accountability, these entities are not responsible for validating programs’ compliance assessments. Rather, this is a component responsibility, and thus they rely on the military departments and defense agencies to validate the assessments. However, the DON Office of the CIO, which is responsible for precertifying investments as compliant before they are reviewed by the IRB, did not evaluate any of the programs’ compliance assessments. According to CIO officials, they rely on Functional Area Managers to validate a program’s compliance assessments. However, no DON policy or guidance exists that describes how the Functional Area Managers should conduct such validations.

Validation of program assessments is further complicated by the absence of information captured in the assessment tool about what program documentation or other source materials were used by the program office in making its compliance determinations. Specifically, the tool is only configured, and thus was only used, to capture the results of a program’s comparison of program architecture products to BEA products. Thus, it was not used to capture the system products used in making these determinations. In addition, the program office did not develop certain program-level architecture products that are needed to support and validate the program’s compliance assessment and assertions.
According to the compliance guidance, program-level architecture products, such as those defining information exchanges and system data requirements, are not required to be used until after the system has been deployed. This is important because waiting until the system is deployed is too late to avoid the costly rework required to address areas of noncompliance. Moreover, it is not consistent with other DOD guidance, which states that program-level architecture products that describe, for example, information exchanges should be developed before a program begins system development. The limitations in existing BEA compliance-related policy and guidance, the supporting compliance assessment tool, and the federated BEA put programs like GCSS-MC at increased risk of being defined and implemented in a way that does not sufficiently ensure interoperability and avoid duplication and overlap. We currently have a review under way for the Senate Armed Services Committee, Subcommittee on Readiness and Management Support, which is examining multiple programs’ compliance with the federated BEA.

The investment in the first increment of GCSS-MC has not been economically justified on the basis of reliable analyses of estimated system costs over the life of the program. According to the program’s economic analysis, the first increment will have an estimated life cycle cost of about $442 million and deliver an estimated $1.04 billion in risk-adjusted estimated benefits during this same life cycle. This equates to a net present value of about $688 million. While the most recent cost estimate was derived using some effective estimating practices, it did not make use of other practices that are essential to having an accurate and credible estimate. As a result, the Marine Corps does not have a sufficient basis for deciding whether GCSS-MC, as defined, is the most cost-effective solution to meeting its mission needs, and it does not have a reliable basis against which to measure cost performance.

A reliable cost estimate is critical to the success of any IT program, as it provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction, and accountability for results. According to the Office of Management and Budget (OMB), programs must maintain current and well-documented cost estimates, and these estimates must encompass the full life cycle of the program. OMB states that generating reliable cost estimates is a critical function necessary to support OMB’s capital programming process. Without reliable estimates, programs are at increased risk of experiencing cost overruns, missed deadlines, and performance shortfalls. Our research has identified a number of practices for effective program cost estimating. We have issued guidance that associates these practices with four characteristics of a reliable cost estimate. These four characteristics are defined as follows:

Comprehensive: The cost estimates should include both government and contractor costs over the program’s full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance, to retirement. They should also provide a level of detail appropriate to ensure that cost elements are neither omitted nor double counted and include documentation of all cost-influencing ground rules and assumptions.
Well-documented: The cost estimates should have clearly defined purposes and be supported by documented descriptions of key program or system characteristics (e.g., relationships with other systems, performance parameters). Additionally, they should capture in writing such things as the source data used and their significance, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources. The final cost estimate should be reviewed and accepted by management on the basis of confidence in the estimating process and the estimate produced by the process.

Accurate: The cost estimates should provide for results that are unbiased and should not be overly conservative or optimistic (i.e., they should represent the most likely costs). In addition, the estimates should be updated regularly to reflect material changes in the program, and steps should be taken to minimize mathematical mistakes and their significance. The estimates should also be grounded in a historical record of cost estimating and actual experiences on comparable programs.

Credible: The cost estimates should discuss any limitations in the analysis performed that are due to uncertainty or biases surrounding data or assumptions. Further, the estimates’ derivation should provide for varying any major assumptions and recalculating outcomes based on sensitivity analyses, and the estimates’ associated risks and inherent uncertainty should be disclosed. Also, the estimates should be verified based on cross-checks using other estimating methods and by comparing the results with independent cost estimates.

The $442 million life cycle cost estimate for the first increment reflects many of the practices associated with a reliable cost estimate, including all practices associated with being comprehensive and well-documented, and several related to being accurate and credible. (See table 4.) However, several important accuracy and credibility practices were not satisfied.

The cost estimate is comprehensive because it includes both the government and contractor costs specific to development, acquisition (nondevelopment), implementation, and operations and support over the program’s 12-year life cycle. Moreover, the estimate clearly describes how the various subelements are summed to produce the amounts for each cost category, thereby ensuring that all pertinent costs are included and no costs are double counted. Lastly, cost-influencing ground rules and assumptions, such as the program’s schedule, labor rates, and inflation rates, are documented.

The cost estimate is also well-documented in that the purpose of the cost estimate was clearly defined, and a technical baseline has been documented that includes, among other things, the relationships with other systems and planned performance parameters. Furthermore, the calculations and results used to derive the estimate are documented, including descriptions of the methodologies used and traceability back to source data (e.g., vendor quotes, salary tables). Also, the cost estimate was reviewed both by the Naval Center for Cost Analysis and by the Office of the Secretary of Defense, Director for Program Analysis and Evaluation, which ensures a level of confidence in the estimating process and the estimate produced.
However, the estimate lacks accuracy because not all important practices related to this characteristic were satisfied. Specifically, while the estimate is grounded in documented assumptions (e.g., hardware refreshment every 5 years) and is periodically updated to reflect changes to the program, it is not grounded in historical experience with comparable programs. As stated in our guide, estimates should be based on historical records of cost and schedule estimates from comparable programs, and such historical data should be maintained and used for evaluation purposes and future estimates on other comparable programs. The importance of doing so is evidenced by the fact that GCSS-MC’s cost estimate has increased by about $193 million since July 2005, which program officials attributed to, among other things, schedule delays, software development complexity, and the lack of historical data from similar ERP programs. While the program office did leverage historical cost data from other ERP programs, including the Navy’s Enterprise Resource Planning Pilot programs and programs at the Bureau of Prisons and the Department of Agriculture, program officials told us that these programs’ scopes were not comparable. For example, none of the programs had to utilize a communication architecture as complex as the Marine Corps’, which officials cited as a significant factor in the cost increases and a challenge in estimating costs.

The absence of analogous cost data for large-scale ERP programs is due in part to the fact that DOD has not established a standardized cost element structure for ERP programs that can be used to capture actual cost data. According to officials with the Defense Cost and Resource Center, such cost element structures are needed, along with a requirement for programs to report on their costs, but approval and resources have yet to be gained for either the structures or the cost reporting. Until a standardized data structure exists, programs like GCSS-MC will continue to lack a historical database containing cost estimates and actual cost experiences of comparable ERP programs. This means that current and future GCSS-MC cost estimates will lack sufficient accuracy for effective investment decision making and performance measurement.

Compounding the estimate’s limited accuracy are limitations in its credibility. Specifically, while the estimate satisfies some of the key practices for a credible cost estimate (e.g., confirming key cost drivers, performing sensitivity analyses, having an independent cost estimate prepared by the Naval Center for Cost Analysis that was within 4 percent of the program’s estimate, and conducting a risk analysis that showed a range of estimated costs of $411 million to $523 million), no risk analysis was performed to determine the program schedule’s risks and their associated impact on the cost estimate. As described earlier in this report, the program has experienced about 3 years in schedule delays and recently experienced delays in completing the solution design phase. Therefore, conducting a schedule risk analysis and using the results to assess the variability in the cost estimate is critical for ensuring a credible cost estimate. Program officials agreed that the program’s schedule is aggressive and risky and that this risk was not assessed in determining the cost estimate’s variability.
Without doing so, the program’s cost estimate is not credible, and thus the program is at risk of cost overruns as a result of schedule delays.

Forecasting expected benefits over the life of a program is also a key aspect of economically justifying an investment. OMB guidance advocates economically justifying investments on the basis of net present value. If net present value is positive, then the corresponding benefit-to-cost ratio will be greater than 1 (and vice versa). This guidance also advocates updating the analyses over the life of the program to reflect material changes in expected benefits, costs, and risks. Since estimates of benefits can be uncertain because of the imprecision in both the underlying data and the modeling assumptions used, the effects of this uncertainty should be analyzed and reported. By doing this, informed investment decision making can occur through the life of the program, and a baseline can be established against which to compare the accrual of actual benefits from deployed system capabilities.

The original benefit estimate for the first increment was based on questionable assumptions and insufficient data from comparable programs. The most recent economic analysis, dated January 2007, includes monetized, yearly benefit estimates for fiscal years 2010–2019 in three key areas—inventory reductions, reductions in inventory carrying costs, and improvements in maintenance processes. Collectively, these benefits totaled about $2.89 billion (not risk-adjusted). However, these calculations were made using questionable assumptions and limited data. For example, the total value of the Marine Corps inventory needed to calculate inventory reductions and reductions in carrying costs could not be determined because of limitations with existing logistics systems, and the cost savings resulting from improvements in maintenance processes were calculated based on assumptions from an ERP implementation in the commercial sector that, according to program officials, is not comparable in scope to GCSS-MC.

To account for the uncertainty inherent in the benefits estimate, the program office performed a Monte Carlo simulation. According to the program office, this risk analysis generated a discounted and risk-adjusted benefits estimate of $1.04 billion. As a result of this $1.85 billion adjustment to estimated benefits, the program office has a more realistic benefit baseline against which to compare the accrual of actual benefits from deployed system capabilities.

The program office has elected to implement EVM, which is a proven means for measuring program progress and thereby identifying potential cost overruns and schedule delays early, when they can be minimized. In doing so, it has adopted a tailored EVM approach that focuses on schedule. However, this schedule-focused approach has not been effectively implemented because it is based on a baseline schedule that was not derived using key schedule estimating practices. According to program officials, the schedule was driven by an aggressive program completion date established in response to direction from oversight entities to complete the program as soon as possible. As a result, they said that following these practices would have delayed this completion date. Regardless, this means that the schedule baseline is not reliable, and progress will likely not track to the schedule. The program office has adopted a tailored approach to performing EVM because of the contract type being used.
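The economics above lend themselves to a compact illustration. The sketch below is hypothetical: the yearly figures, discount rate, and uncertainty distribution are invented for demonstration (they loosely echo the magnitudes in this report but are not GCSS-MC data). It shows how a net present value and benefit-to-cost ratio are computed from discounted streams, and how a Monte Carlo simulation risk-adjusts an uncertain benefit estimate by averaging many randomized trials.

import random

RATE = 0.028                # assumed real discount rate, illustrative only
benefits = [289.0] * 10     # hypothetical yearly benefits, millions of dollars
costs = [44.2] * 10         # hypothetical yearly costs, millions of dollars

def present_value(stream, rate=RATE):
    # Discount a stream of yearly amounts back to today.
    return sum(x / (1 + rate) ** (t + 1) for t, x in enumerate(stream))

def npv_and_bcr(benefit_stream, cost_stream):
    pvb = present_value(benefit_stream)
    pvc = present_value(cost_stream)
    return pvb - pvc, pvb / pvc   # NPV > 0 exactly when BCR > 1

def risk_adjusted_benefits(trials=10000):
    # Monte Carlo: scale the point-estimate benefits by a factor drawn from
    # an assumed triangular distribution, then average the discounted results.
    total = 0.0
    for _ in range(trials):
        factor = random.triangular(0.3, 1.0, 0.6)  # assumed uncertainty range
        total += present_value([b * factor for b in benefits])
    return total / trials

npv, bcr = npv_and_bcr(benefits, costs)
print(f"point estimate: NPV = {npv:.0f}M, BCR = {bcr:.2f}")
print(f"risk-adjusted present-value benefits: {risk_adjusted_benefits():.0f}M")

The relationship the OMB guidance relies on is visible in npv_and_bcr: because both quantities are built from the same discounted benefit and cost totals, the net present value is positive exactly when the benefit-to-cost ratio exceeds 1.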
As noted earlier, the contract types associated with GCSS-MC integration and implementation vary and include, for example, firm-fixed-price contracts and time-and-materials contracts. Under a firm-fixed-price contract, the price is not subject to any adjustment on the basis of the contractor’s cost experience in performing the contract. For a time-and-materials contract, supplies or services are acquired on the basis of (1) an undefined number of direct labor hours that are paid at specified fixed hourly rates and (2) actual cost for materials. According to DOD guidance, EVM is generally not encouraged for firm-fixed-price, level-of-effort, and time-and-materials contracts. In these situations, the guidance states that programs can use a tailored EVM approach in which an integrated master schedule (IMS) is exclusively used to provide visibility into program performance. DON has chosen to implement this tailored EVM approach on GCSS-MC. In doing so, it is measuring progress against schedule commitments, and not cost commitments, using an IMS for each program phase. According to program officials, the IMS describes and guides the execution of program activities.

Regardless of the approach used, effective implementation depends on having a reliable IMS. The success of any program depends in part on having a reliable schedule specifying when the program’s set of work activities will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for the systematic execution of a program, but it also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. Our research has identified nine practices associated with effective schedule estimating: (1) capturing key activities, (2) sequencing key activities, (3) assigning resources to key activities, (4) integrating key activities horizontally and vertically, (5) establishing the duration of key activities, (6) establishing the critical path for key activities, (7) identifying “float time” between key activities, (8) distributing reserves to high-risk activities, and (9) performing a schedule risk analysis.

The current IMS for the solution design and transition-to-build phase of the first increment was developed using some of these practices. However, it does not reflect several practices that are fundamental to having a schedule baseline that provides a sufficiently reliable basis for measuring progress and forecasting slippages. To the program office’s credit, its IMS captures and sequences key activities required to complete the project, integrates the tasks horizontally, and identifies the program’s critical path. However, the program office is not monitoring the actual durations of scheduled activities so that it can address the impact of any deviations on later scheduled activities. Moreover, the schedule does not adequately identify the resources needed to complete the tasks and is not integrated vertically, meaning that multiple teams executing different aspects of the program cannot effectively work to the same master schedule. Further, the IMS does not adequately mitigate schedule risk by identifying float time between key activities, introducing schedule reserve for high-risk activities, or including the results of a schedule risk analysis. See table 5 for the results of our analyses relative to each of the nine practices.
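Because the tailored EVM approach measures progress solely against the IMS, the schedule practices above are what give the earned value data meaning. The following minimal sketch shows two of the nine practices, establishing the critical path and identifying float, on an invented four-activity network; it demonstrates the technique and is not drawn from the GCSS-MC schedule.

tasks = {
    # name: (duration in days, predecessors); listed so that every task's
    # predecessors appear before it, which the passes below rely on
    "design":    (30, []),
    "build":     (45, ["design"]),
    "configure": (20, ["design"]),
    "test":      (25, ["build", "configure"]),
}

# Forward pass: earliest start and finish for each activity.
early = {}
for name, (dur, preds) in tasks.items():
    start = max((early[p][1] for p in preds), default=0)
    early[name] = (start, start + dur)
project_finish = max(finish for _, finish in early.values())

# Backward pass: latest start and finish that do not delay completion.
late = {}
for name in reversed(list(tasks)):
    dur, _ = tasks[name]
    successors = [s for s, (_, preds) in tasks.items() if name in preds]
    finish = min((late[s][0] for s in successors), default=project_finish)
    late[name] = (finish - dur, finish)

for name in tasks:
    total_float = late[name][0] - early[name][0]
    tag = "CRITICAL" if total_float == 0 else f"float: {total_float} days"
    print(f"{name:10s} earliest start {early[name][0]:3d}  ({tag})")

Activities with zero total float lie on the critical path; any slip in them moves the project finish date, which is why distributing schedule reserve to high-risk activities and performing a schedule risk analysis matter.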
According to program officials, they intend to begin monitoring actual activity start and completion dates so that they can proactively adjust later scheduled activities that are affected by deviations. However, they do not plan to perform the three practices related to understanding and managing schedule risk because doing so would likely extend the program’s completion date, and they set this date to be responsive to direction from DOD and DON oversight entities to complete the program as soon as possible. In our view, not performing these practices prevents the inherent risks in meeting this imposed completion date from being proactively understood and addressed. The consequence of omitting these practices is a schedule that does not provide a reliable basis for performing EVM.

Well-defined and managed requirements are recognized by DOD guidance as essential and can be viewed as a cornerstone of effective system acquisition. One aspect of effective requirements management is requirements traceability. By tracing requirements both backward from system requirements to higher-level business or operational requirements and forward to system design specifications and test plans, the chances of the deployed product satisfying requirements are increased, and the ability to understand the impact of any requirement changes, and thus to make informed decisions about such changes, is enhanced.

The program office recently strengthened its requirements traceability. In November 2007, and again in February 2008, the program office was unable to demonstrate for us that it could adequately trace its 1,375 system requirements to both design specifications and test documentation. Specifically, the program office was at that time using a tool called DOORS®, which, if implemented properly, allows each requirement to be linked from its most conceptual definition to its most detailed definition, as well as to design specifications and test cases. In effect, the tool maintains the linkages among requirement documents, design documents, and test cases even if requirements change. However, the system integration contractor was not using the tool. Instead, the contractor was submitting its 244 work products accompanied by spreadsheets that linked each work product to one or more system requirements and test cases. The program office then had to verify and validate the spreadsheets and import and link each work product to the corresponding requirement and test case in DOORS. Because of the sheer number of requirements and work products and the approach’s potential to impact cost, schedule, and performance, the program designated this approach as a medium risk. It later closed the risk after the proposed mitigation strategy failed to mitigate it and the risk was realized as a high-priority program issue (i.e., an actual problem). According to program officials, this requirements traceability approach resulted in time-consuming delays in approving the design work products and importing and establishing links between these products and the requirements in DOORS, in part because the work products were not accompanied by complete spreadsheets that established the traceability. As a result, about 30 percent of the contractor’s work products had yet to be validated, approved, and linked to requirements when the design phase was originally scheduled to be complete.
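To make the traceability problem concrete, the sketch below shows the kind of bidirectional linkage a tool such as DOORS maintains. All identifiers and links are hypothetical; the point is that a requirement with no approved design work product or test case surfaces immediately as a gap.

from collections import defaultdict

# Forward links: requirement -> design work products -> test cases.
req_to_design = {
    "REQ-0001": ["WP-017"],
    "REQ-0002": ["WP-017", "WP-102"],
    "REQ-0003": [],   # gap: no approved design work product yet
}
design_to_test = {
    "WP-017": ["TC-210", "TC-211"],
    "WP-102": ["TC-305"],
}

def forward_trace(req):
    # Trace a requirement forward to design work products and test cases.
    products = req_to_design.get(req, [])
    tests = [t for wp in products for t in design_to_test.get(wp, [])]
    return products, tests

def backward_index():
    # Invert the links so each work product traces back to its requirements.
    index = defaultdict(list)
    for req, products in req_to_design.items():
        for wp in products:
            index[wp].append(req)
    return index

for req in req_to_design:
    products, tests = forward_trace(req)
    status = "traced" if products and tests else "NOT FULLY TRACED"
    print(f"{req}: design={products} tests={tests} [{status}]")
print("backward:", dict(backward_index()))

Maintaining these links in one place is what allows a change to any requirement to be assessed for impact in both directions, which the spreadsheet hand-off described above could not do reliably.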
Officials stated that the contractor was not required to use DOORS because it was not experienced with this tool, and becoming proficient with it would have required time and resources, thereby increasing both the program's cost and schedule. Ironically, however, not investing the time and resources to address the limitations in the program's traceability approach contributed to recent delays in completing the solution design activities, and additional resources had to be invested to address its requirements traceability problems.

The program office now reports that it can trace requirements backward and forward. In April 2008, we verified this by tracing 60 out of 61 randomly sampled requirements backward to system requirements and forward to approved design specifications and test plans. Program officials explained that we could not trace the one requirement because the related work products had not yet been approved. In addition, they stated that there were additional work products that had yet to be finalized and traced. Without adequate traceability, the risk of a system not performing as intended and requiring expensive rework is increased. To address its requirements traceability weakness, program officials told us that they now intend to require the contractor to use DOORS during the next phase of the program (build and test). If implemented effectively, the new process should address previous requirements traceability weaknesses and thereby avoid a repeat of past problems.

Proactively managing program risks is a key acquisition management control and, if done properly, can greatly increase the chances of programs delivering promised capabilities and benefits on time and within budget. To the program office's credit, it has defined a risk management process that meets relevant guidance. However, it has not effectively implemented the process for all identified risks. As a result, these risks have become actual program problems that have impacted the program's cost, schedule, and performance commitments.

DOD acquisition management guidance, as well as other relevant guidance, advocates identifying facts and circumstances that can increase the probability of an acquisition's failing to meet cost, schedule, and performance commitments and then taking steps to reduce the probability of their occurrence and impact. In brief, effective risk management consists of (1) establishing a written plan for managing risks; (2) designating responsibility for risk management activities; (3) encouraging project-wide participation in the identification and mitigation of risks; (4) defining and implementing a process that provides for the identification, analysis, and mitigation of risks; and (5) examining the status of identified risks in program milestone reviews.

The program office has developed a written plan for managing risks and established a process that together provide for the above-cited risk management practices, and it has followed many key aspects of its plan and process. For example, the Program Manager has been assigned overall responsibility for managing risks, and individuals have been assigned ownership of each risk, to include conducting risk analyses, implementing mitigation strategies, and working with the risk support team.
The plan and process encourage project-wide participation in the identification and mitigation of risks by allowing program staff to submit a risk for inclusion in a risk database and take ownership of the risk and the strategy for mitigating it. In addition, stakeholders can bring potential risks to the Program Manager's attention through interviews, where potential risks are considered and evaluated. The program office has thus far identified and categorized individual risks. As of December 2007, the risk database contained 27 active risks—2 high, 15 medium, and 10 low.

Program risks are considered during program milestone reviews. Specifically, our review of documentation for the Design Readiness Review, a key decision point during the system development and demonstration phase leading up to a Milestone C decision, showed that key risks were discussed. Furthermore, the Program Manager reviews the status of program risks through a risk watch list and bimonthly risk briefings.

However, the program office has not consistently followed other aspects of its process. For example, it did not perform key practices for identifying and managing schedule risks, such as conducting a schedule risk assessment and building reserve time into its schedule. In addition, mitigation steps for several key risks were either not performed in accordance with the risk management strategy, or risks that were closed as having been mitigated were later found to be actual program issues (i.e., problems). Of the 25 medium risks in the closed risk database as of February 2008, 4 were closed even though mitigation steps were not performed in accordance with the strategy, and these risks ultimately became actual issues. Examples from these medium risks are as follows:

In one case, the mitigation strategy was for the contractor to deliver certain design documents that were traced to system requirements and to do so before beginning the solution build phase. The design documents, however, were not received in accordance with the mitigation strategy. Specifically, program officials told us that the design documents contained inaccuracies or misinterpretations of the requirements and were not completed on time because of the lack of resources to correct these problems. As a result, the program experienced delays in completing its solution design activities.

In another case, the mitigation strategy included creating the documentation needed to execute the contract for monitoring the build phase activities. However, the mitigation steps were not performed due to, among other things, delays in approving the contractual approach. As a result, the risk became a high-priority issue in February 2008. According to a program issue report, the lack of a contract to monitor system development progress may result in unnecessary rework and thus additional program cost overruns, schedule delays, and performance shortfalls.

Four more of the same 25 medium risks were retired on the basis that key mitigation steps for each one had been implemented, but the strategies proved ineffective, and the risks became actual program issues. Included in these 4 risks were the following: In one case, the program office closed a risk regarding data exchange with another DON system because key mitigation steps to establish exchange requirements were fully implemented. However, in February 2008, a high-priority issue was identified regarding the exchange of data with this system.
According to program officials, the risk was mitigated to the fullest extent possible and closed based on the understanding that continued evaluation of data exchange requirements would be needed. However, because the risk was retired, this evaluation did not occur.

In another case, a requirements management risk was closed on the basis of having implemented mitigation steps, which involved establishing a requirements management process, including having complete requirements traceability spreadsheets. However, although several of the mitigation steps were not fully implemented, the risk was closed on the basis of what program officials described as an understanding reached with the contractor regarding the requirements management process. Several months later, a high-priority issue concerning requirements traceability was identified because the program office discovered that the contractor was not adhering to the understanding.

Unless risk mitigation strategies are monitored to ensure that they are fully implemented and that they produce the intended outcomes, and additional mitigation steps are taken when they are not, the program office will continue to be challenged in preventing risks from developing into actual cost, schedule, and performance problems.

Effective management of programs like GCSS-MC depends in part on the ability to measure the quality of the system being acquired and implemented. Two measures of system quality are trends in (1) the number of unresolved severe system defects and (2) the number of unaddressed high-priority system change requests. GCSS-MC documentation recognizes the importance of monitoring such trends. Moreover, the program office has established processes for (1) collecting and tracking data on the status of program issues, including problems discovered during early test events, and (2) capturing data on the status of requests for changes to the system. However, its processes do not provide the full complement of data that are needed to generate a reliable and meaningful picture of trends in these areas. In particular, data on problem and change request priority levels and closure dates are either not captured or not consistently maintained. Further, program office oversight of contractor-identified issues or defects is limited. Program officials acknowledged these data limitations, but they stated that oversight of contractor-identified issues is not their responsibility. Without tracking trends in key indicators, the program office cannot adequately understand and report to DOD decision makers whether GCSS-MC's quality and stability are moving in the right direction.

Program guidance and related best practices encourage trend analysis and the reporting of system defects and program problems as measures or indicators of system quality and program maturity. As we have previously reported, these indicators include trends in the number of unresolved problems according to their significance or priority. To the program office's credit, it collects and tracks what it calls program issues, which are problems identified by program office staff or the system integrator that are process, procedure, or management related. These issues are contained in the program's Issues-Risk Management Information System (I-RMIS). Among other things, each issue in I-RMIS is to have an opened and closed date and an assigned priority level of high, medium, or low.
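Why complete opened dates, closure dates, and consistently defined priorities matter can be seen in a short sketch of the trend computation itself. The following is a generic Python illustration of the kind of defect-trend metric the guidance calls for, not the program's actual data or tooling; the dates and priorities are hypothetical.

    # Hypothetical issue records: trend analysis needs an opened date, a
    # closed date (once resolved), and a consistently defined priority.
    from datetime import date

    issues = [
        {"priority": "high", "opened": date(2007, 10, 1), "closed": date(2007, 12, 15)},
        {"priority": "high", "opened": date(2007, 11, 5), "closed": None},
        {"priority": "low",  "opened": date(2007, 11, 20), "closed": date(2008, 1, 10)},
        {"priority": "high", "opened": date(2008, 1, 3), "closed": None},
    ]

    def open_high_priority(records, as_of):
        """Count high-priority issues still unresolved as of a given date."""
        return sum(1 for r in records
                   if r["priority"] == "high"
                   and r["opened"] <= as_of
                   and (r["closed"] is None or r["closed"] > as_of))

    # A count that rises month over month signals declining system quality.
    for month in (date(2007, 11, 1), date(2007, 12, 1), date(2008, 1, 1), date(2008, 2, 1)):
        print(month.isoformat(), open_high_priority(issues, month))

An issue that lacks a closure date can never drop out of any month's count, and issues prioritized against each owner's personal definition cannot be reliably filtered; this is why the gaps in the two databases, described below, prevent a meaningful trend.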
In addition, the integration contractor tracks issues that its staff identifies related to such areas as system test defects. These issues are contained in the contractor's Marine Corps Issue Tracking System (MCITS). Each issue in MCITS is to have a date when it was opened and is to be assigned a priority on a scale of 1 to 5. According to program officials, the priority levels are based on guidance from the Institute of Electrical and Electronics Engineers (IEEE). (See table 6 for a description of each priority level.)

However, neither I-RMIS nor MCITS contains all the data needed to reliably produce key measures or indicators of system quality and program maturity. Examples of these limitations are as follows:

For I-RMIS, the program office has not established a standard definition of the priority levels used. Rather, according to program officials, each issue owner is allowed to assign a priority based on the owner's definition of what high, medium, and low mean. By not using standard priority definitions for categorizing issues, the program office cannot ensure that it has an accurate and useful understanding of the problems it is facing at any given time, and it will not know if it is addressing the highest priority issues first.

For MCITS, the integration contractor does not track closure dates for all issues. For example, as of April 2008, over 30 percent of the closed issues did not have closure dates. This is important because it limits the contractor's ability to understand trends in the number of high-priority issues that are unresolved. Program officials acknowledged the need to have closure dates for all closed issues and stated that they intend to correct this. If it is not corrected, the program office will not be able to create a reliable measure of system quality and program maturity.

Compounding the above limitations in MCITS data is the program office's decision not to use contractor-generated reports that are based on MCITS data. Specifically, reports summarizing MCITS issues are posted to a SharePoint site for the program office to review. However, program officials stated that they do not review these reports because the MCITS issues are the contractor's responsibility, not theirs. Yet without tracking and monitoring contractor-identified issues, which include such things as having the right skill sets and having the resources to track and monitor issues captured in separate databases, the program office is missing an opportunity to understand whether proactive action is needed to address emerging quality shortfalls in a timely manner.

Program guidance and related best practices encourage trend reporting of change requests as measures or indicators of system stability and quality. These indicators include trends in the number and priority of approved changes to the system's baseline functional and performance capabilities that have yet to be resolved. To its credit, the program office collects and tracks changes to the system, which can range from minor or administrative changes to more significant changes that propose or impact important system functionality. These changes can be identified by either the program office or the contractor, and they are captured in a master change request spreadsheet. Further, the changes are to be prioritized according to the levels described in table 7, and the dates that change requests are opened and closed are to be recorded.
However, the change request master spreadsheet does not contain the data needed to reliably produce key measures or indicators of system stability and quality. Examples of these limitations are as follows:

The program office has not prioritized proposed changes or managed these changes according to their priorities. For example, of the 572 change requests as of April 2008, 171 were assigned a priority level, and 401 were not. Of these 171, 132 were categorized as priority 1. Since then, the program office has temporarily recategorized the 401 change requests to priority 3 until each one's priority can be evaluated. The program office has yet to establish a time frame for doing so.

The dates that change requests are resolved are not captured in the master spreadsheet. Rather, program officials said that these dates are in the program's IMS and are shown there as target implementation dates. While the IMS does include the dates changes will be implemented, these dates are not actual dates, and they are not used to establish trends in unresolved change requests. Without the full complement of data needed to monitor and measure change requests, the program office cannot know and disclose to DOD decision makers whether the quality and stability of the system are moving in the right direction.

DOD's success in delivering large-scale business systems, such as GCSS-MC, is in large part determined by the extent to which it employs the kind of rigorous and disciplined IT management controls that are reflected in DOD policies and related guidance. While implementing these controls does not guarantee a successful program, it does minimize a program's exposure to risk and thus the likelihood that it will fall short of expectations. In the case of GCSS-MC, living up to expectations is important because the program is large, complex, and critical to supporting the department's warfighting mission.

The department has not effectively implemented a number of essential IT management controls on GCSS-MC, which has already contributed to significant cost overruns and schedule delays, and has increased the program's risk going forward of not delivering a cost-effective system solution and not meeting future cost, schedule, capability, and benefit commitments. Moreover, GCSS-MC could be duplicating the functionality of related systems and may be challenged in interoperating with these systems because compliance with key aspects of DOD's federated BEA has not been demonstrated. Also, the program's estimated return on investment, and thus the economic basis for pursuing the proposed system solution, is uncertain because of limitations in how the program's cost estimate was derived, raising questions as to whether the nature and level of future investment in the program need to be adjusted. In addition, the program's schedule was not derived using several key schedule estimating practices, which impacts the integrity of the cost estimate and precludes effective implementation of EVM. Without effective EVM, the program cannot reliably gauge progress of the work being performed so that shortfalls can be known and addressed early, when they require less time and fewer resources to overcome. Another related indicator of progress, trends in system problems and change requests, also cannot be gauged because the data needed to do so are not being collected.
Collectively, these weaknesses have already helped to push back the completion of the program's first increment by more than 3 years and added about $193 million in costs, and they are introducing a number of risks that, if not effectively managed, could further impact the program. However, whether these risks will be effectively managed is uncertain because the program has not always followed its defined risk management process and, as a result, has allowed yesterday's potential problems to become today's actual cost, schedule, and performance problems.

While the program office is primarily responsible for ensuring that effective IT management controls are implemented on GCSS-MC, other oversight and stakeholder organizations share some responsibility. In particular, even though the program office has not demonstrated its alignment with the federated BEA, it nevertheless followed established DOD architecture compliance guidance and used the related compliance assessment tool in assessing and asserting its compliance. The root cause for not demonstrating compliance thus is not traceable to the program office, but rather is due to, among other things, the compliance guidance and tool being limited, and the program's oversight entities not validating the compliance assessment and assertion. Also, even though the program's cost estimate was not informed by the cost experiences of other ERP programs of the same scope, the program office is not to blame because the root cause for this is that the Defense Cost and Resource Center has not maintained a standardized cost element structure for its ERP programs and a historical database of ERP program costs for programs like GCSS-MC to use. In contrast, other weaknesses are within the program office's control, as evidenced by its positive actions to address the requirements traceability shortcomings that we brought to its attention during the course of our work and its well-defined risk management process.

All told, this means that addressing the GCSS-MC IT management control weaknesses requires the combined efforts of the various DOD organizations that share responsibility for defining, justifying, managing, and overseeing the program. By doing so, the department can better assure itself that GCSS-MC will optimally support its mission operations and performance goals and will deliver promised capabilities and benefits, on time and within budget.

To ensure that each GCSS-MC system increment is economically justified on the basis of a full and reliable understanding of costs, benefits, and risks, we recommend that the Secretary of Defense direct the Secretary of the Navy to ensure that investment in the next acquisition phase of the program's first increment is conditional upon fully disclosing to program oversight and approval entities the steps under way or planned to address each of the risks discussed in this report, including the risk of not being architecturally compliant and being duplicative of related programs, not producing expected mission benefits commensurate with reliably estimated costs, not effectively implementing EVM, not mitigating known program risks, and not knowing whether the system is becoming more or less mature and stable. We further recommend that investment in all future GCSS-MC increments be limited if the management control weaknesses that are the source of these risks, and which are discussed in this report, have not been fully addressed.
To address each of the IT management control weaknesses discussed in this report, we are also making a number of additional recommendations. However, we are not making recommendations for the architecture compliance weaknesses discussed in this report because we have a broader review of DON program compliance with the BEA and DON enterprise architecture that will be issued shortly and will contain appropriate recommendations.

To improve the accuracy of the GCSS-MC cost estimate, as well as other cost estimates for the department's ERP programs, we recommend that the Secretary of Defense direct the appropriate organization within DOD to collaborate with relevant organizations to standardize the cost element structure for the department's ERP programs, to use this standard structure to maintain cost data for its ERP programs, including GCSS-MC, and to use these cost data in developing future cost estimates.

To improve the credibility of the GCSS-MC cost estimate, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program's current economic analysis is adjusted to reflect the risks associated with its not reflecting cost data for comparable ERP programs, and otherwise not having been derived according to other key cost estimating practices, and that future updates to the GCSS-MC economic analysis similarly do so.

To enhance GCSS-MC's use of EVM, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program office (1) monitors the actual start and completion dates of work activities performed so that the impact of deviations on downstream scheduled work can be proactively addressed; (2) allocates resources, such as labor hours and material, to all key activities on the schedule; (3) integrates key activities and supporting tasks and subtasks; (4) identifies and allocates the amount of float time needed for key activities to account for potential problems that might occur along or near the schedule's critical path; (5) performs a schedule risk analysis to determine the level of confidence in meeting the program's activities and completion date; (6) allocates schedule reserve for high-risk activities on the critical path; and (7) discloses the inherent risks and limitations associated with any future use of the program's EVM reports until the schedule has been risk-adjusted.

To improve GCSS-MC management of program risks, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program office (1) adds each of the risks discussed in this report to its active inventory of risks, (2) tracks and evaluates the implementation of mitigation plans for all risks, (3) discloses to appropriate program oversight and approval authorities whether mitigation plans have been fully executed and have produced the intended outcome(s), and (4) only closes a risk if its mitigation plan has been fully executed and produced the intended outcome(s).

To strengthen GCSS-MC system quality measurement, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program office (1) collects the data needed to develop trends in unresolved system defects and change requests according to their priority and severity and (2) discloses these trends to appropriate program oversight and approval authorities.
In written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, the department stated that it concurred with two of our recommendations and partially concurred with the remaining five. In general, the department partially concurred because it said that efforts were either under way or planned that will address some of the weaknesses that these recommendations are aimed at correcting. For example, the department stated that GCSS-MC will begin to use a recently developed risk assessment tool that is expected to assist programs in identifying and mitigating internal and external risks. Further, it said that these risks will be reported to appropriate department decision makers. We support the efforts that DOD described in its comments because they are generally consistent with the intent of our recommendations, and we believe that, if they are fully and properly implemented, they will go a long way in addressing the management control weaknesses that our recommendations are aimed at correcting. In addition, we have made a slight modification to one of these five recommendations to provide the department with greater flexibility in determining which organizations should provide for the recommendation's implementation.

We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Congressional Budget Office; the Secretary of Defense; and the Department of Defense Office of the Inspector General. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3439 or hiter@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our objective was to determine whether the Department of the Navy is effectively implementing information technology management controls on the Global Combat Support System-Marine Corps (GCSS-MC). To accomplish this, we focused on the first increment of GCSS-MC relative to the following management areas: architectural alignment, economic justification, earned value management, requirements management, risk management, and system quality measurement. In doing so, we analyzed a range of program documentation, such as the acquisition strategy, program management plan, and Acquisition Program Baseline, and interviewed cognizant program officials.

To determine whether GCSS-MC was aligned with the Department of Defense's (DOD) federated business enterprise architecture (BEA), we reviewed the program's BEA compliance assessments and system architecture products, as well as versions 4.0, 4.1, and 5.0 of the BEA, compared them with the BEA compliance requirements described in the Fiscal Year 2005 National Defense Authorization Act and DOD's BEA compliance guidance, and evaluated the extent to which the compliance assessments addressed all relevant BEA products. We also determined the extent to which the program-level architecture documentation supported the BEA compliance assessments.
We obtained documentation, such as the BEA compliance assessments, from the GCSS-MC and Navy Enterprise Resource Planning programs, as well as the Air Force's Defense Enterprise Accounting and Management System and Air Force Expeditionary Combat Support System programs. We then compared these assessments to identify potential redundancies or opportunities for reuse and determined whether the compliance assessments examined duplication across programs and whether the tool that supports these assessments is being used to identify such duplication. In doing so, we interviewed program officials and officials from the Department of the Navy, Office of the Chief Information Officer, and reviewed recent GAO reports to determine the extent to which the programs were assessed for compliance against the Department of the Navy enterprise architecture. We also interviewed program officials and officials from the Business Transformation Agency and the Department of the Navy, including the logistics Functional Area Manager, and obtained guidance documentation from these officials to determine the extent to which the compliance assessments were subject to oversight or validation.

To determine whether the program had economically justified its investment in GCSS-MC, we reviewed the latest economic analysis to determine the basis for the cost and benefit estimates. This included evaluating the analysis against Office of Management and Budget guidance and GAO's Cost Assessment Guide. In doing so, we interviewed cognizant program officials, including the Program Manager and cost analysis team, regarding their respective roles, responsibilities, and actual efforts in developing and/or reviewing the economic analysis. We also interviewed officials at the Office of Program Analysis and Evaluation and the Naval Center for Cost Analysis as to their respective roles, responsibilities, and actual efforts in developing and/or reviewing the economic analysis.

To determine the extent to which the program had effectively implemented earned value management, we reviewed relevant documentation, such as the contractor's monthly status reports, Acquisition Program Baselines, and schedule estimates, and compared them with DOD policies and guidance. We also reviewed the program's schedule estimates and compared them with relevant best practices to determine the extent to which they reflect key estimating practices that are fundamental to having a reliable schedule. In doing so, we interviewed cognizant program officials to discuss their use of best practices in creating the program's current schedule.

To determine the extent to which the program implemented requirements management, we reviewed relevant program documentation, such as the baseline list of requirements and system specifications, and evaluated them against relevant best practices to determine the extent to which the program has effectively managed the system's requirements and maintained traceability backward to high-level business operation requirements and system requirements, and forward to system design specifications and test plans. To determine the extent to which the requirements were traceable, we randomly selected 61 program requirements and traced them both backward and forward. This sample was designed with a 5 percent tolerable error rate at the 95 percent level of confidence, so that, if we found 0 problems in our sample, we could conclude statistically that the error rate was less than 5 percent.
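The sample-size logic behind this design can be reproduced with a short calculation. The sketch below uses the standard zero-failure attribute-sampling bound and treats the draws as independent, which is conservative when sampling without replacement from the 1,375 requirements; it is our illustration of the arithmetic, not the actual sampling tool used.

    import math

    # Zero-failure attribute sampling: if the true error rate were p, the
    # probability of seeing 0 errors in a sample of n items is (1 - p)**n.
    # Choose n so that this probability falls below 1 - confidence.
    tolerable_error = 0.05   # 5 percent tolerable error rate
    confidence = 0.95        # 95 percent level of confidence

    n = math.ceil(math.log(1 - confidence) / math.log(1 - tolerable_error))
    print(n)  # 59, so a sample of 61 with zero failures meets the design

Because one sampled requirement could not be traced, the zero-failure criterion was not strictly met, which is why the assessment that follows rests on the weight of other evaluation factors as well.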
Based upon the weight of all other factors included in our evaluation, our verification of 60 out of 61 requirements was sufficient to demonstrate traceability. In addition, we interviewed program officials involved in the requirements management process to discuss their roles and responsibilities for managing requirements.

To determine the extent to which the program implemented risk management, we reviewed relevant risk management documentation, such as risk plans and risk database reports demonstrating the status of the program's major risks, and compared the program office's activities with DOD acquisition management guidance and related best practices. We also reviewed the program's mitigation process with respect to key risks, including 25 medium risks in the retired risk database that were actively addressed by the program office, to determine the extent to which these risks were effectively managed. In doing so, we interviewed responsible program officials, such as the Program Manager, the Risk Manager, and subject matter experts, to discuss their roles and responsibilities and obtain clarification on the program's approach to managing risks associated with acquiring and implementing GCSS-MC.

To determine the extent to which the program is collecting the data and monitoring trends in the number of unresolved system defects and the number of unaddressed change requests, we reviewed program documentation such as the testing strategy, configuration management policy, test defect reports, change request logs, and issue data logs. We compared the program's data collection and analysis practices in these areas with program guidance and best practices to determine the extent to which the program is measuring important aspects of system quality. We also interviewed program officials, such as system developers, relevant program management staff, and change control managers, to discuss their roles and responsibilities for system quality measurement.

We conducted our work at DOD offices and contractor facilities in the Washington, D.C., metropolitan area and Triangle, Va., from June 2007 to July 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective.

In addition to the individual named above, key contributors to this report were Neelaxi Lakhmani, Assistant Director; Monica Anatalio; Harold Brumm; Neil Doherty; Cheryl Dottermusch; Nancy Glover; Mustafa Hassan; Michael Holland; Ethan Iczkovitz; Anh Le; Josh Leiling; Emily Longcore; Lee McCracken; Madhav Panwar; Karen Richey; Melissa Schermerhorn; Karl Seifert; Sushmita Srikanth; Jonathan Ticehurst; Christy Tyson; and Adam Vodraska.
GAO has designated the Department of Defense's (DOD) business systems modernization as a high-risk program because, among other things, it has been challenged in implementing key information technology (IT) management controls on its thousands of business systems. The Global Combat Support System-Marine Corps program is one such system. Initiated in 2003, the program is to modernize the Marine Corps logistics systems. The first increment is to cost about $442 million and be deployed in fiscal year 2010. GAO was asked to determine whether the Department of the Navy is effectively implementing IT management controls on this program. To accomplish this, GAO analyzed the program's implementation of several key IT management disciplines, including economic justification, earned value management, risk management, and system quality measurement.

DOD has not effectively implemented key IT management controls provided for in DOD and related acquisition guidance on this program. If implemented effectively, these and other IT management disciplines increase the likelihood that a given system investment will produce the right solution to fill a mission need and that this system solution will be acquired and deployed in a manner that maximizes the chances of delivering promised system capabilities and benefits on time and within budget. Neither of these outcomes is being fully realized on this program, as evidenced by the fact that its first increment has already slipped more than 3 years and is expected to cost about $193 million more than envisioned. These slippages and cost overruns can be attributed in part to the management control weaknesses discussed in this report and summarized below. Moreover, additional slippages and overruns are likely if these and other IT management weaknesses are not addressed.

Investment in the system has not been economically justified on the basis of reliable estimates of both benefits and costs. Specifically, while projected benefits were risk-adjusted to compensate for limited data and questionable assumptions, the cost side of the benefit/cost equation is not sufficiently reliable because it was not derived in accordance with key cost estimating practices. In particular, it was not based on historical data from similar programs, and it did not account for schedule risks, both of which are needed for the estimate to be considered accurate and credible.

Earned value management, which the program uses to measure progress, has not been adequately implemented. Specifically, the schedule baseline against which the program gauges progress is not based on key estimating practices provided for in federal guidance, such as assessing schedule risks and allocating schedule reserves to address these risks. As a result, program progress cannot be adequately measured, and likely program completion dates cannot be projected based on actual work performed.

Some significant program risks have not been adequately managed. While a well-defined risk management plan and supporting process have been put in place, the process has not always been followed. Specifically, mitigation steps for significant risks either have not been implemented or proved ineffective, allowing the risks to become actual problems.

The data needed to produce key indicators of system quality, such as trends in the volume of significant and unresolved problems and change requests, are not being collected. Without such data, it is unclear whether the system is becoming more or less mature and stable.
The reasons for these weaknesses range from limitations of DOD guidance and tools to not collecting relevant data. Until they are addressed, DOD is at risk of delivering a solution that does not cost-effectively support mission operations and falls short of cost, schedule, and capability expectations.
Autism is a developmental disability that can cause significant social, communication, and behavioral challenges. Individuals with autism may communicate, interact, behave, and learn in ways that are different from others. The learning, thinking, and problem-solving abilities of individuals with autism can range from gifted to severely challenged. Some individuals with autism need extensive help in their daily lives, while others need less. CDC estimates that about 1 in 68 children have been identified as having autism.

Diagnosing autism involves developmental screening and a comprehensive diagnostic evaluation. According to information on CDC's website, developmental screening consists of a short test to tell if a child is learning basic skills when expected based on the child's age, or if the child might have delays. During developmental screening, a doctor might ask the parent some questions or talk and play with the child to observe whether the child plays, learns, speaks, acts, and moves as expected. A delay in any of these areas could be a sign of a problem. The American Academy of Pediatrics recommends that all children be screened for developmental delays and disabilities during regular well-child doctor visits and specifically for autism at 18 and 24 months. If a doctor identifies any signs of a problem, a comprehensive diagnostic evaluation, which provides a thorough review that may include looking at the child's behavior and development and interviewing the parents, should be performed. In many cases, the doctor may refer the child and family to a specialist, such as a developmental pediatrician or child psychologist, for further assessment and diagnosis.

There are a variety of interventions that are used to treat young children with autism who may face significant social, communication, and behavioral challenges. Typical therapies include physical and occupational therapy, speech and language therapy, and behavioral therapies. For example, occupational therapy can teach a child skills, such as dressing and relating to people in school and social situations, to help the child live as independently as possible. Speech and language therapy can help improve a child's communication skills, such as verbal skills or gestures. There are many types of behavioral therapies used to treat children diagnosed with autism. For example, applied behavior analysis (ABA) is a commonly used framework to provide intervention services to children with autism. It uses behavior modification principles, such as positive reinforcement, to increase or decrease targeted behaviors. Other interventions can be incorporated into a treatment plan for a child with autism, such as parent-implemented interventions—structured parent training programs through which parents learn intervention practices that they can implement with their child at home and in the community.

Children with disabilities—including children with autism—can receive intervention services through IDEA, which is overseen at the federal level by Education. Part B of IDEA requires states to make a free appropriate public education available to eligible children with disabilities as a condition of grant eligibility.
In general, under Part B, Education provides formula grants to states to fund a portion of the excess costs incurred by school districts to provide special education and related services—referred to in this report as "special education services"—to students with disabilities ages 3 through 21, including those with autism, who meet certain eligibility criteria. Part B of IDEA requires that the special education services that each individual student needs in order to receive a free appropriate public education be included in the student's individualized education program (IEP). Each student's IEP must include, among other information, the child's present levels of academic achievement and functional performance, measurable annual goals, and the special education and related services to be provided to enable the child to advance appropriately toward attaining the annual goals and to be involved and make progress in the general education curriculum. The IEP is developed by a team of the child's teachers, parents, a school district representative, other related services personnel, and, whenever appropriate, the child. DOD is also required to provide special education services to eligible children who are served by its schools, although the department does not receive funding from Education.

Through Part C of IDEA, Education provides formula grants to states to fund a portion of the costs of providing early intervention services to infants and toddlers through age 2 with developmental delays or who have been diagnosed with a physical or mental condition with a high probability of resulting in developmental delays. Under Part C, children are required to have an individualized family service plan (IFSP), which contains information about the services necessary to facilitate a child's development and enhance the family's capacity to facilitate the child's development. Through the IFSP process, family members and service providers are intended to work as a team to plan, implement, and evaluate services tailored to the family's unique resources, priorities, and concerns related to enhancing the development of the child as identified through the assessment of the family. Again, although DOD does not receive funding from Education, DOD is responsible for providing early intervention services to infants and toddlers through age 2 who are eligible to enroll in a DOD school.

Some children with disabilities—including children with autism—can receive intervention services through federal health insurance programs such as Medicaid, a joint federal-state program overseen by CMS that finances the delivery of health care services for a diverse low-income and medically needy population. Although federal law sets minimum requirements for eligibility and coverage, states are accorded significant flexibility to design and implement their Medicaid programs, resulting in over 50 state programs that vary, for example, in how health care is financed and delivered. Children whose household incomes are above the threshold for Medicaid eligibility may have health care services financed through their state's CHIP. CHIP is also a joint federal-state program overseen by CMS that states administer under broad federal requirements; and like Medicaid, the programs vary in eligibility and services covered. States can use Medicaid or CHIP to cover services such as physical and occupational therapy, and speech and language therapy, which may also be eligible IDEA early intervention and special education services.
DOD offers health care services for active duty and retired uniformed servicemembers and their families, as well as National Guard and Reserve members and their families, through TRICARE. Under TRICARE, beneficiaries may obtain care from military treatment facilities or through its purchased care system of civilian providers. The TRICARE program offers beneficiaries a managed care option, a preferred provider organization option, and a fee-for-service option—as well as other options available to specific eligibility groups. For example, children of active duty servicemembers may also qualify for the Extended Care Health Option, which is a supplementary program that offers additional coverage to beneficiaries with special needs. Among other requirements, beneficiaries must have a qualifying medical condition, which includes autism, to register in the Extended Care Health Option.

In recent years, DOD has had a series of demonstrations to increase the provision of ABA to servicemembers' family members who are diagnosed with autism. In March 2008, DOD began the Enhanced Access to Autism Services Demonstration to increase access to ABA for family members of active duty servicemembers by allowing ABA services to be provided by behavior technicians. In August 2012, DOD expanded ABA coverage to non-active duty family members through the TRICARE basic program. In July 2013, DOD began the ABA Pilot to provide supplemental ABA services to non-active duty family members who seek additional services.

The Autism CARES Act reauthorized the Interagency Autism Coordinating Committee (IACC), which is a federal advisory committee that was initially established under the Children's Health Act of 2000. The act directs the IACC to monitor autism research—and to the extent practicable, services and support activities—across all relevant federal departments and agencies, including coordination of federal autism activities. The Autism CARES Act also requires the IACC to develop and annually update a strategic plan for autism research, as well as for services and support activities, to the extent practicable, and make recommendations to ensure that federal autism activities are not unnecessarily duplicative. Further, it requires the IACC to meet at least twice annually. As of February 2016, the IACC consisted of 16 federal members and 15 nonfederal members, which included representatives from advocacy groups, university professors, individuals with autism, and parents of children with autism.

In our November 2013 report, we found that the IACC's and federal agencies' efforts to coordinate and monitor federal autism activities were limited and that the IACC's data on autism research were outdated. We made recommendations to address these findings. The limited coordination was particularly concerning given that we also found that 84 percent of the autism research projects funded by federal agencies from fiscal years 2008 through 2012 had the potential to be duplicative, because the projects were categorized to the same research objectives in the IACC strategic plan. The research objectives were broad enough to fund research that may not be duplicative, and agencies funding research in the same areas can be appropriate and advantageous—especially with a research topic as complex and heterogeneous as autism. Further, funding similar research on the same topic is sometimes appropriate for purposes of replicating or corroborating results.
However, agencies funding research in the same area can also lead to unnecessary duplication and wasting of scarce federal resources if funding decisions are not effectively coordinated. We concluded that the limited coordination and monitoring of federal agencies' autism research could lead to numerous projects being funded to address a few specific areas within the realm of autism research—some of the projects having the potential to be unnecessarily duplicative—while other areas may be left unexplored.

Consistent with our November 2013 recommendations, the Autism CARES Act directs the Secretary of Health and Human Services to designate an existing official within HHS to oversee—in consultation with the Secretaries of Defense and Education—national autism research, services, and support activities. This official is required to implement autism activities, taking into account the strategic plan developed by the IACC, and ensure that federal autism activities are not unnecessarily duplicative.

Agencies specifically solicited research on early autism identification and interventions and funded research in this area as a result of these solicitations. Other mechanisms agencies used to encourage early identification and interventions included funding for access to care and services, training, information resources, and awareness campaigns. Lastly, two HHS agencies have programs that serve young children and include developmental screenings for enrollees.

Through FOAs, four agencies—DOD, Education, NIH, and the Health Resources and Services Administration (HRSA), another HHS agency—solicited research proposals on early screening, diagnosis, and interventions for young children with autism from fiscal years 2012 through 2015. DOD had 3 FOAs, Education had 4, HRSA had 8, and NIH had 10 FOAs soliciting research in these areas during this time period. As a result of these specific solicitations, these agencies funded research projects totaling approximately $109 million during this time period.

DOD, Education, NIH, and the Administration for Community Living (ACL), an agency within HHS, funded an additional $286 million in research related to early screening, diagnosis, and interventions for autism from fiscal years 2012 through 2015, though not through FOAs that specifically solicited this type of research. For example, Education funded autism research through FOAs that solicited projects on early intervention and early learning in special education in general, as well as through FOAs that solicited research on commercially viable education technology products. NIH also funded intramural research related to autism identification and interventions. See table 1 for the amounts that agencies awarded through FOAs that specifically solicited research on autism early identification and interventions from fiscal years 2012 through 2015, as well as research funded through other solicitations.

In addition to soliciting individual research projects, agencies provide funding for centers and networks to conduct research on a variety of autism-related topics, including early identification and interventions. NIH solicits applications for Autism Centers of Excellence to research autism diagnosis, treatment, and optimal means of service delivery, among other topics. For example, officials from one Autism Center of Excellence stated that they were developing eye tracking technology to screen children for autism early in life, as a lack of eye contact is one of the signs of autism.
Additionally, CDC has provided supplemental funding to six Autism and Developmental Disabilities Monitoring Network sites to monitor the prevalence of autism in 4-year-old children, to better understand these children's characteristics, and to increase early identification.

Agencies have established various mechanisms to encourage early screening, diagnosis, and interventions for young children with autism. These mechanisms include grants to improve access to care and services and increase provider training, as well as the development of information resources and awareness campaigns.

HRSA's autism state implementation grant program provides funding to improve access to comprehensive, coordinated health care and related services for children and youth with autism and other developmental disabilities. Most recently, HRSA provided multi-year funding to nine states beginning in fiscal years 2013 and 2014. HRSA required its grantees that received funding in September 2013 or later to focus their efforts on promoting early identification, diagnosis, and entry into services based on lessons learned from early state program investments and expressed needs in the field. Officials from one of these states told us that they pursued the grant to connect the siloed infrastructure that exists within the state and to identify children with autism at an earlier age than was occurring in the state. This grantee has conducted activities related to screening, assessment, and early intervention—including offering training to primary care providers, health department officials, interdisciplinary child development centers, and other professionals on developmental and autism screening and autism warning signs—and has plans for sustaining the activities beyond the 3-year grant.

Federal agencies provide funding to train educators and practitioners. For example, in fiscal year 2013, HRSA's two training programs—Leadership Education in Neurodevelopmental and Other Related Disabilities, and Leadership Education in Developmental Behavioral Pediatrics—trained more than 18,000 professionals, including psychologists and pediatricians. These programs provide training on evidence-based services for children with autism and developmental disabilities, and on providing comprehensive diagnostic evaluations to confirm or rule out an autism diagnosis. HRSA also collaborated with CDC in developing Autism Case Training, which is available to the public on CDC's website. Autism Case Training is designed to educate future health care providers on fundamental components of identifying, diagnosing, and managing autism. Education also funds grants for training scholars and professionals in special education, early intervention, and related services programs, which could include training specific to autism.

Agencies have developed documents and websites to provide information and resources on interventions for young children with autism. For example, Education funded the National Professional Development Center on Autism Spectrum Disorder to promote the use of evidence-based practices for children and youth with autism. The center identified 27 evidence-based interventions that were shown to be effective through scientific research for individuals with autism. These interventions are included on the center's website, as well as instructions on implementing the interventions and an implementation checklist.
ACL funded the organization Autism NOW, which maintains a website that provides information and links to resources, including for early detection, early intervention, and early education. DOD also developed a directory for military families to provide them with information on the educational services that are close to specific military installations in select states. HHS and some of its agencies, such as CDC and HRSA, maintain websites that provide resources for families and individuals with autism, including information on diagnosing autism and interventions. Also, another HHS agency, the Agency for Healthcare Research and Quality (AHRQ), published a report in August 2014 on behavioral interventions for autism that focused on children from birth to age 12. According to AHRQ documentation, this report could be used to, among other things, provide clinicians who treat children with autism the evidence needed for different treatment strategies.

Furthermore, ACL provides funding to the University Centers for Excellence in Developmental Disabilities Education, Research, and Service, which were established in 1963 to help ensure that Americans with disabilities can be independent and productive. ACL's funding supports, in part, the centers' core functions, which include information dissemination, research, and training of students and fellows in multiple professional disciplines, as well as community training to professionals working in multiple disciplines supporting individuals with disabilities. According to ACL officials, while autism is not a specific area of emphasis for the centers, a substantial number of their information dissemination, research, and training activities address autism. For example, according to ACL officials, one center disseminated autism guidelines to programs that serve young children in its state, while another center implemented a project to examine ways to reduce barriers to conduct screening for developmental disabilities, including autism, in underserved populations.

Multiple federal agencies are involved in producing awareness campaigns related to the identification of developmental delays. In March 2014, a group of HHS agencies—the Administration for Children and Families (ACF), ACL, CDC, CMS, HRSA, NIH, and the Substance Abuse and Mental Health Services Administration—and Education launched the Birth to 5: Watch Me Thrive! initiative to encourage developmental and behavioral screening and support for children—including those with autism—their families, and the providers who care for them. The initiative seeks to celebrate milestones, promote universal screening, identify possible delays and concerns early, and enhance developmental supports.

In addition, CDC's "Learn the Signs. Act Early." initiative promotes awareness of healthy developmental milestones in early childhood, the importance of tracking each child's development, and the importance of acting early if concerns are identified. The initiative works with state, territorial, and national partners to improve early childhood systems by enhancing collaborative efforts to improve screening and referral to early intervention services, to promote "Learn the Signs. Act Early." messages and tools, and to improve early identification efforts in their states and territories.

ACF and HRSA have programs that include developmental screenings for enrollees. ACF's Head Start and Early Head Start programs promote the school readiness of young children from low-income families from birth to age 5.
Head Start and Early Head Start programs also support the mental, social, and emotional development of children. In addition to education services, the programs provide children and their families with health, nutrition, social, and other services. All children in Head Start are required to receive developmental screening—including speech, hearing, and vision—within 45 days of the child's entry into the program. Children who need further specialized assessment to determine whether they have a disability, such as autism, may be referred for an evaluation. HRSA has three programs that seek to reduce the age at which children are screened for developmental delays.
Title V Block Grant: This program provides grants to all states to implement plans that address the health services needs within the state for the target population of mothers, infants, and children, including children with special health care needs. According to HRSA officials, as part of this program, 40 states have elected to address a new National Performance Measure that tracks the percentage of children ages 10 months to 71 months receiving a developmental screen using a parent-completed screening tool. The states' intent in selecting this measure is to increase the proportion of children, including those with autism, who are screened at a younger age and who receive treatment.
Early Childhood Comprehensive Systems Program: This program awards grants to states and organizations with the goal of ensuring that all children from birth to age 3 receive the appropriate services at the appropriate time. The program brings together primary care providers, teachers, families, and caregivers to develop seamless systems of care for children from birth to age 3 using one of three strategies. One of these strategies is increasing developmental screening of young children to identify and treat conditions, such as autism, early. In 2013—the most recent grant competition—15 states received grants to implement this strategy.
Federal Home Visiting Program: HRSA, in partnership with ACF, provides funding to states for the Home Visiting Program, which supports pregnant women and families, and helps at-risk parents of children from birth to kindergarten entry access resources and develop skills to raise children who are physically, socially, and emotionally healthy and ready to learn. According to HRSA officials, children enrolled in the Home Visiting Program receive an initial baseline developmental screening and may receive additional screening depending on how the program is administered in the state. In 2014, HRSA revised this program to support the goal of reducing the age of diagnosis of developmental disabilities, including autism, by bringing together a select group of grantees to, among other things, identify methods to increase the percentage of children who receive a developmental screening.
Individualized intervention services are provided to young children with autism through IDEA early intervention and special education programs; additionally, the five states we examined and DOD have taken specific actions to help respond to the needs of the children with autism that they serve. Data on children with autism served through IDEA special education programs are likely underreported, as some of these children may be counted in other disability categories, such as the developmental delay category. Children enrolled in federal health care programs—Medicaid, CHIP, or TRICARE—received a variety of intervention services through these programs.
Intervention services provided to young children with autism through IDEA early intervention and special education programs are individualized to the needs of each child; additionally, selected states and DOD have taken specific actions to help respond to the needs of the children with autism that they serve. According to IDEA regulations, the services a child with autism receives are determined by the team that develops the child's IFSP (for children in early intervention programs) or IEP (for children in special education programs), which includes the child's parent, and must be individualized to the child. Officials from some of the five states we spoke with—California, Massachusetts, North Carolina, Ohio, and Texas—and from DOD commented on the need for individualized services regardless of a child's diagnosis. For example, some state officials commented that specific methodologies or services—such as ABA—could be provided to a child within the context of IDEA-required services if these services are identified as a need for that child, regardless of whether the child has autism or another type of developmental disability. Further, children with autism have needs that can vary considerably, and therefore the services provided to these children also vary. DOD officials stated that children with autism who are eligible for special education services—like children with other disabilities—can be provided specialized instruction, intervention strategies, modifications of the general education curriculum, and other related services, such as occupational therapy, physical therapy, and speech and language services, depending on the individual needs of the child. The five states we examined and DOD reported taking specific actions to help respond to the needs of the young children with autism that they serve. Some of these actions are provided to children as part of IDEA early intervention or special education programs, while others are provided in addition to these programs. The following are examples of actions taken. California has 21 regional centers in the state that administer California's early intervention program. According to California officials, funding is made available to each center to have an autism specialist on staff who coordinates and directs the diagnostic and treatment practices for the families that the center serves. In 1998, Massachusetts began the autism specialty services program to supplement its early intervention program. If a child has an autism diagnosis and is enrolled in Massachusetts' early intervention program, the child can also enroll in the autism specialty services program and receive autism-specific early intervention services, in addition to general early intervention services. According to Massachusetts officials, the state began the autism specialty services program because many general early intervention providers did not have the appropriate skill set to work with children with autism. Massachusetts has approved 17 providers for autism specialty services across geographic areas. The families choose providers, who are generally in their area, and the providers conduct intake assessments. According to state officials, under the autism specialty services program, children usually receive 10 to 30 hours a week of intensive behavioral intervention in their homes or care centers, in addition to general early intervention services.
The autism specialty services program uses autism interventions including the Early Start Denver Model, Floortime, and ABA. According to Massachusetts officials, 1,842 children up to age 3 received autism specialty services in the state's fiscal year 2015. At the time of our review, Massachusetts did not have waiting lists for the autism specialty services program. However, Massachusetts officials stated that there are waiting lists to get an autism diagnosis—a requirement to receive the specialty services—especially in the western part of the state. Massachusetts uses a combination of state funding, Medicaid, and private insurance to pay for the program. In 2010, North Carolina partnered with the Carolina Institute for Developmental Disabilities at the University of North Carolina at Chapel Hill to develop clinical guidelines for early intervention services for children with autism. This effort was partially funded by ACL and HRSA grants. The guidelines outline how to integrate information on autism into the state's early intervention program and contain information on screening for autism, primary models of interventions for autism, and working with parents to implement interventions. Additionally, in 2014, North Carolina worked with professionals at the University of North Carolina at Chapel Hill's TEACCH Autism Program to organize and conduct training for clinicians in 11 of the state's 16 Children's Developmental Services Agencies—the agencies that administer the state's early intervention program. The training, funded by an HRSA grant, featured the use of the Autism Diagnostic Observation Schedule, Second Edition—a semi-structured assessment of communication, social interaction, play, and restricted and repetitive behaviors for individuals suspected of having autism. North Carolina's special education program has developed an autism plan that outlines goals related to building capacity within the school districts to strengthen the provision of autism interventions. For example, this plan includes goals for providing training to teachers related to serving children with autism. North Carolina's special education program also holds an annual conference that brings together education professionals and parents of children with disabilities and includes sessions on serving children with autism. According to North Carolina officials, the state also provides funding to autism teams in local school districts that submit a plan on how they will strengthen the instructional practices and services for children with autism in their district, including the use of best practices. Beginning in 2008, Ohio funded the Autism Diagnosis Education Project, which facilitates partnerships between community-based primary care physicians and professionals providing early intervention services to increase access to local, timely, standardized, and comprehensive diagnostic evaluations for children suspected of having autism. In this program, once an early intervention team has a question about whether a child being served might have autism, the team works with a physician located near the child to make (or rule out) an autism diagnosis. Since its inception, the project has expanded to include 46 participating counties, 330 early intervention professionals, and 39 partner physicians, and the average age of diagnosis has decreased to 29 months. From January 1, 2013, through May 21, 2015, 301 children were assessed in the program, and 52 percent were diagnosed with autism. The project is funded with state funds.
In 2011, Ohio began to implement an early intervention program across the state, referred to as the Play & Language for Autistic Youngsters (PLAY) project. PLAY is a parent-implemented intervention. Specifically, the PLAY project trains early intervention specialists on certain principles, methods, and techniques that emphasize following the child's lead as a means of improving social impairment, a core symptom of autism. These early intervention specialists teach parents how to implement and use the intervention in everyday interactions with their child—PLAY providers ask parents to implement PLAY 15 to 20 hours a week. The state has held four trainings since 2011 with about 150 participants. According to Ohio officials, of Ohio's 88 counties, 45 have early intervention specialists who are either trained, or are in the process of being trained, in PLAY. An additional 17 counties have access to PLAY providers. For children enrolled in Ohio's early intervention program, the PLAY curriculum may be indicated as an early intervention service need on a child's IFSP if the IFSP team believes that the PLAY methods and strategies can better help address the family's outcomes than more traditional service delivery methods, according to Ohio officials. The state, which funds the PLAY project trainings through its general revenue fund, has received positive feedback from the providers and families involved in the PLAY project. In 2007, Texas began requiring the team that develops an IEP to consider 11 strategies when forming IEPs for children diagnosed with autism enrolled in its special education program, through what is known as the Autism Supplement. According to Texas officials, the Autism Supplement was designed so that the IEP team would look at the unique characteristics of children with autism. Because not all strategies may be suitable for use with every child, the team has the option to exclude any of the 11 strategies from the IEP, but must provide a written rationale for the exclusion. In 2008, Texas established the Autism Program, which provides ABA to children diagnosed with autism. According to Texas officials, the purpose of the program is to make ABA more accessible, particularly to children diagnosed with autism who are having difficulty in school. The Autism Program is not part of Texas' special education program, but frequently serves children who are enrolled in and receiving services through that program. According to Texas officials, the program works with the school districts to avoid duplicating services the districts provide and to make ABA services available after school. Board-certified behavior analysts provide oversight and treatment plans; actual treatment services are largely provided by registered behavior technicians. According to Texas officials, about 295 children were served by the Autism Program in the state's fiscal year 2014—84 of whom were ages 3 through 5. According to these officials, the program is funded through Texas' general revenue fund and a limited amount of private insurance reimbursement. At the time of our review, the program maintained a waiting list of about 1,150 children and, due to limited funding, had eight providers in six communities, so very few counties in Texas were covered. State officials noted that the number of providers was likely to increase due to action taken during a previous state legislative session to increase funding.
In April 2015, after learning about the PLAY project in Ohio, Wright-Patterson Air Force Base in Ohio began a PLAY project pilot funded for one year by the Air Force Surgeon General. Two full-time staff were hired and trained in PLAY. The pilot has provided PLAY training to 24 enrolled families as an intervention that parents can implement with their children diagnosed with autism, ages 18 months to 6 years, to improve social interactions. PLAY is a transportable intervention that parents can take with them when they move to a new duty station. In April 2016, Air Force officials told us that the pilot had been funded through fiscal year 2019. DOD officials told us that the department supports continued Air Force funding to allow sufficient time to determine outcomes of the pilot. Additionally, according to DOD officials, autism specialists are available to all DOD schools needing assistance with the education of a child age 3 and older with autism. Data reported to Education from 49 states and the District of Columbia indicate that approximately 66,000 children ages 3 through 5 with autism received services in school year 2014-2015. However, this is likely an undercount of the children with autism receiving special education services, because children with autism may not be reported to Education under the autism disability category. For children enrolled in special education programs, states are required to report to Education the number of children receiving services by disability category, including autism. However, states may use a general disability category, "developmental delay," when reporting, either because the child may not yet be diagnosed with autism, or because the child may be diagnosed but the parents prefer to use the general disability category for privacy purposes. Further, communication difficulties are a typical symptom of autism, and Education, DOD, and some state officials told us that children with autism may also be reported under the disability category "speech and language impairment." Children who are placed in the "developmental delay" or "speech and language impairment" category would not appear in the autism category. Certain states' use of disability categories may also influence the number of children with autism reported by the state. For example, California and Texas do not allow the use of the "developmental delay" category. Ohio pays its school districts based on the number of children served and their disability categories; school districts get more funding for students in the "autism" disability category than for those in the "developmental disability" category. See appendix II for the number of children ages 3 through 5 in states' special education autism category, by state, in school year 2014-2015. The actual number of children with autism receiving early intervention services is also unknown. States are required to report to Education the total number of children served under early intervention programs, but not by disability category. DOD also does not collect these data. DOD and Education officials both stated that the specific disability designation of a child does not dictate the types of early intervention services that the child receives. Further, it is common for children under age 3 not to have a specific diagnosis. Children enrolled in Medicaid or CHIP from our five selected states—Delaware, Georgia, Illinois, Kentucky, and Minnesota—received a variety of intervention services during fiscal year 2013.
Specifically, we identified 8,208 children ages 1 through 5 with autism in these states, and almost all received intervention services. Children age 5 accounted for the largest age group of children we identified with autism; however, a large proportion of children were ages 3 and 4. We also found that nearly 20 percent of children identified with autism were ages 1 or 2. See figure 1 for the distribution of children enrolled in Medicaid or CHIP from these five states and identified with autism, by age. Over half of the services young children identified with autism received were within the speech, language, and audiology category and the physical and occupational therapy category—with the former category making up about one-third of the total intervention services received by these children in fiscal year 2013. See figure 2 for the percentage of intervention services received, by service category. When services received are examined by age group, speech, language, and audiology services remain the most commonly received services; however, there is variation in the other categories of services received. Among children ages 1 and 2, physical and occupational therapy services were nearly as common as speech, language, and audiology services, while behavioral services and home care and skills training were less common. Beginning at age 3, children received behavioral services and home care and skills training more frequently. Figure 3 shows the categories of services received by young children identified with autism, by age group. While autism services are not a specified Medicaid benefit, CMS issued an informational bulletin in July 2014 that may result in more children receiving these services under Medicaid. Specifically, the informational bulletin clarified states' options for providing autism-related services to children under various Medicaid authorities. It also discussed requirements related to services for children under the Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) benefit. Some states have been using other Medicaid authorities, such as home and community-based services waivers, to provide behavioral therapy services, such as ABA. Because waivers cover a limited number of beneficiaries, states' Medicaid programs may not have been able to fulfill the need for services of children with autism through a waiver. CMS officials told us that the clarifying bulletin would likely result in an increase in the number of children receiving such services, as states may have to transition these individuals to the EPSDT benefit under their state plans, which must be furnished to beneficiaries statewide. For at least one state, this may already be the case. Utah Department of Health officials told us that they transitioned to providing ABA services under their state plan in July 2015 as a result of this guidance. While about 380 children received services under Utah's autism waiver in the state's fiscal year 2015, that number increased to about 455 in the first 5 months following the transition, and officials estimate that approximately 4,000 children may now be able to receive autism services through the state plan. CMS officials stated that states may choose to amend their state plans to include autism treatment services in an effort to be more transparent about the services available to children diagnosed with autism.
At the time of our review, CMS officials indicated that 7 states had recently had Medicaid state plan changes approved to include autism treatment services, and an additional 18 states had either submitted draft changes to their Medicaid state plans—known as amendments—to cover such services or were in discussions with CMS officials about the autism treatment services they propose to cover in their state plans. Children enrolled in DOD's TRICARE who were identified with autism received a variety of intervention services in fiscal year 2014. Specifically, 8,103 children ages 1 through 5 identified with autism were eligible to receive services through TRICARE, and almost all received services. About half of the children identified with autism were ages 3 and 4, and about 30 percent were ages 1 or 2. See figure 4 for the distribution of children identified with autism, by age. Unlike Medicaid and CHIP beneficiaries, young children enrolled in TRICARE and identified with autism most commonly received behavioral services, which comprised about one-third of the intervention services received by these children. However, speech, language, and audiology services and physical and occupational therapy services still made up a large portion of the intervention services received. See figure 5 for the percentage of intervention services received, by service category. When services received are examined by age group, behavioral services remain the most commonly received services in most age groups; however, there is variation in the other categories of services received. At age 1, children most commonly received speech, language, and audiology services; physical and occupational therapy services were also commonly received. Among children age 2, behavioral services were about as common as physical and occupational therapy services and speech, language, and audiology services. Behavioral services were notably more common than other services among children ages 3 through 5. (See fig. 6.) In 2014, DOD offered ABA to children—TRICARE beneficiaries with an autism diagnosis—through autism demonstrations. Of the 8,103 children enrolled in TRICARE whom we identified with autism, 3,788 (47 percent) were enrolled in the demonstrations. Most of the behavioral services received by children through TRICARE went to those who participated in the demonstrations—an expected finding, given that the demonstrations focused on providing ABA and related behavioral services that may not have been as readily accessible to non-participants. Overall, children enrolled in the demonstrations received nearly three times as many intervention services as children who were not enrolled. See figure 7 for the percentage of total intervention services received by children who were enrolled in the demonstrations compared to those who were not. Information on the expenditures related to providing intervention services to young children identified with autism enrolled in Medicaid, CHIP, and TRICARE is available in appendix III. HHS has recently taken steps that could help address recommendations we made in November 2013.
Specifically, to promote better federal coordination and avoid the potential for unnecessary duplication, we recommended that (1) the IACC and NIH identify, through the department's monitoring of federal autism activities, projects that may be unnecessarily duplicative and thus candidates for consolidation or elimination, and (2) DOD, Education, HHS, and NSF determine methods for identifying and monitoring the autism research conducted by other agencies. To develop these recommendations, we applied criteria from federal internal control standards and best practices for collaboration from our prior work, which state that tracking and monitoring are key activities that can benefit interagency collaborative mechanisms. Since our 2013 report was issued, HHS has continued to disagree that our recommendations were warranted. Nevertheless, HHS has recently taken actions required by the Autism CARES Act that could help coordinate federal autism research and implement our recommendations. First, as directed by the act, the Secretary of Health and Human Services designated an official to serve as the Autism Coordinator to oversee national autism research, services, and support activities and to ensure that autism activities funded by HHS and other federal agencies are not unnecessarily duplicative. HHS announced this designation in April 2016, while a draft of this report was at the department for comment. Second, the Autism CARES Act requires that the IACC's strategic plan include recommendations to ensure that autism research funded by HHS and other federal agencies is not unnecessarily duplicative. While the 2013 strategic plan—released in April 2014—is the most recent plan, an update has been under discussion since late 2015. Specifically, the IACC met for the first time as a full committee in November 2015—16 months after its last full committee meeting. During this meeting, as well as subsequent meetings in January and April 2016, NIH staff and IACC members discussed updating the strategic plan. The requirement to include the aforementioned recommendations was discussed; however, no specific details on how this would be accomplished were identified. In addition to the recent steps taken by HHS in response to Autism CARES Act requirements, NIH also released fiscal year 2011 and 2012 data on federal autism research, which the agency collects on behalf of the IACC. These data were made available in April 2016, while a draft of this report was at the department for comment. Specifically, these data were released into the Autism Spectrum Disorder Research Portfolio Analysis Web Tool (Web Tool), the IACC's online database on autism research. Prior to this new release, the Web Tool contained fiscal year 2008 through 2010 data. NIH, on behalf of the IACC, also released the 2011-2012 IACC Portfolio Analysis Report, which provides an analysis of autism research funding in 2011 and 2012, as well as a five-year overview of autism research funding by the U.S. government and private sector and five-year trends (2008 through 2012) for each of the seven research areas in the IACC's strategic plan. NIH officials told us that they have also collected data on autism research that was federally funded in fiscal year 2013 and plan to release those data in the second half of calendar year 2016.
Although HHS continues to disagree that our recommendation to develop methods for improved cross-agency coordination was warranted, HRSA took a positive step in April 2014 by contacting DOD to determine whether any potential overlap existed between the agencies' programs. HRSA officials told us that they reviewed abstracts of all currently funded DOD research projects and found no scientific overlap. HRSA officials also used information from DOD, HHS agency websites, and NIH's online database when developing new FOAs. For example, in two FOAs HRSA chose to focus exclusively on populations served by HRSA's Maternal and Child Health Bureau in order to help avoid potential duplication with other federal agencies. During our review, NIH officials reiterated their position that their processes are adequate to avoid unnecessary duplication and provided us with the 2012 program officer handbook, which outlines the responsibilities of NIH program officers—some of which can help avoid potential unnecessary duplication and were described in our November 2013 report. According to NIH officials, a fundamental part of an NIH program officer's responsibility is to assure that federal taxpayer funds are expended on research projects that will produce the most effective, efficient, and productive results. The handbook outlines program officers' responsibility to stay abreast of the scientific literature and attend professional and scientific meetings, which we reported in November 2013. It also discusses the officers' role in reviewing the "other support" section of research applications. This section details the other active and pending funding available in direct support of an individual's research endeavors and is provided by the applicant for all individuals designated in a research application as a principal investigator. Program officers must review this section for scientific overlap, among other information. While this type of review is important, as we described in our November 2013 report, it only helps to ensure that an applicant, and the applicant's principal investigator, is not submitting essentially the same research application to multiple funding sources. This review would not uncover already funded research from different applicants with different principal investigators that may be unnecessarily duplicative of the applicant's research—in other words, a project with the same purpose, strategies, and target population that is not necessary to corroborate or replicate prior research results. The program officer handbook also includes a description of several databases and web-based tools that are available for program officers' use in fulfilling their responsibilities. However, although this information is provided to program officers, NIH officials told us that the agency does not dictate which specific tools or databases program officers should use to identify similar grants by a different principal investigator for each grant funding decision. NIH thus continues to have limited procedures in place to help ensure that program officers identify potentially unnecessarily duplicative research by different principal investigators when making funding decisions. Officials from the other agencies included in our recommendation—DOD, Education, and NSF—told us that they have taken initial steps to monitor other federal agencies' research.
DOD officials told us that the department has finalized an interagency agreement with NIH to complete a pilot study aimed at developing requirements and testing the feasibility of transferring DOD medical research application data to an NIH data system. According to DOD officials, this transfer of data would allow multiple agencies and the public to view research application data to assist in the identification of potential duplication and facilitate funding decisions. DOD officials anticipate that the feasibility study will conclude by June 2016. Additionally, Education officials told us that they have reached out to HHS, are awaiting guidance on coordination from HHS, and in the interim will continue to participate in IACC meetings. Education officials also stated that the department anticipates funding, pending congressional appropriations, model demonstration projects focused on autism. These projects will build on existing research on promising evidence-based practices for autism by identifying challenges associated with their implementation. According to Education officials, the department will coordinate with the IACC and review relevant research prior to soliciting applications for these research projects. Also, even though the agency is not a member of the IACC, NSF officials told us that they observe IACC meetings when convened and check the IACC's Web Tool to monitor autism research funded by other federal agencies and to help avoid unnecessarily duplicative research. We acknowledge the steps taken by the agencies in response to our November 2013 recommendation, as well as in response to the Autism CARES Act; however, continued action is needed to develop these initial steps into methods for identifying and monitoring federal autism research that are consistently applied. This is especially important given that, as we previously reported in November 2013, agencies are funding research in the same areas, which creates the potential for unnecessary duplication. While we are not making additional recommendations, we believe that our 2013 recommendations remain valid and that HHS's continued fulfillment of the provisions in the Autism CARES Act could help the department implement them. We provided a draft of this report to DOD, Education, HHS, and NSF for review and comment. Education and HHS provided written comments, which are reprinted in appendixes IV and V. These departments, along with DOD, also provided technical comments, which we incorporated as appropriate. NSF did not provide any comments. Education and HHS directed many of their comments to our third finding, which updated the status of agency actions to implement recommendations contained in our November 2013 report. In that report, we found that many autism research projects funded by federal agencies had the potential to be duplicative because the projects were categorized to the same research objectives in the IACC strategic plan. In their comments on this report, Education and HHS disagreed that there was potential for duplication and questioned the basis of our analysis. The departments stated that the 78 research objectives—on which our analysis was based—are broad and may therefore require attention from researchers of different disciplines in order to address the complexity and heterogeneity of autism. This may necessarily involve funding of multiple projects by more than one federal agency.
Education stated that a careful review of the projects themselves is needed to determine actual duplication. As we noted in our 2013 report, we agree that it may be appropriate and advantageous to have multiple projects and agencies address the same research objective. We also agree that the specific projects identified as potentially duplicative would need to be reviewed further to identify actual duplication, and we believe such a review of these data is important to ensure that federal funds are used efficiently and effectively and that informed decisions can be made. Our finding that agencies are funding research in the same research areas highlights how imperative it is that agencies effectively coordinate and monitor each other's autism research. It was the limited coordination and monitoring that we identified in our prior work that was the basis for our prior recommendations. Based on the comments received, we revised the report to acknowledge the breadth of the research objectives and to emphasize our prior findings as they relate to the need for improved coordination and monitoring. HHS also commented that staff in its agencies, including NIH, avoid clear and obvious overlap or unnecessary duplication. HHS stated that its staff do this by using the research project information in its internal database, Information for Management, Planning, Analysis, and Coordination (IMPAC II)—which contains detailed pre-award and award data for four HHS agencies (AHRQ, CDC, the Food and Drug Administration, and HRSA) and some research applications and grants of the Department of Veterans Affairs—and by participating in the IACC. Further, HHS stated that NIH's internal autism coordinating committee coordinates research internally within NIH. The use of databases and participation in the IACC as means to coordinate and monitor across agencies is information that we have reported on in this report and in our November 2013 report. We continue to believe that these methods are limited. Further, while we appreciate HHS's comment that it is taking steps to avoid clear and obvious unnecessary duplication, this is not sufficient given the substantial federal investment in this area. From fiscal years 2008 through 2012, agencies awarded about $1.2 billion for autism research, and many funded research in the same areas. Autism is an important and complex public health concern affecting a large number of individuals, which makes it all the more important that scarce federal resources be used efficiently and strategically. Prudent stewardship requires a careful assessment and coordinated effort to look for unnecessary duplication that may be less than obvious. Lastly, HHS stated that our draft report was incorrect in stating that the Web Tool was the primary tool by which potentially duplicative autism research is to be identified. The department stated that the IMPAC II database is used by NIH program officers when evaluating research grant applications. Our draft report did not state that the Web Tool was the primary tool used to identify potentially duplicative autism research; however, we revised the report to clarify the information presented on the Web Tool. In its comments, Education also stated that the draft report properly acknowledged the health care and educational programs that provide intervention services to young children with autism.
Further, the department stated that it stands ready to work with HHS as HHS implements the Autism CARES Act, and that a significant body of research is still needed to better understand and address the developmental and academic needs of students with autism, especially given the great variations across the autism spectrum and the range of student learning needs. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Health and Human Services; the Secretary of Defense; the Secretary of Education; the Director of the National Science Foundation; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. We examined the intervention services that are provided to children with autism through federal health care programs. More specifically, we analyzed fiscal year 2013 fee-for-service claims and managed care encounter data from the Centers for Medicare & Medicaid Services' (CMS) Medicaid Statistical Information System (MSIS) for five selected states. We also analyzed fiscal year 2014 Department of Defense (DOD) TRICARE military treatment facility and purchased care claims data. At the time we began our review, these were the most recent fiscal years for which CMS's and DOD's data were available. Our analysis consisted of three steps: (1) selecting states for the CMS data and assessing the reliability of both the selected states' and DOD's data, (2) identifying children with autism, and (3) identifying intervention services for children with autism. Lastly, we describe the limitations of our methods. We selected five states to include in our analysis of health care data—Delaware, Georgia, Illinois, Kentucky, and Minnesota. We chose these five states based on the availability and reliability of their reported Medicaid and State Children's Health Insurance Program (CHIP) data in MSIS. Specifically, at the time we began our review, there were 35 states that had validated fiscal year 2013 MSIS data from which we could choose. To determine which of these 35 states to include in our review, we reviewed reports with information on states' use of managed care organizations and on the completeness and reliability of states' managed care encounter data in MSIS. Similarly, we reviewed reports on states' use of behavioral health organizations to provide certain services—since those organizations may be used to provide some of the intervention services included in our review—and on the extent to which states' MSIS data on services provided by behavioral health organizations are reliable and complete. Lastly, we reviewed reports with information on the completeness and reliability of states' CHIP data in MSIS.
We discussed the reliability and usability of the five states' data with knowledgeable officials from CMS, its contractor responsible for processing Medicaid and CHIP data reported by states (Mathematica Policy Research), and selected state officials. We discussed the reliability and usability of the TRICARE data with knowledgeable DOD officials. We performed data checks, such as examining the data for missing values and obvious errors, to test the internal consistency and reliability of the data. These data were found to be reliable for our purposes. Based on eligibility information in the MSIS eligibility file and the TRICARE beneficiary file, we restricted our study to children who were (1) age 1 through age 5 at the beginning of the fiscal year and (2) enrolled in one of these programs for at least 10 months. We limited our review to only those children we identified with autism. For purposes of this report, we considered a child to have autism if the child had at least one claim with an autism diagnosis code at any point in the fiscal year. (For illustration, a short sketch of this cohort-selection logic appears at the end of this discussion.) We focused our review on non-institutional services contained within the MSIS other services file and the TRICARE non-institutional file. There is no standard set or list of procedure codes that providers use to report the provision of intervention services to children with autism. Therefore, we developed a list of procedure codes that could closely reflect the provision of intervention services to young children with autism. To do this, we took the following four steps.
1. We reviewed documentation and interviewed federal agency officials, as well as officials from non-federal entities, such as the American Academy of Pediatrics, to gather general information on typical interventions for young children with autism.
2. We identified the procedure codes found on claims with an autism diagnosis code in our dataset and examined the definitions of these codes.
3. We discussed procedure codes relevant to providing interventions to children with autism with representatives from the following seven professional associations: American Academy of Child & Adolescent Psychiatry; American Physical Therapy Association; American Occupational Therapy Association; American Psychiatric Association; American Psychological Association; American Speech-Language-Hearing Association; and Association of Professional Behavior Analysts.
4. In recognition of variations in the practice of medicine across geographic regions, we gathered information from all five selected states about their use of certain procedure codes to determine whether the use of these codes typically reflected the delivery of an intervention service for autism in their respective states.
Based on the information gathered, we identified a list of procedure codes that appeared to reflect common autism-related interventions. The intervention services in our review also include related diagnostic or evaluation services. When generating the list of procedure codes, it was not possible for us to parse out intervention services from assessment-type services because, in general, an assessment is needed in order to determine the best intervention approach and to adjust that approach over time. Further, we heard from the association officials we interviewed that providers are frequently assessing at the same time they are providing an intervention.
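To make the cohort-selection rule concrete, the minimal Python sketch below applies the three screens described above. It is our illustration, not GAO's actual code: the field names and record layout are hypothetical, and the ICD-9-CM 299.xx diagnosis codes shown are simply the code family commonly used for autism spectrum diagnoses in fiscal year 2013 claims—the report does not reproduce GAO's exact code list.

from datetime import date

# Hypothetical ICD-9-CM autism diagnosis codes (fiscal year 2013 claims
# predate ICD-10); the exact list GAO used is not reproduced in the report.
AUTISM_DX_CODES = {"299.00", "299.01", "299.80", "299.81", "299.90", "299.91"}

FISCAL_YEAR_START = date(2012, 10, 1)  # federal fiscal year 2013

def age_at(as_of: date, birth_date: date) -> int:
    """Whole years of age on a given date."""
    years = as_of.year - birth_date.year
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def in_cohort(child: dict, claims: list[dict]) -> bool:
    """Apply the three screens described in the methodology:
    (1) age 1 through 5 at the beginning of the fiscal year,
    (2) enrolled at least 10 months of the year, and
    (3) at least one claim carrying an autism diagnosis code."""
    if not 1 <= age_at(FISCAL_YEAR_START, child["birth_date"]) <= 5:
        return False
    if child["months_enrolled"] < 10:
        return False
    return any(
        dx in AUTISM_DX_CODES
        for claim in claims
        for dx in claim["diagnosis_codes"]
    )

# Example: a 3-year-old enrolled all year with one qualifying claim.
child = {"birth_date": date(2009, 6, 15), "months_enrolled": 12}
claims = [{"diagnosis_codes": ["299.00", "V20.2"]}]
print(in_cohort(child, claims))  # True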
For reporting purposes, we categorized the procedure codes we identified into five broad categories:
1. Behavioral, which includes psychiatry services, health and behavioral assessments and intervention services, and applied behavior analysis.
2. Evaluation and management, which includes central nervous system tests, office or other outpatient visits or consultations, and medical team conferences to diagnose and develop intervention strategies.
3. Home care and skills training, which includes teaching skills to the child and the child's family to promote the child's development and independent living.
4. Physical and occupational therapy, which includes the provision of therapies to, for example, teach a child how to develop movements involved with walking, eating, or communicating.
5. Speech, language, and audiology, which includes evaluation and treatment of speech, language, voice, communication, and auditory processing.
To the extent possible, we based our categorization on the American Medical Association's Current Procedural Terminology codebook, although our categories are broader than those found in this codebook. (A short illustrative sketch of this grouping appears at the end of this appendix.) We asked experts from each of the seven professional associations to comment on our categorization. We received responses from five of the seven associations. Three of the five—the American Academy of Child & Adolescent Psychiatry, the American Physical Therapy Association, and the Association of Professional Behavior Analysts—agreed with our categorization. The other two—the American Occupational Therapy Association and the American Speech-Language-Hearing Association—were concerned that putting a procedure code in the "behavioral" category, for example, might imply that the code cannot appropriately be used by professionals such as speech and language pathologists or occupational therapists. This is not the intention of our categorization, nor should our categories be considered billing advice or used for billing purposes. In fact, we found that four of the seven associations stated that their professionals used codes that fall within our "behavioral" category, and five of the seven used codes that fall within our "evaluation and management" category, among others. See table 2 for the procedure codes included in our review and the categories of interventions they fall under for the purposes of our report. Because our Medicaid and CHIP data are from five states, the results of our analyses of these data are not generalizable across all states. The intervention services in our review reflect only the services identified by the procedure codes included in our review, and as a result, the amount of services we report may be an undercount. Based on our methodology, we believe the list of procedure codes is appropriate and fairly represents interventions provided to children with autism. Conversely, because of the potential for inconsistency in the diagnosis codes included on a claim, we included all claims (with relevant procedure codes) of children identified with autism; as a result, some of the services in our review may not have been provided for, or related to, the treatment of autism, and the amount of services we report may be overstated.
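As one way to picture the grouping step, the sketch below maps claim lines to the five reporting categories and tallies category shares of the kind shown in figures 2 and 5. It is a hypothetical illustration only: the procedure codes are placeholders (the actual codes appear in table 2, which is not reproduced here), and the tallying rule is our assumption about a reasonable implementation, not GAO's code.

# Placeholder procedure codes standing in for table 2 (not reproduced here);
# the grouping logic, not the specific codes, is what this sketch shows.
CATEGORY_BY_PROCEDURE_CODE = {
    "BEHAV-01": "behavioral",
    "EVAL-01": "evaluation and management",
    "HOME-01": "home care and skills training",
    "THER-01": "physical and occupational therapy",
    "SPCH-01": "speech, language, and audiology",
}

def category_shares(claims: list[dict]) -> dict[str, float]:
    """Count in-scope service lines per category and convert the counts to
    percentages, mirroring the category shares shown in figures 2 and 5."""
    counts: dict[str, int] = {}
    for claim in claims:
        category = CATEGORY_BY_PROCEDURE_CODE.get(claim["procedure_code"])
        if category is None:
            continue  # code not on the review list, so out of scope
        counts[category] = counts.get(category, 0) + 1
    total = sum(counts.values())
    if total == 0:
        return {}
    return {cat: 100 * n / total for cat, n in counts.items()}

# Example: three in-scope service lines and one out-of-scope line.
claims = [
    {"procedure_code": "SPCH-01"},
    {"procedure_code": "SPCH-01"},
    {"procedure_code": "THER-01"},
    {"procedure_code": "XRAY-99"},  # unrelated service, excluded
]
print(category_shares(claims))
# speech, language, and audiology: ~66.7; physical and occupational therapy: ~33.3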
The Department of Education requires states to report the number of children ages 3 through 5 enrolled in the states' special education programs by disability category, such as autism. The number of children reported in the autism category is likely less than the actual number of children with autism being served by states. Children with autism may be reported under other categories, including the general "developmental delay" category or the "speech and language impairment" category, because communication difficulties are a typical symptom of autism. The data provided to Education indicate that approximately 66,000 children ages 3 through 5 in the autism category received services in school year 2014-2015, as shown in table 3. We examined certain expenditures for the provision of intervention services to children ages 1 through 5 identified with autism and enrolled in the Centers for Medicare & Medicaid Services' Medicaid program, the State Children's Health Insurance Program (CHIP), and the Department of Defense's (DOD) TRICARE program. We examined fiscal year 2013 Medicaid and CHIP expenditure data in fee-for-service claims for five states: Delaware, Georgia, Illinois, Kentucky, and Minnesota. Fee-for-service claims accounted for about 87 percent of the total intervention services provided to children identified with autism that we reviewed, with managed care encounters making up the remaining portion of services provided. See table 4 for the expenditures on intervention services provided to children identified with autism enrolled in Medicaid and CHIP, by service category. We examined fiscal year 2014 expenditures for TRICARE purchased care claims. Purchased care claims accounted for about 96 percent of the total intervention services provided to children identified with autism that we reviewed, with military treatment facility claims comprising the remainder of services provided. We examined the expenditures for those young children who were enrolled in DOD's autism demonstrations—which offered increased access to applied behavior analysis (ABA) to servicemembers' family members diagnosed with autism—as well as for those who were not enrolled. See table 5 for expenditures on intervention services provided to children identified with autism enrolled in the TRICARE autism demonstrations, and to those who received such services but were not enrolled, by service category. In addition to the contact named above, Geri Redican-Bigott, Assistant Director; Deirdre Gleeson Brown; Jackie Hamilton; Giselle Hicks; Drew Long; Brandon Nakawaki; Vikki Porter; Sarah Resavy; and Eric Wedum made key contributions to this report.
Federal Autism Research: Updated Information on Funding from Fiscal Years 2008 through 2012. GAO-15-583R. Washington, D.C.: June 30, 2015.
Federal Autism Activities: Funding and Coordination Efforts. GAO-14-613T. Washington, D.C.: May 20, 2014.
Federal Autism Activities: Better Data and More Coordination Needed to Help Avoid the Potential for Unnecessary Duplication. GAO-14-16. Washington, D.C.: November 20, 2013.
Combating Autism Act: HHS Agencies Responded with New and Continuing Activities, Including Oversight. GAO-13-232. Washington, D.C.: February 27, 2013.
Federal Autism Activities: Funding for Research Has Increased, but Agencies Need to Resolve Surveillance Challenges. GAO-06-700. Washington, D.C.: July 19, 2006.
Special Education: Children with Autism. GAO-05-220. Washington, D.C.: January 14, 2005.
Research has shown that early intervention can greatly improve the development of a child with autism. Children with disabilities—including children with autism—can receive intervention services through the Individuals with Disabilities Education Act. Low-income children may also receive intervention services through Medicaid or CHIP, health care programs overseen at the federal level by the Centers for Medicare & Medicaid Services and administered by the states. Children of servicemembers may receive services through TRICARE, DOD's health care program. GAO was asked to review federal autism efforts. This report describes (1) how federal agencies encourage early autism identification and interventions, and (2) the intervention services provided by federal education and health care programs. It also (3) examines steps taken by HHS and federal agencies to improve research coordination. GAO collected information on education programs in five states that were selected for size, activities, and variation in geographic location. GAO analyzed health care program data: fiscal year 2014 TRICARE data and fiscal year 2013 Medicaid and CHIP data—the most recent data available at the time of the review—from another five states selected based on the availability of reliable data. GAO also monitored the implementation of its 2013 recommendations to improve autism research coordination. Education and HHS provided comments on a draft of this report and disagreed that there is potential for unnecessary duplication. GAO continues to believe improved coordination is needed. Federal agencies have taken various actions to encourage early autism identification and interventions, such as specifically soliciting research in these areas. From fiscal year 2012 through fiscal year 2015, the departments of Defense (DOD), Education, and Health and Human Services (HHS) awarded about $395 million for research on early identification and interventions for autism. Federal programs provide a variety of intervention services to young children with autism. When examining the education programs administered by five states and DOD, GAO found that specific actions were taken to help respond to the individual intervention needs of children with autism. Children enrolled in federal health care programs—Medicaid, the State Children's Health Insurance Program (CHIP), or TRICARE—received a variety of interventions. For example, GAO identified about 8,200 young children with autism in five states enrolled in Medicaid or CHIP and found that speech, language, and audiology services were the most common overall; however, the types of services commonly received varied, depending on the age of the child. HHS has recently taken actions required by the Autism Collaboration, Accountability, Research, Education, and Support Act of 2014 (Autism CARES Act) that could help coordinate federal autism research and implement GAO's prior recommendations. For example, in April 2016, HHS designated an autism coordinator to oversee national autism research, services, and support activities. In 2013, GAO reported that there was limited coordination among agencies. This was especially concerning because GAO also found that 11 federal agencies funded autism research in the same areas—resulting in the potential for unnecessary duplication. At that time, GAO recommended that HHS improve the data it collects on autism research and that federal agencies develop methods to monitor and coordinate this research.
GAO believes that HHS's continued fulfillment of certain provisions in the Autism CARES Act could help the department implement GAO's 2013 recommendations.
The Columbia River Basin is North America's fourth largest river basin, draining about 258,000 square miles and extending predominantly through the states of Washington, Oregon, Idaho, and Montana and into Canada. (See fig. 1.) It contains over 250 reservoirs and about 150 hydroelectric projects, including 18 dams on the Columbia River and its primary tributary, the Snake River. The Columbia River Basin provides habitat for many species, including steelhead and four species of salmon: Chinook, Chum, Coho, and Sockeye. One of the most prominent features of the Columbia River Basin is its population of anadromous fish, such as salmon and steelhead, which are born in freshwater streams, live there for 1 to 2 years, migrate to the ocean to mature for 2 to 5 years, and then return to the freshwater streams to spawn. (See fig. 2.) Salmon and steelhead face numerous obstacles in completing their life cycle. For example, to migrate past dams, juvenile fish must either go through the dams' turbines, go over the dams' spillways, use the installed juvenile bypass systems, or be transported around the dams in trucks and barges. Each passage alternative has associated risks and contributes to the mortality of juvenile fish. Figure 3 shows one of the trucks used to transport juvenile fish around the dams. To return upstream to spawn, adults must locate and use the fish ladders provided at the dams. Once adults make it past the dams, they often have to spawn in habitat adversely affected by farming, mining, cattle grazing, logging, road construction, and industrial pollution. Figure 4 shows a bypass system for juvenile fish migrating downstream and a fish ladder for adult fish returning upstream. Reservoirs formed behind the dams cause problems for both juvenile and adult passage because they slow water flows, alter river temperatures, and provide habitat for predators, all of which may result in increased mortality. Other factors, such as ocean conditions and snowpack levels, also affect both juvenile and adult mortality. For example, an abundant snowpack aids juvenile passage to the ocean by increasing water flows as it melts. Given the geographic range and historical importance of salmon and steelhead in the Columbia River Basin, local governments, industries, and private citizens are concerned about the species' recovery. For example, some Indian tribes living in the basin consider salmon to be part of their spiritual and cultural identity, and fishing is still the preferred livelihood of many tribal members. Treaties between individual tribes and the federal government acknowledge the importance of salmon and steelhead to the tribes and guarantee tribes certain fishing rights. Efforts to increase salmon and steelhead stocks in the Columbia River Basin began as early as 1877 with the construction of the first fish hatchery. Now, states, tribes, and the federal government operate a series of fish hatcheries in the basin. Historically, hatcheries were operated to mitigate the impacts of hydropower and other development and had a primary goal of producing fish for commercial, recreational, and tribal harvest. However, hatcheries are now adjusting their operations to ensure that they support, or at least do not impede, the recovery of listed species. As dams were built in the 1900s, attempts were made to minimize their impacts by installing fish ladders and bypass systems to help salmon and steelhead migrate up and down the rivers.
In the 1980s, several other actions were taken to increase salmon and steelhead populations, including: (1) a treaty between the United States and Canada limiting the ocean harvesting of salmon; (2) the passage of the Pacific Northwest Electric Power Planning and Conservation Act (P.L. 96-501), which called for the creation of an interstate compact to develop a program to protect and enhance fish and wildlife affected by hydropower development in the Columbia River Basin and to mitigate the effects of that development; and (3) the beginning of major state, local, and tribal efforts to address habitat restoration through watershed plans. None of these efforts proved to be enough, however, and in the 1990s, 12 salmon and steelhead populations were listed as threatened or endangered under the ESA, resulting in the advent of intensified recovery actions. The 12 listed populations are Snake River Fall-run Chinook salmon, Snake River Spring/Summer-run Chinook salmon, Lower Columbia River Chinook salmon, Upper Willamette River Chinook salmon, Upper Columbia River Spring-run Chinook salmon, Snake River Sockeye salmon, Snake River steelhead, Middle Columbia River steelhead, Upper Willamette River steelhead, Upper Columbia River steelhead, Lower Columbia River steelhead, and Columbia River Chum salmon.

Eleven federal agencies are involved in the recovery of salmon and steelhead in the Columbia River Basin. The federal agencies must comply with the missions and responsibilities set out in their authorizing legislation while also protecting salmon and steelhead under the ESA. Other entities, such as states, tribes, local governments, and private interest groups, are also involved in the recovery effort. To facilitate communication and coordination between the federal agencies and other entities, a network of over 65 groups has been formed.

NMFS is responsible for leading the recovery effort for salmon and steelhead in the Columbia River Basin. NMFS, among other things, is responsible for (1) identifying and listing threatened and endangered salmon and steelhead populations, (2) preparing recovery plans for listed salmon and steelhead populations, and (3) consulting with other agencies to ensure that their planned actions do not further jeopardize the listed populations of salmon and steelhead. The other 10 agencies involved in the recovery are the 3 that are responsible for operating the dams and selling the electric power they produce (action agencies), the 3 that manage natural resources in the Columbia River Basin (natural resource agencies), and the 4 that carry out various other actions that affect the resources of the basin (other agencies).

The U.S. Army Corps of Engineers (Corps), the Department of the Interior's Bureau of Reclamation (BOR), and the Department of Energy's Bonneville Power Administration (Bonneville) are the 3 action agencies involved in recovery efforts. The Corps is responsible for designing, building, and operating civil works projects to provide electric power, navigation, flood control, and environmental protection. The Corps operates 12 major dams on the Columbia and Snake Rivers that have direct relevance to salmon and steelhead (Bonneville, The Dalles, John Day, McNary, Ice Harbor, Lower Monumental, Little Goose, Lower Granite, Chief Joseph, Dworshak, Albeni Falls, and Libby).
BOR is responsible for designing, constructing, and operating water projects in the 17 western states for multiple purposes, including irrigation, hydropower production, municipal and industrial water supplies, flood control, recreation, and fish and wildlife. BOR operates two major dams (Grand Coulee and Hungry Horse), as well as over 50 smaller dams, in the Columbia River Basin and is responsible for reducing any detrimental effects that such operations may have on the survival of salmon and steelhead. For example, BOR dams store water for irrigation, and BOR installs screens over irrigation canal entrances to prevent salmon and steelhead from entering and later dying when the water is used and the canals dry up.

Bonneville is responsible for providing transmission services and marketing the electric power generated by the Corps and BOR dams in the Federal Columbia River Power System (FCRPS). In doing so, it is also obligated by the Pacific Northwest Electric Power Planning and Conservation Act (Northwest Power Act) of 1980 to provide equitable treatment to fish and wildlife along with the other purposes for which FCRPS is operated.

The Department of the Interior's Bureau of Land Management (BLM) and U.S. Fish and Wildlife Service (FWS) and the Department of Agriculture's U.S. Forest Service are the natural resource agencies involved in recovery efforts. The overall mission of the natural resource agencies is to manage their lands for multiple purposes, such as grazing, timber, recreation, and fish and wildlife conservation.

BLM administers 262 million acres of public lands, primarily in 12 western states, and about 300 million additional acres of subsurface mineral resources. Its mission is to sustain the health, diversity, and productivity of the public lands for the use and enjoyment of present and future generations. BLM manages a wide variety of resources, including energy and minerals, timber and forage, wild horse and burro populations, fish and wildlife habitat, wilderness areas, and archaeological and other natural heritage values. While conducting its activities, BLM is required by the ESA to avoid actions that would jeopardize the continued existence of listed salmon and steelhead or adversely modify or destroy critical habitat. Consequently, projects are designed and operated to comply with the ESA; an example is planting trees and vegetation to reduce erosion and to provide shade to cool streams.

FWS works with other entities to conserve, protect, and enhance fish, wildlife, and plants. It is chiefly responsible for implementing the ESA for terrestrial species, migratory birds, certain marine mammals, and certain fish. FWS operates or funds 37 hatchery facilities in the basin that, among other purposes, assist in the recovery of listed populations of salmon and steelhead. It also operates three fish health centers and one fish technology center in the basin, which provide the hatcheries with technical support and health screenings of fish. Other conservation efforts include habitat protection and restoration, harvest management, and recommending hydropower operations that will benefit salmon and steelhead.

The Forest Service manages 191 million acres of national forests and grasslands nationwide under the principles of multiple use and sustained yield, ensuring that lands will be available for future generations. The multiple uses include outdoor recreation, rangeland, timber, watershed, and fish and wildlife.
Like BLM, under the ESA, the Forest Service must ensure that its actions, such as timber harvesting and road construction, are not likely to jeopardize the continued existence of listed species or degrade their critical habitat.

The Environmental Protection Agency (EPA), the Department of Agriculture's Natural Resources Conservation Service (NRCS), and the Department of the Interior's U.S. Geological Survey (USGS) and Bureau of Indian Affairs (BIA) are the four other agencies involved in recovery efforts. Collectively, these agencies are responsible for a variety of actions and endeavor to incorporate the needs of salmon and steelhead into the requirements of their primary missions. EPA protects human health and safeguards the natural environment by protecting the air, water, and land. Under the Clean Water Act, EPA, among other things, works with the states to develop water quality standards that accommodate the needs of salmon and steelhead. NRCS is responsible for helping farmers, ranchers, and other landowners develop and carry out voluntary efforts to protect the nation's natural resources. NRCS works with landowners to promote better land use management and resource conservation, which helps improve water quality and habitat for salmon and steelhead. USGS is responsible for conducting objective scientific studies and providing information to address problems dealing with natural resources, geologic hazards, and the effects of environmental conditions on human and wildlife health. It provides research on various issues, such as fish diseases and fish passage, that benefits salmon and steelhead. BIA's principal responsibility is to encourage and assist Native Americans in managing their own affairs under the trust relationship with the federal government. Conserving fish and wildlife and maintaining traditional fishing rights are among the trust responsibilities that BIA has to the Indian tribes. In addition, all agencies are responsible for furthering the purposes of the ESA by carrying out programs for the conservation of listed species. Selected major laws affecting the operations of the 11 agencies are listed in appendix III.

In fulfilling their responsibilities, agencies sometimes encounter competing priorities that involve making trade-offs. For example, the Northwest Power Act requires the protection, mitigation, and enhancement of fish and wildlife while ensuring an adequate, efficient, economical, and reliable power supply for the Pacific Northwest. During the drought of 2001, Bonneville found it difficult to meet its responsibilities under both the ESA and the Northwest Power Act. As a result, Bonneville, in consultation with other federal agencies, determined that in order to maintain an adequate and reliable power supply during the declared power emergencies, available water had to be sent through the turbines to generate electricity and thus could not be spilled (released) over the dams to aid juvenile fish passage. Significantly reducing the amount of water spilled over the dams may affect the survival rates of some juvenile populations, which may in turn ultimately affect the number of adult salmon and steelhead returning to spawn in the future. Figure 5 shows water being released at Bonneville Dam to aid fish passage.

In addition to federal agencies, many state and local governments, Indian tribes, private interest groups, and private citizens are involved in the recovery effort.
For example, to guide state recovery efforts, Idaho, Montana, Oregon, and Washington have jointly prepared a salmon and steelhead recovery plan referred to as the Governors' Plan. Other participants in the recovery efforts include local governments, such as the cities of Portland, Oregon, and Yakima, Washington, and local conservation districts like the Asotin County Conservation District in Washington. Tribal entities—the Confederated Tribes of the Umatilla Indian Reservation, Nez Perce Tribe, Confederated Tribes of the Warm Springs Reservation, Shoshone-Bannock Tribes of the Fort Hall Reservation, Confederated Tribes of the Colville Reservation, and Yakama Indian Nation—and private interest groups and organizations like American Rivers, Columbia River Alliance, Ducks Unlimited, and Save Our Wild Salmon also participate in recovery efforts.

Over 65 groups have been formed to help facilitate communication and coordination between the various entities involved in salmon and steelhead recovery. The size and purpose of the groups range from large groups that deal with basinwide concerns to smaller, more narrowly focused ones that deal with local issues. For example, the Federal Caucus, comprising 10 federal agencies having natural resource responsibilities under the ESA, meets to discuss issues and make policy decisions on the implementation of the basinwide strategy that it developed to help recover salmon and steelhead populations. Local groups, such as the Asotin County Conservation District, meet to develop watershed plans and to secure funding for landowners to make water quality and habitat improvements on their property. (See appendix IV for the names, purpose, and meeting frequency of the various groups involved in the recovery effort.)

The 11 federal agencies estimate that they expended almost $1.8 billion (unadjusted for inflation) from fiscal year 1982 through fiscal 1996 and about $1.5 billion (in 2001 dollars) from fiscal year 1997 through fiscal 2001 on efforts specifically designed to recover Columbia River Basin salmon and steelhead. The $1.5 billion expended in the last 5 fiscal years consisted of $968.0 million that federal agencies expended directly and $537.2 million that the federal agencies received and then provided to nonfederal entities, such as states and Indian tribes. The four agencies listed below accounted for $854.0 million (about 88 percent) of the $968.0 million spent by the federal agencies in the last 5 fiscal years. The Corps expended about $589.7 million, primarily on projects such as improving juvenile bypass systems and adult fish ladders at the dams. The Forest Service expended about $105.7 million, primarily on ESA consultations and projects such as habitat improvement, land acquisition, watershed restoration, in-stream habitat improvement, and improving passage at culverts and small dams that block salmon and steelhead passage. FWS expended about $96.7 million, primarily on salmon and steelhead hatcheries. BOR expended about $61.9 million, primarily on Columbia and Snake River salmon and steelhead recovery projects and on several segments of the Yakima River Basin water enhancement project—including its tributary, water acquisition, water augmentation, and habitat acquisition programs. The other seven agencies expended the remaining $114 million. Table 1 shows each agency's total salmon- and steelhead-specific expenditures for each fiscal year from 1997 through 2001. (Detailed expenditure data for each agency are provided in appendix V.)
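The expenditure breakdown above can be checked with simple arithmetic. The short Python sketch below is illustrative only; every dollar figure in it is taken directly from this section, in millions of constant 2001 dollars.

```python
# Illustrative check of the fiscal year 1997-2001 expenditure breakdown.
# All figures come from the report, in millions of 2001 dollars.
direct_federal = 968.0       # expended directly by the 11 federal agencies
provided_nonfederal = 537.2  # provided to states, tribes, and other entities

total = direct_federal + provided_nonfederal
print(f"Total specific expenditures: ${total:,.1f} million")  # about $1.5 billion

# The Corps, Forest Service, FWS, and BOR account for roughly 88 percent
# of the direct federal spending.
four_largest = 589.7 + 105.7 + 96.7 + 61.9
print(f"Four largest agencies: ${four_largest:,.1f} million "
      f"({four_largest / direct_federal:.0%} of direct federal spending)")
```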
In addition to the $968.0 million in specific federal expenditures, five federal agencies provided nonfederal entities with about $537.2 million for specific salmon and steelhead recovery efforts. These funds were either federally appropriated or, in the case of Bonneville, came from revenues received from the sale of electricity. For example, as shown in table 2, Bonneville provided nonfederal entities with over $378 million in power receipts during the 5-year period. Federal funds provided to nonfederal entities may contain certain requirements or restrictions. For example, federal funds provided by NMFS under the Pacific Salmon Recovery Fund require a 25 percent state or local matching contribution. The nonfederal entities receiving the federally provided funds include the states of Idaho, Montana, Oregon, and Washington; tribes, such as the Nez Perce and Yakama; government consortium groups, such as the Columbia Basin Fish and Wildlife Authority and the Northwest Power Planning Council (an interstate compact with two representatives from each of the states of Idaho, Montana, Oregon, and Washington); and fish conservation organizations, such as Long Live the Kings. About two-thirds, or $353.7 million, of the $537.2 million was provided to the states and tribes. (See table 3.)

In addition to the almost $1.5 billion that federal agencies expended or provided to nonfederal entities for specific salmon and steelhead recovery actions, federal agencies estimated that they expended $302 million (in 2001 dollars) in the last 5 fiscal years on actions that benefited, but were not specifically directed at, salmon and steelhead—that is, nonspecific salmon and steelhead expenditures. For example, NRCS provides technical assistance and funding for private land conservation. Collectively, these actions improve stream flows, habitat, and water quality, which has a positive effect on fish. Also, USGS performs research that evaluates the effect of diet, growth regime, and environment on the development of salmon. This research, however, is for all salmon species, not just those in the Columbia River Basin. Agencies' estimates of nonspecific salmon and steelhead expenditures are included in table 4.

Federal agencies have taken many actions to recover salmon and steelhead. Although agency officials generally view these actions as resulting in higher numbers of returning adult populations and improving the conditions for recovery, the precise extent of their effects on salmon and steelhead is not well understood. A number of factors make it difficult to isolate and quantify the effects of these actions, including large natural yearly fluctuations in the salmon and steelhead populations, weather and ocean conditions, and the length of time it takes for some project benefits to materialize. However, federal agencies are confident that recovery actions are having positive effects and have resulted in higher numbers of returning adult salmon and steelhead than would have occurred otherwise.

Federal agencies have taken many actions aimed at salmon and steelhead recovery. For example, NMFS listed 12 populations of salmon and steelhead under the ESA and issued numerous final biological opinions covering the operation of FCRPS and forest and land management; sport, commercial, and tribal harvest; hatchery operations; and irrigation operations in the Yakima, Umatilla, and Snake River basins.
In conjunction with the Federal Caucus, NMFS helped develop the All-H Strategy (hydropower, hatcheries, harvest, habitat) for the recovery of salmon and steelhead. NMFS has also engaged in extensive public outreach efforts, conducted salmon and steelhead studies, and discussed with other agencies management strategies for factors affecting salmon and steelhead mortality.

The action agencies' (the Corps, BOR, and Bonneville) recovery efforts have been primarily focused on the dams and water projects. For example, the Corps constructed a new bypass system at Bonneville Dam's second powerhouse that Corps officials expect will increase juvenile survival by 6 to 13 percent. The Corps has also installed fish screens to guide juvenile fish to the bypass systems and away from the turbines. Figure 6 shows a fish screen at John Day Dam in Oregon. BOR officials stated that the agency has begun implementing, and will implement, all of the actions in the FCRPS biological opinion that apply to it. For example, among other things, it has designed and constructed fish screens and fish passage facilities for irrigation diversions on its projects. Bonneville contracts directly with federal, state, tribal, and other entities to protect, mitigate, and enhance fish and wildlife in the Columbia River Basin, in addition to managing FCRPS for fish as well as power. For example, Bonneville has provided the Yakama Indian Nation with funding to construct and operate a tribal hatchery and has provided federal, state, tribal, and nonfederal entities with funding to monitor juvenile fish populations and to improve and acquire additional salmon and steelhead habitat.

The natural resource agencies' (Forest Service, FWS, and BLM) recovery actions have been primarily aimed at implementing an aquatic conservation strategy that consists of aquatic and riparian habitat protection; fish distribution; watershed restoration; land acquisition; coordination with other agencies, tribal governments, and so forth; and monitoring and evaluation. For example, in the past 5 years, the Forest Service improved over 2,000 miles of stream banks and 9,000 acres of riparian area using various methods, such as plantings to reduce erosion and placing logs in streams to provide deeper pools. FWS, in conjunction with the Confederated Tribes of the Umatilla Indian Reservation and the Oregon Department of Fish and Wildlife, transferred 350,000 salmon from a hatchery to the Umatilla River to increase local returns. BLM habitat improvement projects include riparian plantings, such as 50 acres in the Grande Ronde River Basin, and erosion control activities, such as the Hayden Creek road sediment reduction project.

The other agencies (EPA, NRCS, USGS, and BIA) have initiated a wide range of recovery actions. For example, EPA developed a temperature model for the Columbia and Snake rivers that provides a foundation for making decisions on hydroelectric operations. During the last 5 years, NRCS worked with over 23,000 individual landowners to develop resource management plans for 4.8 million acres of land and to restore over 10,000 acres of wetlands. USGS prepared an annual report quantifying juvenile salmon and steelhead predation by the Northern Pikeminnow. BIA provided tribal fish commissions, including the Columbia River Inter-Tribal Fish Commission, with funding to address certain provisions of the Pacific Salmon Treaty.
Additional examples of salmon and steelhead recovery actions taken by NMFS, the action agencies, the natural resource agencies, and the other agencies are listed in appendix VI.

The data needed to isolate and quantify the effects of recovery efforts on returning fish populations are generally not available because of numerous factors. These factors include large natural yearly fluctuations in salmon and steelhead populations, changing weather and ocean conditions, the length of time it takes for project benefits to materialize, and the multiyear life cycles of the fish.

Returning salmon and steelhead populations have fluctuated widely from year to year. For example, over the past 25 years, annual adult returns for all ESA-listed and unlisted salmon and steelhead counted at Bonneville Dam, the first dam on the Columbia River, averaged 660,000, but counts for individual years varied widely. As shown in figure 7, the number of returning adults went from 638,000 in 1991, down to 411,000 in 1995, and up to 1,877,000 in 2001. During the same time period, total ESA-listed and unlisted adult salmon and steelhead returns counted at Lower Granite Dam, the last dam that adult fish encounter on the Snake River before entering Idaho, averaged about 116,000. But like counts at Bonneville, the counts at Lower Granite for all salmon and steelhead fluctuated widely, as shown in figure 8. Similar fluctuations occurred for individual ESA-listed salmon and steelhead populations. For example, at Lower Granite Dam, an average of 72 ESA-listed Snake River Sockeye salmon have returned annually for the past 25 years, but actual counts varied from 8 returning in 1991, down to 3 returning in 1995, up to 299 returning in 2000, and down to 36 returning in 2001. Figure 9 shows the counts of returning adult Snake River Sockeye salmon at Lower Granite Dam.

The 25-year averages for Bonneville, Lower Granite, and Snake River Sockeye were greatly influenced by the relatively higher numbers of adults returning to the basin in 2000 and 2001. For example, adult returns in 2000 and 2001 represented 17 percent of all returning adults counted at Bonneville Dam over the past 25 years and 21 percent of returning adults counted at Lower Granite Dam in the same time period. Similarly, adult returns in 2000 and 2001 represented 18 percent of returning adult Snake River Sockeye. (Actual counts for listed and unlisted salmon and steelhead at Bonneville and Lower Granite and listed Snake River Sockeye at Lower Granite are displayed in appendix VII.)

Although the precise reasons for the large number of adult returns in 2000 and 2001 are unknown, federal officials stated that the relatively high returns might be largely attributable to favorable ocean conditions, which mask the benefits of the actions they have taken. Additionally, they believe the above-average snow pack in 1996, 1997, 1998, and 1999 may have contributed to higher juvenile survival rates in fresh water during those years because the runoff increased water flows in tributaries and the mainstem Columbia and Snake rivers. Depending on the species, many of these juveniles would have returned as adults in 2000 and 2001. Cyclical changes in ocean temperatures also affect salmon and steelhead survival. For example, cooler ocean temperatures off the West Coast from 1999 through 2001 increased the number of small fish that salmon feed upon and have likely increased salmon and steelhead survival and contributed to higher returns.
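To illustrate how shares like the 17 percent figure above are computed, the following Python sketch works through the arithmetic for Bonneville Dam. The 25-year average (660,000) and the 2001 count (1,877,000) are taken from this report; the 2000 count used below is a hypothetical placeholder, not a reported figure.

```python
# Illustrative share calculation for adult returns at Bonneville Dam.
YEARS = 25
AVERAGE_RETURN = 660_000                 # reported 25-year average
total_returns = YEARS * AVERAGE_RETURN   # about 16.5 million adults

count_2001 = 1_877_000                   # reported 2001 return
count_2000 = 900_000                     # hypothetical placeholder value

share = (count_2000 + count_2001) / total_returns
print(f"2000-2001 share of 25-year returns: {share:.0%}")  # about 17 percent
```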
The length of the ocean temperature cycle and its relationship to salmon and steelhead survival, however, are not clear. Finally, salmon and steelhead generally have a 3- to 5-year spawning, rearing, and maturation cycle, so it takes years before the benefits of some actions materialize. For example, improving bypass facilities at the dams reduces juvenile salmon and steelhead mortality, but the fish's ultimate ability to return to spawn depends on many other factors, such as the availability of food in the ocean to allow them to mature; the avoidance of predators, such as birds, marine mammals, other fish, and fishermen; and favorable passage conditions when they return upriver to spawn.

However, actions that increase reproduction, improve passage and habitat conditions, reduce erosion and pollution, use hatcheries for recovery, ensure careful harvest management, and educate the public all improve salmon and steelhead survival rates. While they cannot quantify or isolate the benefits of individual actions, agency officials are confident that the composite recovery actions taken to date are having positive effects, generally improving the conditions for freshwater survival and ultimately resulting in higher numbers of returning adult salmon and steelhead than would have occurred otherwise. For example, NMFS estimates that juvenile survival rates for Snake River spring/summer Chinook salmon increased from 10 to 13 percent during the 1970s to 31 to 59 percent after fish passage improvements were made at the dams during the 1990s. These are estimates, however, with no quantification of the actual number of returning adult salmon and steelhead. The number of returning adults is important because other studies have shown that, even after fish successfully pass the dams, using bypass facilities increases fish mortality downstream.

We provided the Department of Agriculture (Forest Service and NRCS), the Department of Commerce (NMFS), the Department of Defense (Corps), the Department of the Interior (BIA, BOR, BLM, FWS, and USGS), Bonneville, and EPA with a draft of this report for review and comment. We received written comments from all agencies except the Corps and EPA and are including these comments in appendixes VIII through XI of this report. The Corps provided oral comments, chiefly of an editorial nature, which we have incorporated into the report as appropriate. EPA reviewed the report and had no comments. The responding agencies, with the exception of Bonneville, commented that the report accurately portrayed the roles of the agencies, their expenditures, and recovery actions. These agencies also provided clarifications on several technical points that have been included in the report as appropriate.

Bonneville took issue with three points regarding our report. First, Bonneville commented that the report does not fully reflect its role in funding salmon and steelhead recovery efforts. For example, Bonneville stated that the report does not explain that it reimburses the U.S. Treasury for most of the expenditures for capital improvements at the Corps' and BOR's hydroelectric projects, as well as operation and maintenance costs at these projects and at FWS's Lower Snake River Compensation Plan hatcheries. We agree that Bonneville is a major supplier of salmon and steelhead recovery moneys, and clarifications were made in the report to reflect its role.
However, we were not asked to provide information on the source of funds for salmon and steelhead recovery efforts but rather on how much the agencies expended on such efforts. Therefore, the report reflects the funds Bonneville is referring to as expenditures by other federal agencies, such as the Corps, BOR, and FWS. Second, Bonneville commented that the report does not fully describe that the funds it provides to other agencies come from ratepayer receipts and that, as a result, much of the salmon and steelhead recovery expenditures shown in the report are paid for by those who buy the electric power the dams generate. While the report notes that ratepayer receipts fund these expenditures, we have added additional details on the source of the funds that Bonneville uses to cover agencies' expenditures and on how Bonneville reimburses the U.S. Treasury for agencies' capital and operation and maintenance costs. Finally, Bonneville expressed concern that we did not include the cost of replacement power and lost power revenues in our expenditure totals. We did not include these costs because they do not reflect expenditures for actual recovery actions and are difficult to derive, since replacement power costs and lost revenues could result from other management decisions that are not related to salmon and steelhead recovery.

We conducted our work from July 2001 through June 2002 in accordance with generally accepted government auditing standards. Appendix II contains the details of our scope and methodology.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Agriculture, the Secretary of Commerce, the Secretary of Defense, the Secretary of the Interior, the Administrator of EPA, the Administrator of Bonneville, and interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, you can contact me at (202) 512-3841. Key contributors to this report are listed in appendix XII.

During the course of our work, agency officials and others brought to our attention two issues that may affect the salmon and steelhead recovery effort: (1) the development of a Columbia River basinwide strategic salmon and steelhead recovery plan and annual performance plans to facilitate and track recovery efforts and (2) an Endangered Species Act (ESA) consultation-tracking system to identify and eliminate unnecessary delays to projects that are specifically designed to benefit fish, including salmon, steelhead, and other threatened or endangered species. Although we have not conducted detailed work on these issues, they are summarized as follows.

A basinwide strategic recovery plan that identifies overall recovery goals, estimated total costs, and specific agencies' actions, together with an annual performance plan that identifies annual funds available and projects to be completed, would help the agencies focus their actions and provide a means to assess overall recovery efforts. The ESA requires that the National Marine Fisheries Service (NMFS) develop and implement a recovery plan for each listed salmon and steelhead species.
The ESA requires that this plan include (1) site-specific management actions; (2) objective and measurable criteria that, when met, will result in the species' delisting; and (3) estimates of the time and cost required to implement the measures and achieve the goal of delisting the species. Because NMFS has not yet developed a recovery plan, the agencies use a variety of plans, strategies, and guidance to direct their recovery efforts. Among others, the guidance that each agency uses includes its own mission plans, NMFS's biological opinion for its actions that may adversely affect or jeopardize listed species, and the Federal Caucus' All-H (hydropower, hatcheries, harvest, and habitat) recovery strategy. However, two recent publications, one prepared by a scientific team and the other by a private organization, have raised concerns about the potential success of recovery efforts that follow these plans and strategies, whether they are used individually or combined. Agency officials have also stated that a recovery plan that all entities recognize is needed to help direct their efforts toward those watersheds and actions that can do the most for recovery.

NMFS is in the process of developing a basinwide recovery plan for ESA-listed salmon and steelhead, but that plan is several years away from completion. According to NMFS officials, the plan is being developed in phases. The first phase is to identify, among other things, target populations and delisting criteria. The second phase is to identify the actions needed to meet the target populations and delisting criteria. In 2004, NMFS expects a plan of action to be in place for the ESA-listed salmon and steelhead on the lower Columbia River. The plans for the middle and upper Columbia River salmon and steelhead populations are to be completed after 2004, but no specific completion dates have been set.

Once a basinwide recovery plan is completed, annual performance plans will be needed to implement it. The Government Performance and Results Act of 1993 (GPRA) requires agencies to prepare and monitor annual performance plans to successfully implement their long-range strategic plans. Under GPRA, the annual performance plan serves as the basis for setting annual program goals and for measuring program performance in achieving those goals. The annual performance plan provides a direct link between long-term goals and day-to-day operations. The annual performance plan should contain, among other things, annual goals that can be used to gauge progress toward achieving strategic long-term goals, standards that will be used to assess progress, and information on the funds available to implement the annual performance plan. The Federal Caucus and the President's Council on Environmental Quality recently started identifying the federal appropriations and Bonneville power receipts that are available annually for salmon and steelhead recovery.

Under the consultation requirements of the ESA, federal agencies must consult with NMFS to determine whether a proposed action that is federally authorized, carried out, or funded is likely to jeopardize the continued existence of any threatened or endangered salmon or steelhead species or adversely modify or destroy its critical habitat. Unless a longer time period is mutually agreed to by both NMFS and the consulting agency, NMFS has 135 days to make this determination and issue a biological opinion that summarizes its findings.
Officials of several other federal agencies have said that the ESA consultation process with NMFS sometimes takes too long and that projects designed to benefit fish, including salmon and steelhead, are delayed or prevented from being completed. For example, Forest Service officials reported that, because of the lengthy ESA consultation process, funding had to be turned back for two road culvert projects. In each case, Forest Service officials concluded that replacing the culverts would open up miles of blocked habitat to fish. After submitting the project consultation packages to NMFS, however, Forest Service officials stated that they waited over a year for a response. Because these projects were to be funded with "one-year" money, the long delay resulted in the return of the money without the completion of the projects. BOR officials reported similar problems, stating that a delay in completing consultation not only risks the loss of funds but can also delay projects designed to save fish by at least a year.

NMFS officials in the Pacific Northwest stated they were aware of the agencies' concerns about untimely ESA consultations and provided several reasons why delays may occur, including the recent hiring of a number of NMFS staff who were inexperienced with the consultation process and an increase in the number of consultations. According to NMFS officials, over the past 5 years, the number of staff in its Habitat Conservation Division, where many consultations occur, has increased from 6 to 120. As the new staff acquire experience, officials said, the timeliness of consultation should improve. Furthermore, NMFS officials stated that the number of formal consultations involving salmon and steelhead in the basin almost doubled, from 46 in 1997 to 88 in 2001. NMFS officials also said that the agencies' concerns might be somewhat overstated because agencies often mistakenly assume that the time spent on informal consultation is part of the formal consultation process. Informal consultations in the Pacific Northwest, which numbered 203 in 1997, 359 in 1999, and 232 in 2001, are discussions that take place while NMFS reviews the biological assessment package submitted by an agency for completeness—i.e., inclusion of all the information needed to issue a biological opinion.

Because NMFS does not track ESA consultations, we could not verify the magnitude, frequency, or causes of any such delays. However, NMFS recognizes the need to track the number, status, and timeliness of consultations and plans to implement a consultation-tracking system in 2002. NMFS officials said they and other agency officials need to know how well the consultation process is working and whether the process is taking so long that federal projects, even those beneficial to salmon and steelhead, are being delayed.

We were asked to (1) identify the roles and responsibilities of the federal agencies involved with the recovery of Columbia River Basin salmon and steelhead, (2) determine how much they have spent collectively on recovery efforts, and (3) determine what actions they have undertaken and what they have accomplished.
In conducting our work, agency officials and others brought to our attention two issues that may affect the recovery effort: the development of a strategic recovery plan to direct overall recovery efforts, along with annual performance plans to implement the strategic plan, and the development of a system to track Endangered Species Act consultations to ensure that recovery projects are not unnecessarily delayed by the consultation process.

To identify the roles and responsibilities of the federal agencies involved in salmon and steelhead recovery, we identified 11 federal agencies with significant responsibility for salmon and steelhead recovery in the Pacific Northwest. These agencies were either members of the Federal Caucus or were referred to us by members of the Federal Caucus. We interviewed 123 officials from the 11 agencies, including officials across the various management levels, to determine the role that each agency plays in the recovery effort; the laws and mandates with which each agency must comply while also complying with the ESA; the plans that each agency uses to guide its recovery efforts; the entities with which they coordinate; their membership in groups, such as committees and task forces; agencies' experiences with the ESA consultation process; and each agency's opinion of the overall recovery effort to date. We also interviewed officials from the states of Idaho, Oregon, and Washington; the Columbia River Inter-Tribal Fish Commission; individual Indian tribes; and the Northwest Power Planning Council. These interviews were primarily conducted in Seattle, Washington; Portland, Oregon; and Boise, Idaho, but also included smaller communities in eastern Oregon and Washington. In addition to the interviews, we reviewed the recovery plans cited in the interviews, previous GAO reports, and other studies and reports either referred to us or discovered during our research.

To determine the amount of federal funds the agencies collectively expended on salmon and steelhead mitigation, restoration, and recovery in the Columbia River Basin, we asked each of the 11 agencies to provide us with an estimate of overall salmon and steelhead expenditures for fiscal year 1982 through fiscal 1996 and detailed expenditure information for fiscal year 1997 through fiscal 2001. We requested that the agencies provide expenditure data in two main categories: (1) expenditures made specifically to benefit salmon and steelhead (specific expenditures) and (2) those that were made for another purpose but also benefited salmon and steelhead (nonspecific expenditures). Within each of these categories, we requested further detail on how the money was spent. For example, we asked the agencies to identify expenditures by type—projects, research, monitoring, consultation/coordination, litigation, or administration. Because the 11 agencies provided us with a combined dollar estimate of expenditures for fiscal year 1982 through fiscal 1996, we did not adjust these estimates to account for inflation. The remaining data, supplied for individual fiscal years 1997 through 2001, have been adjusted to the constant base of 2001 dollars. Because funds used for salmon and steelhead recovery are seldom specifically identified as such, and because each agency has a different accounting system, agency officials were asked to provide actual numbers whenever possible and estimates when specific numbers were not available.
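To make the constant-dollar convention concrete, the following minimal Python sketch shows one common way nominal expenditures can be restated in 2001 dollars using a price deflator. The deflator values here are hypothetical placeholders, not the index actually used for this report.

```python
# Minimal sketch of a constant-dollar adjustment to a 2001 base year.
# Deflator values are hypothetical placeholders for illustration only.
deflator = {1997: 0.92, 1998: 0.94, 1999: 0.95, 2000: 0.98, 2001: 1.00}

def to_2001_dollars(nominal_millions: float, year: int) -> float:
    """Restate a nominal expenditure (in millions) in constant 2001 dollars."""
    return nominal_millions * deflator[2001] / deflator[year]

# Example: $100 million spent in fiscal year 1997
print(f"${to_2001_dollars(100.0, 1997):.1f} million in 2001 dollars")  # $108.7
```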
In conducting our analysis, we did not independently verify or test the reliability of the expenditure data provided by the agencies.

To identify the actions that the agencies have taken and what they have accomplished to recover salmon and steelhead, we obtained fish count data from the Fish Passage Center on the number of adult salmon and steelhead returns to Bonneville and Lower Granite Dams for the past 25 years. In addition, we sent the agencies a data-collection instrument asking them to furnish us with a list of representative actions that they had taken to assist in the recovery effort. We also reviewed accomplishment reports that some of the agencies are required to prepare and compared the data in the reports with what they provided us.

In the course of our work, agency officials and others brought to our attention two issues that may affect the recovery effort: the development of a strategic recovery plan to direct overall recovery efforts, along with annual performance plans to implement the strategic plan, and the development of a system to track ESA consultations to ensure that recovery projects are not unnecessarily delayed by the consultation process. To obtain additional information on these issues, we reviewed (1) the Government Performance and Results Act and the ESA; (2) the agencies' various mission-related mandates and salmon and steelhead recovery strategies and critiques of those plans and strategies; (3) the cross-cutting budget prepared by the Federal Caucus and the President's Council on Environmental Quality; (4) previous GAO reports on restoring the Florida Everglades, GPRA, and ESA consultations; and (5) data requested from the National Marine Fisheries Service on the number and timeliness of consultations conducted in the past 5 years. We performed our work at various locations in the states of Idaho, Oregon, and Washington from August 2001 through June 2002 in accordance with generally accepted government auditing standards.

Federal agencies must comply with the requirements of numerous laws, treaties, executive orders, and court decisions while recovering salmon and steelhead. Table 5 lists the selected laws that federal agencies reported as guiding their actions.

This appendix shows the committees, task forces, and groups that the federal agencies reported belonging to or whose meetings they attend. Table 6 shows the main committees, task forces, and groups that collaborate on salmon and steelhead recovery, along with their purpose and the frequency of their meetings. Table 7 shows the purpose and meeting frequency of other groups with limited functional or geographic roles in salmon and steelhead recovery.

During fiscal year 1982 through fiscal 1996, the 11 federal agencies estimated they expended almost $1.8 billion (unadjusted for inflation) in federal funds and Bonneville ratepayer revenues to recover salmon and steelhead in the Columbia River Basin. These agencies also estimate they expended almost another $1.5 billion (in 2001 dollars) from fiscal year 1997 through fiscal 2001. The $1.5 billion consists of $968.0 million expended directly by federal agencies and $537.2 million that the federal agencies received and then provided to nonfederal entities, such as the states and Indian tribes. The $968.0 million was expended on projects, research studies, monitoring actions, Endangered Species Act consultations, non-ESA consultations on salmon and steelhead issues, litigation involving salmon and steelhead issues, and program administration costs.
In addition to the $1.5 billion expended by federal agencies or provided by federal agencies to nonfederal entities for specific salmon and steelhead recovery actions, federal agencies also estimated that they expended $302 million (in 2001 dollars) in the last 5 fiscal years on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead, such as road improvements that reduce erosion. For the period covering fiscal year 1997 through fiscal 2001, each agency's expenditures follow. The agencies are listed in alphabetical order.

The U.S. Army Corps of Engineers estimated it expended about $769 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. The Corps estimated it expended, for fiscal year 1997 through fiscal 2001, approximately $590 million (in 2001 constant dollars) specifically for salmon and steelhead recovery efforts, as shown in table 8. Of the $590 million, more than $430 million was expended on such projects as the construction of juvenile fish bypass facilities, the operation and maintenance of juvenile and adult passage facilities and fish-hauling actions, and the development and installation of fish screens to steer juvenile fish away from the turbines at Bonneville and John Day dams. The Corps also expended over $8.6 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead. The Corps did not report providing nonfederal entities with any funds.

The Bonneville Power Administration estimated that it expended over $487 million (in unadjusted dollars) in power receipts during fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. Bonneville estimated that it expended, for fiscal year 1997 through fiscal 2001, over $26 million (in 2001 constant dollars) specifically for salmon and steelhead restoration efforts, as shown in table 9. Of the $26 million, almost $22 million was for contract administration actions. Because Bonneville provides other entities with power receipts for projects, research, and monitoring, it has no expenditures in these categories. The costs shown above include the direct program costs that Bonneville itself has expended on salmon- and steelhead-related activities. In addition to its direct program costs, however, Bonneville uses ratepayer revenues to (1) reimburse the U.S. Treasury for the hydroelectric share of Corps, BOR, and FWS operation and maintenance costs and other noncapital expenditures for fish and wildlife and (2) fund the hydroelectric share of the capital investment costs of the Corps' and BOR's fish and wildlife projects. Bonneville estimates that its operation and maintenance reimbursements from fiscal year 1997 through fiscal 2001 were $215.1 million and that its funding of capital investments for the same time period was $453.9 million. These costs have been included in the totals of the agencies that originally expended them. Bonneville officials indicated that they have also incurred significant nonspecific salmon and steelhead recovery costs. Examples they cited of nonspecific salmon and steelhead costs included a portion of Bonneville's electricity rate justification case that includes fish protection and the programmatic National Environmental Policy Act documents for watersheds.
While Bonneville officials stated that these costs are quite extensive, they did not furnish us with any estimates. Finally, Bonneville estimated that it provided state, tribal, and private entities with approximately $379 million (adjusted to 2001 dollars) from fiscal year 1997 through fiscal 2001. The states, tribes, and other entities used these funds for many actions, including habitat restoration and support of the Northwest Power Planning Council's fish and wildlife program.

The Bureau of Indian Affairs (BIA) estimated that it expended more than $41 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. BIA estimated that it expended, for fiscal year 1997 through fiscal 2001, over $360,000 (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 10. Of the $360,000, more than $300,000 was expended on consultation actions, such as attending meetings, other coordination actions, and contract administration. Because BIA provides other entities with funds for projects, research, and monitoring, it did not report any expenditures in these categories. BIA estimated it provided tribal organizations and individual tribes, including the Columbia River Inter-Tribal Fish Commission, the Confederated Tribes of the Warm Springs Reservation, the Nez Perce Tribe, the Confederated Tribes of the Umatilla Reservation, the Yakama Indian Nation, the Colville Tribe, the Fort Hall Shoshone, the Upper Columbia United Tribes, and the Spokane Tribe, with over $29 million (adjusted to 2001 dollars) during fiscal year 1997 through fiscal 2001. BIA also expended more than $25,000 (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead.

The Bureau of Land Management estimated that it expended more than $22 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. BLM estimated that it expended, for fiscal year 1997 through fiscal 2001, approximately $12 million (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 11. Of the $12 million, more than $7.5 million was expended on such projects as the Fishermen's Bend, Eaton, and Sandy River Corridor land purchases; Hill's Creek road decommissioning and culvert removal; Lemhi riparian habitat conservation; and the Hayden Creek road sediment reduction project and other monitoring activities. BLM also expended over $14 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead. BLM provided nonfederal entities with $136,000.

The Bureau of Reclamation estimated that it expended over $144 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. BOR estimated that it expended, for fiscal year 1997 through fiscal 2001, almost $62 million (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 12.
Of the $62 million, more than $58 million was expended on Columbia and Snake River salmon- and steelhead-recovery projects and on several segments of the Yakima River Basin water enhancement project—including its tributary program, water acquisition program, water augmentation program, and habitat acquisition program. Of the $58 million, approximately $27 million was expended on the operation and maintenance of fish screen facilities in the Yakima River Basin. BOR also expended over $10 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead. BOR did not report providing nonfederal entities with any funds.

The Environmental Protection Agency estimated that it expended no funds from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. EPA estimated that it expended, for fiscal year 1997 through fiscal 2001, $67,000 (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 13. Of the $67,000, $47,000 was expended on the salaries of those participating in ESA consultation actions and the remainder on other meeting and coordination actions. EPA estimated that it had no expenditures for projects, research, or monitoring. EPA identified no funds that it provided to nonfederal entities, nor did it identify any funds expended on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead.

The U.S. Fish and Wildlife Service estimated that it expended over $182 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. FWS estimated that it expended, for fiscal year 1997 through fiscal 2001, almost $97 million (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 14. Of the $97 million, more than $78 million was expended on such projects as the Abernathy Fish Technology Center, the Kooskia National Fish Hatchery, the Little White Salmon/Willard National Fish Hatchery, the Lower Snake River Compensation Plan, the Lower Columbia River Fish Health Center, and the Mid-Columbia River Fishery Resources Office. FWS also estimated it provided state and tribal entities with over $47 million (adjusted to 2001 dollars) from fiscal year 1997 through fiscal 2001. The states and tribal entities used these funds for hatchery improvement studies, estuary research initiatives, and salmon reproductive biological research. Finally, FWS expended another $4.4 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead.

The U.S. Forest Service estimated that it expended about $118 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. The Forest Service estimated that it expended, for fiscal year 1997 through fiscal 2001, almost $106 million (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 15. Of the $106 million, more than $87 million was expended on such projects as watershed improvements, flood area restoration, burned-area emergency restoration, and land acquisition.
The Forest Service also expended more than $131 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead. The Forest Service did not report providing nonfederal entities with any funds.

The National Marine Fisheries Service estimated that it expended about $21 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. NMFS estimated that it expended, for fiscal year 1997 through fiscal 2001, approximately $49 million (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 16. Of this amount, almost $34 million was expended on consultation actions under the Endangered Species Act and on such research projects as the effects of hatchery operations on small wild salmon populations. NMFS estimated it also provided state and tribal groups with over $81 million (adjusted to 2001 dollars) from fiscal year 1997 through fiscal 2001. The states and tribal groups used these funds for many actions, including hatchery operations to mitigate the negative impacts on fish caused by the dams. Finally, NMFS expended another $6 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead.

The Natural Resources Conservation Service (NRCS) estimated that it expended more than $3.6 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. NRCS estimated that it expended, for fiscal year 1997 through fiscal 2001, approximately $8 million (in 2001 constant dollars) specifically for salmon and steelhead recovery efforts, as shown in table 17. Of the $8 million, almost $7 million was expended on such projects as salmon-recovery initiatives in the states of Idaho, Oregon, and Washington and Conservation Technical Assistance to various soil conservation districts for salmon and steelhead recovery. NRCS estimated that it had no expenditures for research and monitoring. NRCS also expended more than $123 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited fish but were not specifically directed at salmon or steelhead. NRCS officials stated that these expenditures assisted farmers, ranchers, and other private landowners in managing their natural resources in a sustainable manner without degradation while complying with federal, state, and local natural resources laws. Most of these expenditures provided cost-share funds to private landowners for installing and managing conservation practices through the Environmental Quality Incentives Program, the Wetland Reserve Program, the Wildlife Habitat Incentive Program, and the Small Watershed Program. A portion of these funds was used by the agency to provide landowners with technical assistance to plan and implement these conservation programs.

The U.S. Geological Survey (USGS) estimated that it expended more than $12 million (in unadjusted dollars) from fiscal year 1982 through fiscal 1996 on actions in the Columbia River Basin to benefit salmon and steelhead. USGS estimated that it expended, for fiscal year 1997 through fiscal 2001, over $19.5 million (in 2001 constant dollars) specifically for salmon- and steelhead-recovery efforts, as shown in table 18.
Of the $19.5 million, more than $16 million was expended on such research projects as the genetic effects of hatchery fish introduction on the productivity of naturally spawning salmon, the significance of other salmon and steelhead predators, the development of prey protection measures for juvenile salmon and steelhead in Columbia and Snake River reservoirs, and the behavior and survival of hatchery fall Chinook salmon after being released into the Snake River. Because USGS's Western Fisheries Research Center is primarily a research facility, it did not report any project or monitoring expenditures. USGS also expended more than $3.3 million (adjusted to 2001 dollars) on changes to mission-related projects that benefited, but were not specifically directed at, salmon or steelhead. USGS did not report providing nonfederal entities with any funds. Each of the 11 federal agencies with significant responsibilities for salmon and steelhead recovery in the Columbia River Basin has taken many actions in the past 5 years to fulfill those responsibilities. Some actions were undertaken specifically to benefit fish, while others were undertaken in pursuit of other agency mandates or programs. In both instances, the effect of the actions taken on the number of returning fish is not always clear and often takes years to materialize. Below, in alphabetical order, are examples of actions taken by each agency. The U.S. Army Corps of Engineers operates numerous hydroelectric dams in the Columbia River Basin. Each dam is authorized for specific purposes, such as flood control, navigation, power production, water supply, fish and wildlife, and recreation. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin. Consulted with NMFS and FWS on the operation of FCRPS and other projects in the Columbia River Basin; developed, in conjunction with the Federal Caucus, the All-H Strategy for restoring threatened and endangered salmon and steelhead; and, in conjunction with Bonneville and BOR, prepared 1-year and 5-year plans to implement the biological opinion on the Federal Columbia River Power System. Constructed juvenile bypass systems at seven of the eight mainstem dams to improve juvenile fish guidance and survival rates. For example, the juvenile bypass system at Bonneville Dam's second powerhouse was expected to increase juvenile survival by 6 to 13 percent, depending on the species. Redesigned and/or rehabilitated fish ladders to improve passage efficiency. Constructed spillway deflectors at the John Day and Ice Harbor dams to allow higher spill flows and increase juvenile passage. Constructed new facilities and modified operations to enhance juvenile fish transportation. For example, the Corps improved or replaced the collecting and holding facilities at the four dams that collect juvenile fish, purchased two additional barges to transport juvenile fish, modified existing barges to provide better fish release systems, and extended the transport season on the Snake River. Rehabilitated turbines at Bonneville Dam's first powerhouse, resulting in a 2 percent increase in juvenile fish survival. Constructed a monitoring facility at John Day Dam to obtain data on juvenile passage and other research needs. Installed a prototype surface bypass system at Lower Granite Dam and evaluated the effects of various configurations of behavioral guidance structures.
Conducted a study to identify the characteristics of dissolved gases resulting from spills at Columbia River projects and to identify and evaluate alternatives for spillway modifications to reduce dissolved gas production to benefit fish passage while meeting water quality standards. Conducted juvenile and adult passage evaluation studies at eight dams on the Columbia and Snake rivers to help determine improvements in facilities and operations that may be necessary to increase spawning success. The Pacific Northwest Electric Power Planning and Conservation Act directs the Bonneville Power Administration to use its funding authorities to protect, mitigate, and enhance fish and wildlife affected by the construction and operation of the Federal Columbia River Power System. Primarily, Bonneville provides other agencies with funding to undertake actions to meet this goal. In doing so, Bonneville is to act consistently with the Northwest Power Planning Council's fish and wildlife program while ensuring an adequate, economical, and reliable power supply. Examples of the actions that Bonneville has taken to benefit salmon and steelhead in the Columbia River Basin include the following: Provided federal, state, tribal, and other entities with funding to protect and enhance fish and wildlife affected by hydropower development in the Columbia River Basin. Worked with other federal agencies to protect and rebuild species listed under the Endangered Species Act. In conjunction with the Federal Caucus, developed the All-H Strategy for restoring threatened and endangered salmon and steelhead in the Columbia River Basin. Consulted with the National Marine Fisheries Service and the U.S. Fish and Wildlife Service on the operation of the Federal Columbia River Power System in the Columbia River Basin. In conjunction with the Corps of Engineers and Bureau of Reclamation, prepared 1-year and 5-year plans to implement the biological opinion on the Federal Columbia River Power System. Made fish protection the priority of FCRPS operations (except under flood control and power emergencies). Provided, on average, 7.2 million acre-feet (50-water-year average) of flow augmentation annually (this equates to approximately 1.5 times the storage capacity of Grand Coulee Dam). Worked with the Corps and BOR to increase fish passage survival, on average, by 5 percent or more at each dam. Funded predator control throughout FCRPS and the estuary to save approximately 7 million to 12 million juvenile salmon and steelhead per year. This equates to an approximate 5 to 10 percent increase in juvenile fish survival. Achieved, together with the Corps and BOR, an average in-river survival of juveniles through FCRPS that is now higher than ever measured. The Bureau of Indian Affairs is a trustee of fishing rights reserved by certain tribes in their treaties with the United States. As a party to the U.S. v. Oregon case, BIA plays a role in protecting, rebuilding, and enhancing upper Columbia River fish runs while providing harvests for both treaty Indian and non-Indian fisheries. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin: Monitored actions of the Federal Caucus and others that affect tribal trust resources. Communicated its concerns regarding the All-H Strategy and other plans, including harvest negotiations and Mid-Columbia Habitat Conservation Plans.
Provided the Columbia River Inter-Tribal Fish Commission with funding to, among other things, implement its recovery plan, conduct fishery enforcement, develop an Energy Vision report, implement certain aspects of the Pacific Salmon Treaty, and provide input on federal actions affecting salmon recovery, including the Bonneville Power Administration's rate case. Provided individual tribes, including the Umatilla Tribe, the Yakama Indian Nation, the Warm Springs Tribe, the Nez Perce Tribe, the Colville Tribe, and the Shoshone-Bannock Tribes, with funding; actions performed by the tribes with these funds include the construction of hatchery and acclimation facilities and stream restoration. The Bureau of Land Management manages lands for multiple uses, including livestock grazing, recreation, mineral production, timber, and fish and wildlife. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin: Acquired land for conservation purposes, including land at Fisherman's Bend and on the Sandy River corridor. Performed road and trail maintenance, decommissioned roads, conducted culvert inventories, and replaced culverts to reduce erosion that can carry sediment into streams. Performed habitat restoration and protection actions. Specific actions include planting 50 acres of riparian habitat on the lower Grande Ronde River, constructing 1 mile of cattle fencing and completing 3 acres of planting in the Grande Ronde Basin, improving in-stream habitat through the placement of boulders and large woody debris, rehabilitating areas burned by fire to reduce sedimentation, and reducing fuel loads to reduce the risk of future fires. Conducted several studies, including water quality, temperature, and flow monitoring on numerous streams in the basin; juvenile salmon and steelhead abundance and run timing in the Clackamas River; the effects of boulder placement on fish in streams in southwest Oregon; the effects of watershed disturbances on fish habitat; and an inventory of stream habitat. Prepared biological assessments to meet ESA consultation requirements. Coordinated with the Federal Energy Regulatory Commission during the relicensing of the Hells Canyon and Pelton/Round Butte projects. Increased its staff of fishery biologists to address fish-related issues in land management actions. Provided the federal liaison and board member for the Willamette River Restoration Initiative, a pilot project under the Oregon State Salmon and Watershed Recovery Plan. Participated in the Interagency Implementation Team to implement the biological opinions for a federal land management conservation strategy for salmon and steelhead, commonly referred to as PACFISH. Participated in the Federal Caucus. Participated with private landowners, watershed councils, Native American tribes, and other partners in the development and implementation of restoration plans and projects. The Bureau of Reclamation operates numerous hydroelectric dams in the Columbia River Basin. Each dam may be authorized for specific purposes, including irrigation, power production, and recreation. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin: Consulted with NMFS on the operation and maintenance of the Federal Columbia River Power System and 19 other BOR projects in the Columbia River Basin.
In conjunction with requirements under the biological opinion, prepared and submitted annual and 5-year plans to NMFS and the U.S. Fish and Wildlife Service. Initiated implementation of 61 of the 199 reasonable and prudent alternatives included in the biological opinion for the Federal Columbia River Power System that apply to BOR, including dam operations; water conservation; water quality; hatchery operations; tributary habitat improvements; and research, monitoring, and evaluation. Developed, in conjunction with the Federal Caucus, the All-H Strategy for restoring threatened and endangered salmon and steelhead. Worked with the Idaho legislature and local water masters in Idaho and Oregon to meet flow augmentation standards required by the 1995 biological opinion. Completed nine consultations for biological opinions and other purposes. Prepared Tributary Enhancement Water Conservation Demonstration Project reports for the Lemhi River Basin in Idaho and the Wallowa and John Day River basins in Oregon. Conducted studies on dissolved gas abatement and management at Grand Coulee Dam. Designed and built fish screens and fish passage facilities for irrigation diversions on authorized BOR projects. Provided federal and state agencies, tribes, irrigation districts, and watershed councils with technical assistance to replace or improve fish screens and fish ladders at diversions in the Lemhi River Basin in Idaho; in the Deschutes, John Day, Umatilla, Wallowa, and Willamette River basins in Oregon; and in the mid-Columbia, Okanogan, and Yakima basins in Washington. Initiated the Water Conservation Field Services Program to encourage the efficient use and conservation of water at federal reclamation projects. This program provides water districts and water users with technical and financial assistance and supports watershed partnerships to improve fish and wildlife habitat. Funded and worked with numerous Indian tribes, including the Nez Perce, Shoshone-Bannock, Umatilla, Yakama, Warm Springs, Colville, Nisqually, and Elwha, to improve migration, water quality, and spawning and rearing habitat in support of treaty obligations. Under the Clean Water Act, the Environmental Protection Agency is authorized to establish water quality standards and to issue permits for the discharge of pollutants from a point source to navigable waters. The act also authorizes EPA to approve the total maximum daily load standards established by states. These standards determine the maximum amount of a pollutant that a water body can receive and still meet water quality standards for specified uses, including for fish and wildlife. The agency participated in the following actions to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin: Participated in developing the All-H Strategy to ensure that Endangered Species Act actions would be coordinated with ongoing and future water quality efforts in the Columbia River Basin. Negotiated an agreement with other federal agencies and the Council on Environmental Quality for the 2000 Federal Columbia River Power System's biological opinion to efficiently integrate ESA and Clean Water Act implementation efforts. Worked closely with the Federal Caucus and the Federal Regional Executive Forums to provide a unified federal voice for Columbia River decisions. Developed a one-dimensional temperature model for the mainstem Columbia and Snake rivers that will provide a critical foundation for future implementation decisions.
Using this model, EPA provided regional Columbia River managers with scientific and technical analysis to assist in critical decisions during the 2001 power emergency. The U.S. Fish and Wildlife Service operates and/or funds fish hatcheries. Funds for hatchery operations provided under the Mitchell Act are intended to mitigate for fish affected by the construction and operation of the Federal Columbia River Power System. FWS also conducts applied research and has responsibilities for other species under the ESA that require coordination with the National Marine Fisheries Service. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin: Operated 12 national fish hatcheries and funded an additional 8 state hatcheries in the Columbia River Basin that produced over 32 million salmon and steelhead in fiscal year 2001. This represented about 50 percent of all salmon and steelhead released from hatcheries above Bonneville Dam. Helped to fund the compilation of research data on the status of Caspian terns at known sites throughout the Pacific Northwest. This study will form a biological basis for future actions concerning Caspian terns and their predation of juvenile salmon and steelhead. Developed a new technique to detect the presence of multiple fish pathogens from a single tissue sample, which will save considerable time and money in testing for fish diseases. As a part of the National Wild Fish Health Survey, surveyed wild salmon and steelhead in the basin to ascertain pathogen levels for disease. In conjunction with the Confederated Tribes of the Umatilla Indian Reservation and the Oregon Department of Fish and Wildlife, transferred about 350,000 spring Chinook salmon from a hatchery to the Umatilla River to increase local returns. Conducted spawning ground surveys and tracked the adult movement and habitat use of fall Chinook and chum salmon below Bonneville Dam. This information was critical for determining dam operations during the 2001 drought. Initiated several fish-marking projects to support tribal efforts targeted at reintroducing hatchery stocks in areas where native stocks have been eliminated. Prepared and released a draft environmental impact statement on a proposal to provide upstream and downstream passage for salmon and steelhead in Icicle Creek. As part of the Washington State Ecosystem Conservation Program, restored and protected 7 miles and 28 acres of riparian habitat, restored 2 miles of in-stream habitat, removed eight barriers to fish migration, and replaced eight culverts with bridges. Provided technical assistance on numerous Federal Energy Regulatory Commission relicensing projects. As part of the Metro Greenspaces Program, completed eight conservation and restoration projects, including the following: developing a strategic plan for a local land conservancy, enhancing 20 acres of riparian area, removing invasive species, and revegetating over 14 acres of land above streams. The U.S. Forest Service manages lands for multiple purposes, including outdoor recreation, range, timber, watershed, and wildlife and fish. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin: Developed a comprehensive Aquatic Conservation Strategy, a foundation for salmon and watershed restoration in 17 Columbia River Basin national forests.
The strategy addressed land allocations, management direction, standards, guidelines, and monitoring designed to protect and restore fish and other aquatic resources. Implementing the strategy required close coordination with other federal agencies; tribal governments; state and local agencies; and a variety of local watershed councils, user groups, and conservation organizations. Improved more than 2,000 miles of stream banks and 9,000 acres of riparian area by using various methods, such as planting and placing logs in the streams to provide deeper pools. Decommissioned over 2,000 miles and stabilized 7,000 miles of road to reduce sedimentation runoff into nearby streams. Improved passage at barrier culverts. Under the Pacific Northwest Streams Initiative, acquired more than 50 miles (38,000 acres) of critical stream and riparian habitat for listed or at-risk fish stocks. Provided training sessions, consistent with those of other federal, state, and local agencies, on fish habitat and watershed inventory, assessment, restoration, and monitoring methodologies; these sessions are open to other agencies and the public. Assisted in the formation of, and provided technical and operational support for, watershed councils and groups in the states of Oregon and Washington. Created, in cooperation with other community partners, a variety of programs that study, inform, and monitor aquatic habitat, including school programs, self-guided interpretive exhibits, festivals, family fishing clinics, and technical assistance that reach over 100,000 people annually. Under the Endangered Species Act, the National Marine Fisheries Service is responsible for preparing a recovery plan and for consulting with other agencies on whether their planned actions will jeopardize listed salmon and steelhead populations. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin. Listed nine populations of salmon and steelhead under the ESA and, pursuant to these and other listings, designated critical habitat for 19 populations and established a structure to conduct the recovery-planning process. Issued a final biological opinion on the operation of the Federal Columbia River Power System, the Corps' juvenile fish transportation program, and 19 BOR projects. Issued or is developing biological opinions for (1) 15 categories of permits issued by the Corps, (2) relicensing the Hells Canyon Complex of nonfederal dams on the Snake River, (3) deepening the Columbia River shipping channel, (4) numerous programmatic actions on several national forests and Bureau of Land Management districts, (5) hatchery operations, and (6) tribal and sport harvest of Columbia River steelhead. In conjunction with the Federal Caucus, developed the All-H Strategy for restoring listed salmon and steelhead. Engaged in extensive public outreach actions, including conducting 17 workshops on the ESA attended by 1,039 individuals, participating in 15 public meetings in five states to obtain comments on salmon recovery, and holding 18 hearings in four states to obtain comments on the draft ESA rules. Helped develop Habitat Conservation Plans, including a plan for 1.7 million acres of private timberlands in Idaho, Montana, and Washington and a plan for public utility districts' operation of several dams on the Columbia River. Developed and tested an Internet-based system so that applicants for Corps permits can track their applications.
Conducted studies and discussed management strategies with other agencies on factors affecting salmon mortality, such as predation by terns, seals, and sea lions; screening of water diversions; and the effects of drought and energy shortages on recovery strategies. The Natural Resources Conservation Service provides individual landowners with technical and financial assistance, conducts surveys, and supports conservation-planning efforts. NRCS's assistance to private landowners has resulted in the following actions being taken to benefit salmon and steelhead in the Columbia River Basin in the past 5 years: Worked with 23,481 private individuals to develop resource management plans for 4,806,614 acres. Assisted with implementing these plans on 2,278,856 acres. Worked with private individuals to create or restore 10,566 acres of wetlands, treat 3,874,276 acres for erosion control, protect 327,902 feet of stream bank, create or improve 27,114 acres of riparian forest buffers, establish 45,732 acres of trees and shrubs, and manage more effectively 1,237,384 acres for grazing, 1,075,351 acres for wildlife habitat, and 186,868 acres of irrigated land. The U.S. Geological Survey provides scientific information to assist other agencies in fulfilling their requirements under several acts, including the Pacific Northwest Electric Power Planning and Conservation Act, the Economy Act, the Clean Water Act, the Northwest Forest Practices Act, and the National Environmental Policy Act. The following examples illustrate actions the agency has taken to meet its obligations and/or to benefit salmon and steelhead in the Columbia River Basin: Sponsored and organized the 11th Annual Smolt Workshop to share information. Prepared an annual report quantifying smolt predation by northern pikeminnows. Prepared an annual report comparing the experimental success of the progeny of hatchery and wild salmon in natural and hatchery environments. Prepared journal articles and reports on topics such as increased mortality of juvenile salmon, dietary and consumption patterns of juvenile salmon and steelhead, temperature-related movements of fall Chinook for 1998-99, identification of rearing habitats, and heavy metals present in foods of juvenile Chinook salmon and their potential effects. Estimated systemwide effects of mortality from predation. Evaluated the large-scale predator removal project. Developed data sets describing hatchery-rearing conditions, environmental factors, and migration performance for various hatcheries. Developed methods to detect bacterial and viral diseases in juvenile hatchery salmon. Issued a progress report on the use of estuarine habitats by juvenile salmon. Developed nonintrusive genetic markers for recognizing gender and stock in spring- and fall-run Chinook. Conducted a week-long lecture and laboratory course in fish virology for Department of the Interior resource managers. Prepared a handbook for fish hatchery managers on chemical contaminants in hatchery food and pathological symptoms. This appendix shows adult salmon and steelhead returns to the Columbia River Basin for the past 25 years as counted at two dams. Bonneville Dam is the first dam the adults must pass on the Columbia River, and Lower Granite Dam is the last dam they must pass on the Snake River before they can migrate into Idaho. The following are GAO's comments on the Bonneville Power Administration's letter dated June 10, 2002. 1.
Bonneville commented that the report does not fully reflect its role in funding salmon- and steelhead-recovery efforts. For example, Bonneville stated that the report does not explain that it reimburses the U.S. Treasury for most of the expenditures for capital improvements at the Corps' and BOR's hydroelectric projects as well as operation and maintenance costs at these projects and at FWS's Lower Snake River Compensation Plan hatcheries. We agree that Bonneville is a major supplier of salmon- and steelhead-recovery moneys, and clarifications were made in the report to reflect its role. However, we were not asked to provide information on the source of funds for salmon- and steelhead-recovery efforts but rather how much the agencies expended on such efforts. Therefore, the report reflects the funds Bonneville is referring to as expenditures by other federal agencies, such as the Corps, BOR, and FWS. 2. Bonneville also commented that the report does not fully describe that the funds it provides other agencies with are from ratepayer receipts and that, as a result, much of the salmon- and steelhead-recovery expenditures shown in the report are paid for by those who buy the electric power the dams generate. While the report notes that ratepayer receipts fund these expenditures, we have added details on the source of the funds Bonneville uses to cover agencies' expenditures and on how Bonneville reimburses the U.S. Treasury for agencies' expenditures for capital and operation and maintenance costs. 3. Bonneville expressed concern that we did not include the cost of replacement power and lost power revenues in our expenditure totals. We did not include these costs because they do not reflect expenditures for actual recovery actions and because they are difficult to derive, since replacement power and lost revenues could result from other management decisions that are not related to salmon and steelhead recovery. The following are GAO's comments on the letter dated July 2, 2002, from the Department of Commerce's National Oceanic and Atmospheric Administration (NOAA). The National Marine Fisheries Service, the lead federal agency responsible for salmon and steelhead recovery in the Columbia River Basin, is an agency of NOAA. 1. We agree that there are many studies and documents that discuss various recovery actions and their effect on the survival rates of salmon and steelhead. However, these studies and documents generally do not quantify the effect. At best, they estimate or approximate the effect of recovery efforts. For example, the Williams, Smith, and Muir article, cited in NOAA's comments, estimates the effect of engineering efforts on the survival rate of juvenile salmon and steelhead moving past the dams but does not quantify how many of these juveniles return as adults. The number of returning adults is important because other studies have shown that using bypass facilities increases salmon and steelhead mortality downstream. Hence, our point that there is little evidence to quantify the effects of recovery efforts on the number of returning salmon and steelhead is valid. We did, however, revise the report to include information on the estimated increased survival rates of salmon and steelhead passage at the dams. 2. The report recognizes that NMFS and others are developing and documenting recovery efforts. However, until these efforts are completed and their results quantified, the full extent of recovery efforts will not be known.
In addition, Jerry Aiken, Jill Berman, Jonathan Dent, Jaelith Hall-Rivera, Jonathan McMurray, and John Kalmar, Jr., made key contributions to this report.
Before 1850, an estimated 16 million salmon and steelhead returned to the Columbia River Basin annually to spawn. Over the past 25 years, the number of salmon and steelhead returning to the Columbia River Basin has averaged only 660,000 per year, although annual population levels have varied widely. Factors such as over-harvesting, construction and operation of dams, degradation of spawning habitat, increased human population, and unfavorable weather and ocean conditions have contributed to the long-term decline. The population decline has resulted in the listing of 12 salmon and steelhead populations in the basin as threatened or endangered under the Endangered Species Act. Once a species is listed as threatened or endangered, the act requires that efforts be taken to promote its recovery. Eleven federal agencies are involved with salmon and steelhead recovery efforts in the Columbia River Basin. The National Marine Fisheries Service (NMFS), as the lead agency, is responsible for preparing a recovery plan and consulting with the other federal agencies on their planned actions. The 11 federal agencies estimated expenditures of $1.8 billion from fiscal year 1982 through fiscal year 1996 and $1.5 billion from fiscal year 1997 through fiscal year 2001 on efforts specifically designed to recover Columbia River Basin salmon and steelhead. In addition to the $1.5 billion, the 11 federal agencies estimated that they expended $302 million in the last five fiscal years on modifications to mission-related projects that benefited, but were not specifically directed at, salmon and steelhead, such as erosion control to improve crop productivity and wildlife habitat, which also improves stream flows and reduces sedimentation in spawning habitat. Although federal agencies have undertaken many types of recovery actions, there is little conclusive evidence to quantify the effect of their efforts on returning fish populations. Recovery actions taken include projects, such as constructing fish passage facilities at dams; research studies, such as determining the presence or absence of toxic substances that cause diseases in fish; monitoring actions, such as surveying spawning grounds; and other activities, such as consultations required by the act.
The National Flood Insurance Act of 1968 established NFIP as an alternative to providing direct disaster relief after floods. NFIP, which makes federally backed flood insurance available to residential property owners and businesses, was intended to reduce the government's escalating costs for repairing flood damage. Floods are the most common and destructive natural disaster in the United States; however, homeowners' insurance generally excludes flood damage. Because of the catastrophic nature of flooding and the inability to adequately predict flood risks, private insurance companies historically have been largely unwilling to underwrite and bear the risk resulting from providing primary flood insurance coverage. Under NFIP, the federal government assumes the liability for the insurance coverage and sets rates and coverage limitations, while the private insurance industry sells the policies and administers the claims. NFIP offers two types of flood insurance premiums to property owners who live in participating communities: subsidized and full-risk. The National Flood Insurance Act of 1968 authorized NFIP to offer subsidized premiums to owners of certain properties. These subsidized rates are not based on flood risk and, according to FEMA, represent only about 40-45 percent of the full flood risk. Congress originally mandated the use of subsidized premiums to encourage communities to join the program and to mitigate concerns that charging rates that fully and accurately reflected flood risk would be burdensome to some property owners. Even with highly discounted rates, subsidized premiums are, on average, higher than full-risk premiums. The premiums are higher because subsidized structures built before Flood Insurance Rate Maps (FIRM) became available generally are more prone to flooding (that is, riskier) than other structures. In general, pre-FIRM properties were not constructed according to the program's building standards or were built without regard to base flood elevation—the level relative to mean sea level at which there is a 1 percent or greater chance of flooding in a given year. Potential policyholders can purchase flood insurance to cover both buildings and contents for residential and commercial properties. NFIP's maximum coverage for residential policyholders is $250,000 for building property and $100,000 for contents. This coverage includes replacement value of the building and its foundation, electrical and plumbing systems, central air and heating, furnaces and water heater, and equipment considered part of the overall structure of the building. Personal property coverage includes clothing, furniture, and portable electronic equipment. For commercial policyholders, the maximum coverage is $500,000 per unit for buildings and $500,000 for contents (for items similar to those covered under residential policies). NFIP largely has relied on the private insurance industry to sell and service policies. In 1983, FEMA established the Write-Your-Own (WYO) program. Private insurers become WYOs by entering into an arrangement with FEMA to issue flood policies in their own name. WYOs adjust, settle, pay, and defend flood claims but assume no flood risk. Insurance agents from these companies are the main point of contact for most policyholders. WYOs issue policies, collect premiums, deduct an allowance for commission and operating expenses from the premiums, and remit the balance to NFIP.
In most cases, insurance companies hire subcontractors—flood insurance vendors—to conduct some or all of the day-to-day processing and management of flood insurance policies. When flood losses occur, policyholders report them to their insurance agents, who notify the WYOs. The companies review the claims and process approved claims for payment. FEMA reimburses the WYOs for the amount of the claims plus expenses for adjusting and processing the claims, using rates that FEMA establishes. As of September 2012, about 85 WYOs accounted for about 85 percent of the more than 5.5 million policies in force. NFIP was added to GAO's High-Risk List in 2006 due to losses from the 2005 hurricanes and the financial exposure the program created for the federal government. Until 2004, NFIP was able to cover most of its claims with premiums it collected and occasional loans from the U.S. Treasury (Treasury) that it repaid. However, after the 2005 hurricanes—primarily Hurricane Katrina—the program borrowed $16.8 billion from Treasury to cover the unprecedented number of claims. In prior work, we found that NFIP, as it was then structured, was not likely to generate sufficient revenues to repay this amount. NFIP since has received additional borrowing authority in the amount of $9.7 billion to cover claims for Superstorm Sandy. As of July 31, 2013, the program owed Treasury approximately $24 billion. NFIP's financial condition highlights structural weaknesses in program funding—primarily its rate structure. By design, NFIP does not operate for profit. Instead, the program must meet a public policy goal—to provide flood insurance in flood-prone areas to property owners who otherwise would not be able to obtain it. NFIP generally is expected to cover its claim payments and operating expenses with the premiums it collects. However, subsidized policies have been a financial burden on the program because of their relatively high losses and premium rates that are not actuarially based. As discussed previously, subsidized policies are associated with structures more prone to flood damage (either because of the way they were built or their location). As a result, the annual amount that NFIP collects in both full-risk and subsidized premiums is generally not enough to cover its operating costs, claim payments, and principal and interest payments to Treasury, especially in years of catastrophic flooding. This arrangement results in much of the financial risk of flooding being transferred to the federal government and ultimately the taxpayer. The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) addressed some of the structural challenges that have contributed to the program's financial instability. For example, new flood insurance policies will not receive subsidized premium rates, subsidies on existing policies for many other properties will be phased out, and policies for properties that are remapped to a higher risk level will be subject to higher premium rates. In addition, the Biggert-Waters Act requires FEMA to implement other changes to its rate-setting process, including building a reserve fund and updating maps used to set rates to reflect relevant information on topography, long-term erosion of shorelines, future changes in sea levels, and the intensity of hurricanes. While these changes may help increase NFIP's long-term financial stability, the program still faces challenges in implementing the changes, and their ultimate effect is not yet known.
Furthermore, weaknesses in NFIP management and operations, including financial reporting processes and internal controls, strategic and human capital planning, and oversight of contractors, also have placed the program at risk. For example, in 2011 we found that FEMA had not developed goals, objectives, or performance measures for NFIP. In addition, FEMA faces challenges modernizing NFIP's insurance policy and claims management system. As a result, we made recommendations to improve the effectiveness of FEMA's planning and oversight efforts for NFIP; improve FEMA's policies and procedures for achieving NFIP's goals; and increase the usefulness and reliability of NFIP's flood insurance policy and claims processing system. While FEMA agreed with our recommendations and has taken some steps to address them, continued attention to these issues is vital and additional steps are needed to address the concerns we have identified in the past. The Biggert-Waters Act mandates that GAO conduct a number of studies related to actual and potential changes to NFIP, including analyses of remaining subsidized properties and the effect of increasing coverage limits or adding coverage options. In one of our studies responding to these mandates, an analysis of remaining subsidized properties, we estimated that, with the changes in the Biggert-Waters Act, approximately 438,000 policies are no longer eligible for subsidies, including about 345,000 nonprimary residential policies, about 87,000 business policies, and about 9,000 single-family, severe-repetitive-loss policies. Subsidies on most of the approximately 715,000 remaining subsidized policies are expected to be eliminated over time. Under the act, most remaining subsidized policies no longer would be eligible for subsidies if NFIP coverage lapsed or the properties were sold or substantially damaged. We estimated that, with implementation of the provisions addressing sales and coverage lapses, the number of subsidized policies could decline by almost 14 percent per year. At that rate, the number of subsidized policies would be reduced by 50 percent in approximately 5 years; after about 14 years, fewer than 100,000 subsidized policies would remain (see the sketch following this paragraph). However, the actual outcomes and time required for subsidies to be reduced could vary depending on the behavior of policyholders and the actual rate of sales and coverage lapses. In terms of characteristics, we found that the geographic distribution of remaining subsidized policies was similar to the distribution of all NFIP policies. Other characteristics we analyzed (indicators of home value and owner income) were different for the policies that continue to be eligible for subsidized premium rates compared with those with full-risk rates. In particular, counties with higher home values and income levels tended to have larger percentages of remaining subsidized policies than policies with full-risk rates. In our July 2013 report on subsidized policies, we identified three broad options that could help address the financial impact of remaining subsidized policies on the program, but the advantages and disadvantages of each would need to be considered and action would be required from both Congress and FEMA. These options are not mutually exclusive and may be used together to reduce the financial impact of subsidized policies on NFIP. The way in which an option is implemented (such as more aggressively or gradually) also can produce different effects in terms of policy goals and thus change the advantages and disadvantages.
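As a rough check on the attrition estimate above, the short sketch below applies a constant 14 percent annual decline to the roughly 715,000 remaining subsidized policies. The constant rate is an illustrative simplification, not GAO's model; as the report notes, actual declines would depend on policyholder behavior and the actual rate of sales and coverage lapses.

```python
# Illustrative check of the subsidy attrition estimate: a constant
# 14 percent annual decline applied to ~715,000 subsidized policies.
# The constant rate is a simplifying assumption for illustration only.

policies = 715_000        # approximate remaining subsidized policies
annual_decline = 0.14     # estimated yearly decline from sales and lapses

remaining = float(policies)
for year in range(1, 15):
    remaining *= 1 - annual_decline
    if year in (5, 14):
        share = remaining / policies
        print(f"Year {year:>2}: ~{remaining:,.0f} policies ({share:.0%} remaining)")

# Output:
# Year  5: ~336,355 policies (47% remaining)  -> roughly a 50 percent reduction
# Year 14: ~86,554 policies (12% remaining)   -> fewer than 100,000 remain
```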
Adjust the pace of eliminating subsidies. Accelerating the elimination of subsidies could improve NFIP's financial stability by more quickly increasing the number of policies with premium rates that more accurately reflect the full risk of flooding, but could exacerbate the difficulty some policyholders may have in adjusting to new rates. In contrast, delaying the elimination of subsidized policies or lengthening the phase-in period would continue to expose the federal government to increased financial risk over a longer time. Moreover, delaying the elimination of subsidies would not represent a long-term fix for those policyholders who could not afford the new premium rates, whenever they came into effect. Target assistance for remaining subsidies. Assistance or a subsidy could be based on the financial need of the property owners, which could help ensure that only those policyholders needing the subsidy would have access to it and retain their coverage, with the rest paying full-risk rates. Targeting subsidies based on need—through a means test, for example—is an approach other federal programs use. However, NFIP does not currently collect the policyholder data required to assess need and determine eligibility, and it could be difficult for FEMA to develop and administer such an assistance program in the midst of ongoing management challenges. Moreover, unlike other agencies that provide—and are allocated funds for—traditional subsidies, NFIP does not receive an appropriation to pay for shortfalls in collected premiums caused by its subsidized rates. One approach to maintain subsidies but improve NFIP's financial stability would be to rate all policies at the full-risk rate and appropriate subsidies for eligible policyholders. Expand mitigation efforts such as elevation, relocation, and demolition of properties. This would include making mitigation mandatory to ensure that more homes were better protected. Mitigation efforts could be used to help reduce or eliminate the long-term risk of flood damage, especially if FEMA targeted the properties that were most costly to the program, such as those with repetitive losses. However, mitigation is expensive for NFIP, taxpayers, and communities. In our October 2008 study of NFIP's rate setting, we found that the losses generated by NFIP have created substantial financial exposure for the federal government and U.S. taxpayers—due in part to the program's rate-setting process. We also found that FEMA's rate-setting methods, even for full-risk rates, do not result in rates that accurately reflect flood risks. For example, FEMA's rate-setting process does not fully take into account ongoing and planned development, long-term trends in erosion, or the effects of global climate change. Furthermore, FEMA sets rates on a nationwide basis, combining and averaging many topographic factors that are relevant to flood risks, and does not specifically account for these factors when setting rates for individual properties. Partly because of the rate-setting issues, in our July 2013 report on raising coverage limits or adding optional coverage types, we found that the advantages and disadvantages of making more changes to the program, such as these, would need to be carefully weighed. To determine the financial impact on NFIP of increasing coverage limits, we estimated the potential financial effect if coverage limits had been raised in 2002–2011.
Higher coverage limits would have been associated with increased net revenue in all fiscal years from 2002 through 2011, except for fiscal years 2004 and 2005, when the program experienced catastrophic losses. The overall results were the same when we conducted the analyses using variations in our assumptions to (1) decrease the premiums by 20 percent below the baseline estimate; (2) decrease the claims by 20 percent below the baseline estimate; and (3) estimate that only 25 percent, 50 percent, or 75 percent of all policyholders increased their coverage. Overall, the financial impact on the program of raising coverage limits would depend on the adequacy of the rates charged for the additional coverage. We also found that adding business interruption coverage to NFIP could be particularly challenging. For example, pricing the risk properly, underwriting, and processing claims can be complex. Similarly, offering optional coverage for additional living expenses would have many of the same potential effects on NFIP, although this coverage generally is less complex to administer. In July 2013, we reported that FEMA will require several years to fully implement the Biggert-Waters Act, and FEMA officials acknowledged that they have data limitations and other challenges to resolve before eliminating some subsidies as required in the act. The following points highlight some of the challenges we identified: The act eliminated subsidies for residential policies that covered nonprimary residences and for business policies. FEMA has data on whether a policy covers a primary residence, but officials stated that the data may be outdated or incorrect. In addition, FEMA categorizes policies as residential and nonresidential rather than residential and business. As a result, FEMA does not have the information to identify nonresidential properties, such as schools or churches, that are not businesses and continue to be eligible for a subsidy. Beginning in October 2013, FEMA will require applicants for new policies and renewals to provide property status (residential or business). The act states that subsidies will be eliminated for policies that have received cumulative payment amounts for flood-related damage that equaled or exceeded the fair market value of the properties, and for policies that experience damage exceeding 50 percent of the fair market value of properties after enactment. Currently, FEMA is unable to make this determination because it does not maintain data on the fair market value of properties insured by subsidized policies. FEMA officials said that they have been in the process of identifying a data source. The act eliminates subsidies for severe repetitive loss policies and provides a definition of severe repetitive loss for single-family homes. However, it requires FEMA to define severe repetitive loss for multifamily properties, and FEMA has not yet developed this definition. The act also requires FEMA to phase in full-risk rates on active policies that no longer are eligible for subsidies, but we found that FEMA generally lacks information needed to establish full-risk rates that reflect flood risk for the properties involved and also lacks a plan for proactively obtaining such information. Federal internal control standards state that agencies should identify and analyze risks associated with achieving program objectives and use this information as a basis for developing a plan for mitigating the risks.
In addition, these standards state that agencies should identify and obtain relevant and needed data to be able to meet program goals. However, in July 2013 we reported that FEMA does not have key information used in determining full-risk rates from all policyholders. According to FEMA officials, not all policyholders have elevation certificates, which document their property's risk of flooding. Information about elevation is a key element in establishing premium rates on certain properties. Elevation certificates are required for some properties but optional for others. According to FEMA officials, consistent with the act, they are phasing in rate increases (of 25 percent per year) for policyholders who no longer are eligible for subsidies. The increase will continue until the rates reach a specific level or until policyholders supply an elevation certificate that indicates the property's risk, allowing FEMA to determine the full-risk rate. Although subsidized policies have been identified as a risk to the program because of the financial drain they represent, FEMA does not have a plan to expeditiously and proactively obtain the information needed to set full-risk rates for all of them. Instead, FEMA will rely on certain policyholders to voluntarily obtain elevation certificates, which can be expensive for the property owner. Those at lower risk levels have an incentive to do so because they may then be eligible for lower rates. However, policyholders may not know their risk level, and policyholders with higher risk levels have a disincentive to voluntarily obtain an elevation certificate because they then could pay a higher premium. In our July 2013 report, we concluded that without a plan to expeditiously obtain property-level elevation information, FEMA will continue to lack basic information needed to accurately determine flood risk and will continue to base full-risk rate increases for previously subsidized policies on limited estimates. As a result, FEMA's phased-in rates for previously subsidized policies still may not reflect a property's full risk of flooding, with some policyholders paying premiums that are below and others paying premiums that exceed full-risk rates. We recommended that FEMA develop and implement a plan, including a timeline, to obtain needed elevation information as soon as practicable. FEMA agreed with this recommendation and plans to evaluate the appropriate approach to obtain or require the submittal of this information. The Biggert-Waters Act also requires a number of other changes that the agency has been starting to implement. For example, FEMA must adjust rates to accurately reflect the current risk of flood to properties when an area's flood map is changed, subject to any other statutory provision in chapter 50 of Title 42 of the United States Code (42 U.S.C. § 4015(e)), with the resulting rate increases phased in, at a rate the agency deems appropriate, over a number of years beginning October 1, 2013. As of July 2013, FEMA was determining how this provision would affect properties exempted from rate increases when they were remapped. We continue to monitor the status of FEMA's actions related to recommendations we have made in prior reports. In 2008, we recommended that FEMA develop a rate-setting methodology that uses data that result in full-risk premiums that accurately reflect the risk of losses from flooding and take into account the effects of long-term planned and ongoing development, including climate change.
In response to our continued support of this recommendation (GAO-09-12) as well as requirements in the Biggert-Waters Act, FEMA officials stated that they have made progress. For example, FEMA stated that it already has revised damage calculations for flooding events that only reach the foundation of the structure and has performed a study to assess the long-term impacts of climate change. FEMA's ongoing efforts include analyzing water-depth probability curves for the various zones and piloting studies to determine structure elevation and flood depths for various return periods. In an effort to update the payment formulas for WYOs, as we recommended, FEMA has begun obtaining the flood insurance expense information that WYOs report to the National Association of Insurance Commissioners (NAIC) and conducting other analyses to ensure that WYOs accurately report this information. However, FEMA officials stated that the agency cannot take action that completely addresses our recommendations until the WYOs reliably report to NAIC and that it might take several years before all companies consistently report such information. The agency also has been considering how best to introduce the WYOs' actual flood-related expenses into payment formulas over the next several years, when FEMA expects to have more reliable financial information and less variation in reported expense ratios. In 2011, we recommended that FEMA improve strategic planning, performance management, and program oversight within and related to NFIP. FEMA agreed with our recommendations and has addressed some of them, such as strategic planning, but it still needs to address the management and operational weaknesses we identified, including human capital planning, acquisition management, policy and claims management systems, financial management, collaboration, and records management. Unless these management issues are addressed, FEMA risks ongoing challenges in effectively and efficiently managing NFIP, including its management and use of data and technology. In conclusion, when we placed NFIP on the high-risk list in 2006, we noted that comprehensive reform likely would be needed to address the financial challenges facing the program. Since passage of the Biggert-Waters Act, FEMA is taking some important first steps toward implementing the reforms the act requires, but the extent to which the changes included in the act and FEMA's implementation will reduce the financial exposure created by the program is not clear, and the program's long-term financial condition is not yet assured. In addition, our previous work has identified many of the actions that FEMA should take to address a number of ongoing challenges in managing and administering the program. Getting NFIP on a sound footing, both financially and operationally, is important to achieving its goals and at the same time reducing its burden on the taxpayer. Chairman Merkley, Ranking Member Heller, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other staff who made key contributions to this testimony include Jill Naamane and Patrick Ward (Assistant Directors); Isidro Gomez; Karen Jarzynka-Hernandez; Barbara Roesmann; Rhonda Rose; and Jessica Sandler.
NFIP, established in 1968, provides policyholders with insurance coverage for flood damage. FEMA, within the Department of Homeland Security, is responsible for managing the program. NFIP offers two types of flood insurance premiums to property owners: subsidized and full-risk. The subsidized rates are not based on flood risk and, according to FEMA, represent only about 40-45 percent of the full flood risk. GAO placed NFIP on its high-risk list in 2006 because of concerns about its long-term solvency and related operational issues. GAO was asked to testify about NFIP issues and its recent work on NFIP. This statement discusses (1) the reasons that NFIP is considered high-risk, (2) changes to subsidized policies and implications of potential additional program changes, and (3) additional challenges for FEMA to address. In preparing this statement, GAO relied on its past work on NFIP, including GAO-13-607, GAO-13-568, and GAO-13-283. The National Flood Insurance Program (NFIP) was added to GAO's high-risk list in 2006 and remains high risk because of losses from the 2005 hurricanes and subsequent events, the financial exposure the program represents for the federal government, and ongoing management and operational challenges. As of July 31, 2013, the program owed approximately $24 billion to the U.S. Treasury (Treasury). NFIP's financial condition highlights structural weaknesses in how the program has been funded—primarily its rate structure. The annual amount that NFIP collects in both full-risk and subsidized premiums is generally not enough to cover its operating costs, claim payments, and principal and interest payments for the debt owed to Treasury, especially in years of catastrophic flooding, such as 2005. This arrangement results in much of the financial risk of flooding being transferred to the federal government and ultimately the taxpayer. Furthermore, weaknesses in NFIP management and operations, including financial reporting processes and internal controls, strategic and human capital planning, and oversight of contractors, have placed the program at risk. The Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act) mandated that GAO conduct a number of studies related to actual and potential changes to NFIP, including analyses of remaining subsidies and the effect of increasing coverage limits or adding coverage options. In a study of remaining subsidies, GAO estimated that, with the changes in the Biggert-Waters Act, approximately 438,000 policies no longer are eligible for subsidies, including about 345,000 policies for nonprimary residences, about 87,000 business policies, and about 9,000 policies for single-family properties that had severe repetitive losses. Subsidies on most of the approximately 715,000 remaining subsidized policies are expected to be eliminated over time as properties are sold or coverage lapses, as are previous exemptions from rate increases after flood zone map revisions. Reducing the financial impact of remaining subsidized policies on NFIP generally could involve accelerating the elimination of subsidies, targeting assistance for subsidies, expanding mitigation efforts, or some combination of these approaches. Each approach has advantages and disadvantages. In its 2008 study of rate setting, GAO noted that the losses generated by NFIP have created substantial financial exposure for the federal government and U.S. taxpayers—due in part to its rate-setting process.
Partly because of these rate-setting issues, GAO concluded in a July 2013 report that the advantages and disadvantages of additional changes to the program, such as raising coverage limits or adding optional coverage types, would need to be carefully weighed. The Federal Emergency Management Agency (FEMA) will require several years to fully implement the Biggert-Waters Act. FEMA officials acknowledged that they have challenges to resolve. These include updating and correcting information on whether a policy is for a primary or secondary residence, determining the fair market value of insured properties, and developing a definition of severe repetitive loss for multifamily properties. Further, FEMA must establish full-risk rates that reflect flood risk for active policies that are no longer eligible for subsidies, but it does not yet have a plan for doing so. In an effort to update payment formulas to insurance companies, as GAO recommended, FEMA has begun receiving actual flood-related information from some insurance companies, but not all companies are reporting the information consistently. GAO continues to support its previous recommendations to FEMA, which focus on the need to address management and operational challenges, ensure that the methods and data used to set NFIP rates accurately reflect the risk of losses from flooding, and strengthen oversight of NFIP and the insurance companies responsible for selling and servicing flood policies. FEMA agreed with these recommendations and is taking steps to address them.
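To make the premium arithmetic above concrete, here is a minimal sketch in Python of backing a full-risk premium out of a subsidized one, using FEMA's estimate that subsidized rates capture roughly 40-45 percent of full flood risk. The function and the $1,000 premium are hypothetical illustrations, not FEMA's actual rating method:

    # Illustrative only: the arithmetic behind FEMA's estimate that
    # subsidized premiums represent about 40-45 percent of full flood risk.
    def implied_full_risk_premium(subsidized_premium, subsidy_share):
        """Back out a full-risk premium from a subsidized one.

        subsidy_share is the fraction of the full-risk amount the
        subsidized rate covers (FEMA's estimate: roughly 0.40-0.45).
        """
        return subsidized_premium / subsidy_share

    # A hypothetical $1,000 subsidized premium implies a full-risk
    # premium of roughly $2,222-$2,500.
    for share in (0.40, 0.45):
        print(round(implied_full_risk_premium(1000, share)))

Read this way, each subsidized policy leaves more than half of its expected flood losses to be absorbed elsewhere, which is the structural shortfall behind the high-risk designation.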
Several mechanisms to facilitate coordination among FAA and partner agencies, including interagency committees, advisory boards, and working groups, are in place. First, the Senior Policy Committee, as the interagency governing body for NextGen, is meant to facilitate coordination and planning on NextGen across federal agencies. Chaired by the Secretary of Transportation, the Senior Policy Committee includes senior representatives from the NextGen partner agencies. Among its key activities, this committee works to provide policy guidance, resolve major policy issues, and identify and align resource needs. FAA and other partner agency officials indicated that the Senior Policy Committee has met infrequently; it held its first full committee meeting under the new Administration in September 2009. According to the JPDO Director, JPDO is working closely with the Senior Policy Committee to establish a process for the committee to operate more effectively by providing it with the ability to review interagency dependencies, such as FAA’s reliance on NASA research; develop a NextGen road map; and establish a set of high-level milestones—which it currently does not have—as well as conduct oversight of NextGen progress. In addition to the Senior Policy Committee, several other interagency coordination mechanisms are in place to facilitate coordination among FAA and partner agencies, many of which are within JPDO. These include the JPDO Board and the JPDO Division Directors Group, each of which is composed of representatives from the partner agencies and FAA. The JPDO Board functions as an adjunct to the Senior Policy Committee, and its representatives work on actionable outcomes related to NextGen. The Division Directors Group is responsible for planning and managing NextGen. JPDO also has organized nine working groups composed of representatives from federal agencies and industry stakeholders to specialize in developing NextGen’s key capabilities, along with recommendations and action plans to be integrated into NextGen planning. Continued industry participation in JPDO working groups, which is provided pro bono, is a challenge given the current business climate and companies’ participation in numerous aviation forums. FAA and NASA also participate on four JPDO research transition teams that have been established to ensure that research and development needed for NextGen implementation is identified, conducted, and effectively transitioned to the implementing agency. In previous work, we discussed the formation of these teams but noted that, as they had just been established, their potential effectiveness was unclear. In that work we also identified key challenges in coordinating research, including gaps in funding for needed research and prioritization of research needs. According to the former Director of JPDO and NASA officials, the teams have been useful vehicles for identifying research needs and potential gaps; however, some teams are further along than others in terms of interagency involvement and deliverables. Although other agencies do not currently participate on these research transition teams, NASA officials reported that the structure could provide a model for future coordination across agencies. Other arenas where interagency coordination can take place also exist.
For example, the NextGen Management Board, which will be chaired by FAA’s newly appointed Deputy Administrator and has representatives from all key FAA lines of business, addresses interagency collaboration on key issues such as maintaining the integrity of information shared through NextGen systems. A liaison from DOD sits on the NextGen Management Board. Our past work identified several leadership and organizational challenges in ensuring coordination across partner agencies. First, we have reported that while JPDO has been in place for several years, the office has experienced a high rate of leadership turnover. In 2010, a new JPDO Director was appointed, the office’s fourth Director in its 7 years of existence. The lack of stable leadership has made it a challenge for JPDO to move forward on many goals and objectives. Second, in March 2009, we reported that changes to JPDO’s organizational position placing it within the Air Traffic Organization (ATO) could be an impediment to partner agency coordination, because they created ambiguity about JPDO’s role and lowered JPDO’s status in the eyes of stakeholders. Moreover, the creation of a staff to support the Senior Policy Committee, resulting from a November 2008 Executive Order, caused further confusion regarding roles and responsibilities relative to federal partner agencies. Third, with the ATO focused on implementing capabilities through the midterm, JPDO’s role was shifted to a focus on the long term beyond 2018. According to stakeholders and partner agency officials we interviewed for this work, given JPDO’s long-term focus, it has largely not been involved in ATO’s current near- and midterm activities, despite being placed organizationally within ATO. As a result, participation by the partner agencies in those activities is also limited. Agency officials stated that it is important for JPDO to be involved in near- and midterm activities as well as long-term planning to ensure that effective interagency coordination on NextGen is in place. Recent changes in the leadership and organizational position of JPDO are likely to change the nature of the relationship among JPDO, FAA, and the partner agencies and hold promise for increased coordination. JPDO has been elevated from its previous position within ATO and is now situated within FAA but outside of ATO, as illustrated in figure 1. The JPDO Director now reports directly to the FAA Deputy Administrator, who heads the NextGen Management Board; the Director also serves as Senior Advisor to the Secretary of Transportation. JPDO is also more closely aligned with the Senior Policy Committee and is in a position to take a more active role with it. This new structure removes the reporting relationship between JPDO and the Chief Operating Officer of ATO and gives JPDO more visibility within the organization and with federal partners and other stakeholders. With these organizational moves, JPDO is expected to become a better conduit for monitoring cross-agency budgets and facilitating cross-agency collaborations and long-term research planning. Moreover, many of the key mechanisms for agency coordination, such as the research transition teams, reside within JPDO and are likely to be affected by the move. According to the new Director of JPDO, a key step in improving coordination with partner agencies will be to determine what value they see in the work produced by JPDO. As these changes have just recently occurred, it remains to be seen whether they will result in better coordination across the partner agencies.
In addition to these leadership and structural issues, stakeholders and representatives of the partner agencies identified other broad challenges that affect the extent to which some partner agencies have coordinated with others. These challenges include (1) limited funding and staffing to dedicate to NextGen activities, (2) competing mission priorities, and (3) undefined near-term roles and responsibilities of some partner agencies. Limited funding and staffing to dedicate to NextGen activities. Industry stakeholders and agency officials we spoke to stated that some partner agencies’ ability to coordinate with other agencies was affected by the levels of funding and staff that could be dedicated to NextGen activities. Officials at some partner agencies we spoke with stated that partner agencies allocated little or no budgetary funding specifically for NextGen activities and because of competing priorities for funds, they were limited in the resources they could dedicate to NextGen planning and coordination efforts. With respect to future investments, according to JPDO and DOT data, in fiscal year 2011, among NextGen partner agencies, three—FAA, NASA, and the Department of Commerce’s NOAA—requested some funding for NextGen activities. DOD and DHS did not request funding in their budgets specifically for NextGen activities. OSTP is working with the Office of Management and Budget to improve agency alignment and identification of NextGen-related budgets. Differences in agency mission. Differences among agencies’ mission priorities, particularly DHS’s and DOD’s, also pose a challenge to coordination efforts. DHS’s diverse set of mission priorities, ranging from aviation security to border protection, affects its level of involvement in NextGen activities. For example, events such as the 2009 Christmas Day terrorism attempt can shift DHS priorities quickly and move the agency away from focusing on issues such as NextGen, which are not as critical at that particular time. Agency officials also stated that although different departments within DHS are involved in related NextGen activities, such as security issues, the fact that NextGen implementation is not a formalized mission in DHS can affect DHS’s level of participation in NextGen activities. Industry stakeholders told us that there are potential consequences if DHS is not involved in long-term NextGen planning, including potentially marginalizing DHS’s NextGen areas, such as aviation security. Industry stakeholders reported that FAA could more effectively engage partner agencies in long-term planning by aligning implementation activities to agency mission priorities and by obtaining agency buy-in for actions required to transform the national airspace system. Undefined near-term roles and responsibilities of partner agencies. Some stakeholders and agency officials told us that FAA could do more to clearly define each partner agency’s role in key planning documents that guide NextGen implementation efforts, particularly in the near term. Our work has shown that coordinating agencies should work together to define and agree on their respective roles and responsibilities, including how the coordination effort will be led. 
We reported in 2008 that a key intended purpose of these planning documents, according to JPDO officials, is to provide the means for coordinating among the partner agencies and to identify each agency’s role in implementing NextGen capabilities, but that stakeholders said that the planning documents did not provide guidance for their organizational decision making. Some stakeholders and agency officials we spoke to more recently told us that the NextGen Implementation Plan, which identifies near- and midterm implementation efforts, still does not specify how partner agencies will be involved or what outcomes are required from them. Another industry stakeholder explained that if partner agencies do not see their roles reflected in key planning documents, projects that depend on interagency coordination will not be fully integrated across all partner agencies. One area in particular where coordination is important relates to how FAA, DOD, and DHS information networks will share information in the future to allow for a shared awareness of the national airspace. Information sharing across agencies is necessary for advanced capabilities such as optimizing the use of certain airspace by the diverse set of users under the auspices of these agencies (e.g., military aircraft, commercial aircraft, general aviation, and unmanned aerial vehicles). Protocols and requirements for interagency information sharing have yet to be determined. Limited agency participation in near-term coordination efforts, including establishing protocols on information sharing across agencies, could hamper coordination over the long term. Both the House and Senate FAA reauthorization bills include provisions for improving coordination among partner agencies that could address, in part, some of the challenges identified by industry stakeholders and agency officials. Some of the related provisions in the bills call for, among other things, revised memorandums of understanding with partner agencies that describe the respective responsibilities of each agency, including budgetary commitments. Stakeholders we spoke to cited challenges with coordinating the implementation of NextGen capabilities across FAA lines of business. With multiple FAA lines of business responsible for various NextGen activities, including offices within ATO and outside ATO, coordination and integration are vital, since delays in actions required from several offices could prevent or delay full realization of NextGen benefits. Shifting from an organization and culture focused on system acquisition to one focused on integration and coordination will be an ongoing challenge for FAA. Recent organizational changes may help address these issues, but it is too early to measure the success of these efforts. As previously discussed and as shown in figure 1, changes that move JPDO out of the ATO and create a direct reporting relationship to the FAA Deputy Administrator solidify the FAA Deputy Administrator as the key executive in charge of NextGen. The FAA Deputy Administrator has authority over the different lines of business that must work together to implement NextGen and, as chairman of the NextGen Management Board, has the authority to force timely resolution of emerging NextGen implementation issues. Both the House and Senate reauthorization bills include provisions to designate a single official in charge of NextGen.
The House bill proposes designating the Director of JPDO as the Associate Administrator for the Next Generation Air Transportation System, while the Senate bill proposes creating a Chief NextGen Officer who would oversee all NextGen programs and JPDO. Because the Deputy Administrator position has not yet been confirmed, it is too early to tell how effective these organizational relationships will be in addressing concerns from industry and the Congress regarding who is in charge of NextGen and whether that official has sufficient authority and accountability to ensure effective implementation. Because these changes have just occurred, it is also not yet clear whether they will be sufficient to address the problems cited by the Task Force, including concerns that no single official has authority over activities across FAA or that suitable oversight mechanisms exist to ensure timely implementation of all the activities necessary for an operational improvement. As a result, these issues could slow the implementation of NextGen. FAA officials and several stakeholders we interviewed described FAA’s near- and midterm efforts as necessary stepping-stones to the long-term plans and vision for NextGen. Early success in implementing key NextGen capabilities desired by aircraft operators will help build confidence among operators that FAA can and will provide the operational improvements necessary for operators to realize benefits from their equipment investments. From a planning perspective, integration of near- and midterm implementation plans with the long-term plans and vision for NextGen is currently an ongoing effort within FAA. As previously mentioned, near- and midterm implementation is guided by the 2010 NextGen Implementation Plan, which feeds into FAA’s Enterprise Architecture for the national airspace system. Supporting the NextGen Implementation Plan are two more detailed plans: Segment A, which defines detailed activities through 2015 and is to be completed later this quarter, and Segment B, which will follow and define NextGen activities through 2018. These plans will identify in detail the specific actions that must take place in order to implement the identified capabilities. The long-term vision and initial planning for NextGen took place within JPDO and resulted in the overall Concept of Operations, the NextGen Enterprise Architecture, and an accompanying Integrated Work Plan (IWP). The IWP sought to identify all of the envisioned NextGen capabilities through the long term and to lay out the enabling activities believed necessary to achieve those capabilities (e.g., necessary research and development, policy development, and so forth). Currently, according to a senior FAA official, the operational improvements identified in the 2010 NextGen Implementation Plan and FAA’s Enterprise Architecture have been aligned with the operational improvements identified in the NextGen Enterprise Architecture and the IWP. However, the enabling activities necessary to achieve those capabilities have yet to be fully aligned. Various ATO offices and JPDO are currently developing agreements that will set forth how the offices will work together to fully align all of the enabling activities across the various planning documents. The effort to align the rest of the enabling activities is expected to be completed in late fiscal year 2010, according to a senior FAA official.
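The alignment effort described above is, at bottom, a reconciliation of enabling activities across planning documents. A minimal, purely hypothetical sketch of such a cross-document check in Python (the identifiers and the way the plans are represented are invented for illustration; the actual planning documents use their own schemes):

    # Hypothetical reconciliation of enabling-activity IDs across two plans.
    implementation_plan = {"EA-101", "EA-102", "EA-205"}   # invented IDs
    integrated_work_plan = {"EA-101", "EA-205", "EA-310"}  # invented IDs

    aligned = implementation_plan & integrated_work_plan
    nip_only = implementation_plan - integrated_work_plan
    iwp_only = integrated_work_plan - implementation_plan

    print("aligned across both plans:", sorted(aligned))
    print("in the Implementation Plan only:", sorted(nip_only))
    print("in the IWP only:", sorted(iwp_only))

Items appearing in only one plan are the unaligned enabling activities that the agreements between the ATO offices and JPDO are meant to resolve.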
Some stakeholders expressed concern that near- and midterm programs and capabilities are not connected well enough to the long-term vision and identified several key policy decisions that will affect the vision of the NextGen system and thus will determine whether programs, technologies, and capabilities implemented today will be the stepping-stones to future, more advanced capabilities. Three of these decisions, which will have a major impact on the direction of near- and midterm implementation efforts as well as the long-term vision, involve the scope and timing of installing necessary equipment on aircraft, how environmental reviews can be expedited, and the extent to which additional airport capacity will be needed. Equipping aircraft. FAA has yet to develop a strategy for the timing, cost, and scope of equipping the nation’s aircraft fleet. In particular, FAA must focus on delivering near-term operational benefits by completing activities, such as procedure development, airspace redesign, performance standard development, and separation standard reduction, that lay the foundation for NextGen. Doing so will help provide incentives for users, especially commercial airlines, to invest in equipment for their aircraft. Two key decisions that must be considered are whether all aircraft need to be equipped at all locations and when equipping with various technologies should occur. FAA must align aircraft equipping rules and incentives in a way that minimizes the costs and maximizes the overall benefits of NextGen. We have previously reported that, in some cases, the federal government may deem financial or other incentives desirable to speed the deployment of new equipment and that appropriate incentives will depend on the technology and the potential for an adequate and timely return on public and private investment. Environmental approach. FAA has yet to make decisions regarding how environmental reviews can be expedited and what strategies might be needed to meet national environmental targets. We previously reported that differing levels of review must be completed depending on the extent to which FAA deems its actions to have significant environmental impact, and that the more extensive the analysis required, the longer the process can take, which can thus affect implementation of NextGen capabilities. A key question in this regard is how to appropriately and expeditiously review actions that may increase noise in some areas but also reduce emissions and reduce noise levels overall. Further, a balance will need to be struck between needs for increased capacity, which means more aircraft will be flying and releasing emissions, and potential environmental targets in the future. A key issue here is that although NextGen will increase the efficiency of each flight (in fuel burn, distance traveled, and emissions), total greenhouse gas emissions may still rise because more flights are expected. Airport capacity. A national policy regarding airport capacity in key metropolitan areas will need to be determined. Even with current planned airport expansion, FAA expects capacity shortfalls at many of the nation’s busiest airports. NextGen alone is not likely to sufficiently enhance the safety and expand the capacity of the national airspace system.
Decisions regarding using existing capacity more efficiently include certifying and approving standards for the use of closely spaced parallel runways—which will be a major driver of the amount of land needed to expand airport capacity and will determine capacity in some metropolitan areas—and developing policies that address situations when demand exceeds capacity at airports or in specific airspace (e.g., pricing, administrative rules, service priorities, and so forth). Furthermore, planning infrastructure projects to increase capacity, such as building additional runways, can take a decade or more and will require substantial planning and safety and cost analyses. JPDO and MITRE are currently conducting modeling work to examine benefits, costs, and risks associated with alternative assumptions regarding various future scenarios. This work, which is still in the preliminary stages, will provide important information to stakeholders and decision makers regarding validation of the benefits of NextGen capabilities, as well as the extent to which further capacity in the system may be required. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or members of the subcommittee may have at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this testimony include Andrew Von Ah (Assistant Director), Kieran McCarthy, Richard Scott, Maria Mercado, Kevin Egan, Dominic Nadarski, Delwen Jones, Amy Abramowitz, and Bert Japikse. Next Generation Air Transportation System: FAA Faces Challenges in Responding to Task Force Recommendations. GAO-10-188T. Washington, D.C.: October 28, 2009. Responses to Questions for the Record: March 18, 2009, Hearing on ATC Modernization: Near-Term Achievable Goals. GAO-09-718R. Washington, D.C.: May 20, 2009. Next Generation Air Transportation System: Status of Transformation and Issues Associated with Midterm Implementation of Capabilities. GAO-09-479T. Washington, D.C.: March 18, 2009. Responses to Questions for the Record: February 11, 2009, Hearing on the FAA Reauthorization Act of 2009. GAO-09-467R. Washington, D.C.: March 10, 2009. Next Generation Air Transportation System: Status of Systems Acquisition and the Transition to the Next Generation Air Transportation System. GAO-08-1078. Washington, D.C.: September 11, 2008. Next Generation Air Transportation System: Status of Key Issues Associated with the Transition to NextGen. GAO-08-1154T. Washington, D.C.: September 11, 2008. Joint Planning and Development Office: Progress and Key Issues in Planning the Transition to the Next Generation Air Transportation System. GAO-07-693T. Washington, D.C.: March 2007. Next Generation Air Transportation System: Progress and Challenges Associated with the Transformation of the National Airspace System. GAO-07-25. Washington, D.C.: November 13, 2006. Results Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005.
To prepare for future air traffic growth, the Federal Aviation Administration (FAA), including its Joint Planning and Development Office (JPDO) and Air Traffic Organization, is planning and implementing the Next Generation Air Transportation System (NextGen) in partnership with other federal agencies, such as the Departments of Commerce, Defense, and Homeland Security, and the aviation industry. NextGen will transform the current radar-based air traffic control system into a satellite-based system. As FAA begins implementing near- and midterm NextGen capabilities, a key challenge will be the extent to which FAA is able to integrate near- and midterm improvements (those between 2012 and 2018) with long-term plans (beyond 2018). Furthermore, coordination among federal partner agencies and among various lines of business within FAA is important to ensure that NextGen implementation efforts are aligned. GAO's testimony focuses on (1) current mechanisms for and challenges to coordination among FAA and its partner agencies in implementing NextGen, (2) challenges and ongoing efforts to improve coordination across offices within FAA, and (3) issues related to integrating near- and midterm implementation plans with long-term NextGen plans. This statement is based on past and ongoing GAO work and on interviews GAO conducted with senior agency officials at FAA, JPDO, and its partner agencies, and with selected industry stakeholders. Several mechanisms to facilitate coordination on NextGen activities among partner agencies and across FAA exist, but challenges to this coordination remain. One interagency coordination mechanism is the Senior Policy Committee, which is the high-level coordinating body across all of the partner agencies. In addition, JPDO is tasked with facilitating day-to-day interagency coordination and has several mechanisms, including working groups and research transition teams, to accomplish this. GAO has previously reported that a lack of stable leadership and ambiguity surrounding JPDO's organizational position and ongoing role have contributed to the uneven performance of its coordination mechanisms. Recent changes in both the leadership and organizational position of JPDO could improve coordination across partner agencies. Stakeholders and partner agencies identified several other challenges to improving interagency coordination and collaboration, including (1) limited funding and staffing to dedicate to NextGen activities, (2) competing mission priorities, and (3) undefined near-term roles and responsibilities of some partner agencies. FAA also faces challenges coordinating the implementation of NextGen across multiple FAA offices. GAO has previously reported that shifting from an organization focused on system acquisition to one focused on integration and coordination will be an ongoing challenge for FAA. Recent organizational changes that solidify the FAA Deputy Administrator as the key executive in charge of NextGen may help address these challenges. Moreover, FAA has made progress in improving coordination of efforts within FAA by coordinating some office functions and moving toward a portfolio approach for implementation. However, as all these changes have recently occurred, it is too early to measure their success. Integration of midterm implementation plans with the long-term plans and vision for NextGen is currently an ongoing effort within FAA.
FAA officials and several stakeholders described FAA's near- and midterm efforts (such as implementing satellite-based surveillance of aircraft) as necessary stepping-stones to the long-term plans and vision of NextGen (such as aircraft operators receiving satellite surveillance information in the cockpit and using it to self-separate from surrounding aircraft). Early success in implementing NextGen capabilities will help build confidence among aircraft operators that FAA can and will provide the operational improvements necessary for operators to realize benefits from their equipment investments. However, some stakeholders expressed concern that near- and midterm implementation efforts are not integrated well enough with the long-term vision. Stakeholders identified key policy decisions that will affect the vision of the NextGen system over the long term and in turn determine whether programs, technologies, and capabilities implemented today will be the stepping-stones to future, more advanced capabilities. Key decisions include such issues as the installation of aircraft equipment, expediting environmental reviews, and the extent to which additional airport capacity will be needed.
Addressing the Year 2000 problem in time will be a tremendous challenge for the federal government. Many of the federal government’s computer systems were originally designed and developed 20 to 25 years ago, are poorly documented, and use a wide variety of computer languages, many of which are obsolete. Some applications include thousands, tens of thousands, or even millions of lines of code, each of which must be examined for date-format problems. To complicate matters, agencies must also consider the computer systems belonging to federal, state, and local governments; the private sector; foreign countries; and international organizations that interface with their systems. For example, agencies that administer key federal benefits payment programs, such as the Department of Veterans Affairs, exchange data with the Department of the Treasury, which, in turn, interfaces with various financial institutions to ensure that benefits checks are issued. Department of Defense (DOD) systems interface with thousands of systems belonging to foreign military sales customers, private contractors, other federal agencies, and international entities such as the North Atlantic Treaty Organization. Taxpayers can pay their taxes through data exchanges between the taxpayer, financial institutions, the Federal Reserve System, and the Department of the Treasury’s Financial Management Service and the Internal Revenue Service. Because of these and thousands of other interdependencies, government systems are also vulnerable to failure caused by incorrectly formatted data provided by other systems that are noncompliant. The federal government also depends on the telecommunications infrastructure to deliver a wide range of services. For example, the route of an electronic Medicare payment may traverse several networks—those operated by the Department of Health and Human Services, the Department of the Treasury’s computer systems and networks, and the Federal Reserve’s Fedwire electronic funds transfer system. Seamless connectivity among a wide range of networks and carriers is essential nationally and internationally and a Year 2000-induced telecommunications failure could cause major disruptions. In addition, the year 2000 could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations. For example, building security systems, elevators, and air conditioning and heating equipment could malfunction or cease to operate. Agencies cannot afford to neglect any of these issues. If they do, the impact of Year 2000 failures could be widespread, costly, and potentially disruptive to vital government operations worldwide. For example: flights could be grounded or delayed and airline safety could be degraded; the military services could find it extremely difficult to efficiently and effectively equip and sustain their forces around the world; Internal Revenue Service tax systems could be unable to process returns, thereby jeopardizing revenue collection and delaying refunds; the Social Security Administration process to provide benefits to disabled persons could be disrupted; and payments to veterans with service-connected disabilities could be erroneous or severely delayed. 
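The underlying defect is simple to demonstrate. Here is a hedged sketch in Python of the classic two-digit-year failure mode (hypothetical code, not drawn from any agency system):

    # The classic Year 2000 defect: dates stored with two-digit years.
    def years_of_service_buggy(hired_yy, current_yy):
        # An employee hired in '75 and evaluated in '00 appears to have
        # -75 years of service, because 0 - 75 = -75.
        return current_yy - hired_yy

    def years_of_service_fixed(hired_year, current_year):
        # Four-digit years restore correct arithmetic.
        return current_year - hired_year

    print(years_of_service_buggy(75, 0))       # -75 (wrong)
    print(years_of_service_fixed(1975, 2000))  # 25 (correct)

Every date computation of this kind, buried across the thousands or millions of lines of legacy code and embedded processors described above, is a candidate for exactly this failure, which is why the examination effort is so large.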
Because of the urgent nature of the Year 2000 problem and the potentially devastating impact it can have on critical government operations, we designated the problem as a high-risk area for the federal government in February 1997. Since that time, we have issued over 40 reports and testimony statements detailing specific findings and recommendations related to the Year 2000 readiness of a wide range of federal agencies. We have also issued guidance to help organizations successfully address the issue. Overall, the government’s 24 major departments and agencies are making slow progress in fixing their systems. In May 1997, the Office of Management and Budget (OMB) reported that about 21 percent of the mission-critical systems (1,598 of 7,649) for these departments and agencies were Year 2000 compliant. A year later, in May 1998, these departments and agencies reported that 2,914 of the 7,336 mission-critical systems in their current inventories, or about 40 percent, were compliant. Unless progress improves dramatically, a substantial number of mission-critical systems will not be compliant on time. In addition to slow progress in fixing systems, many agencies were not adequately acting on critical steps to establish priorities, solidify data exchange agreements, and develop contingency plans. Likewise, more attention needs to be devoted to (1) ensuring the government has a complete and accurate picture of Year 2000 progress, (2) setting national priorities, (3) ensuring that the government’s critical core business processes are adequately tested, (4) recruiting and retaining information technology personnel with the appropriate skills for Year 2000-related work, and (5) assessing the nation’s Year 2000 risks, including those posed by key economic sectors. I would like to highlight some of these vulnerabilities and our recommendations made in April 1998 for addressing them. First, governmentwide priorities in fixing systems have yet to be established. There has not been a concerted effort to set governmentwide priorities based on such criteria as the potential for adverse health and safety effects, adverse financial effects on American citizens, detrimental effects on national security, and adverse economic consequences. Furthermore, while individual agencies have been identifying mission-critical systems, this has not always been done based on a determination of the agency’s most critical operations. For example, as noted by the Defense Science Board, Defense has no means of distinguishing between the priority of a video-conferencing system and a logistics system, both of which were identified as mission-critical. If priorities are not clearly set, the government may well end up wasting limited time and resources in fixing systems that have little bearing on the most vital government operations. Second, contingency planning across the government has been inadequate. In their May 1998 quarterly reports to OMB, only four agencies reported that they had drafted contingency plans for their core business processes. Without such plans, when unpredicted failures occur, agencies will not have well-defined responses and may not have enough time to develop and test alternatives. Federal agencies depend on data provided by their business partners as well as services provided by the public infrastructure (e.g., power, water, transportation, and voice and data telecommunications). One weak link anywhere in the chain of critical dependencies can cause major disruptions to business operations. 
Given these interdependencies, it is imperative that contingency plans be developed for all critical core business processes and supporting systems, regardless of whether these systems are owned by the agency. Third, OMB’s assessment of the current status of federal Year 2000 progress is predominantly based on agency reports that have not been consistently reviewed or verified. Without independent reviews, OMB and the President’s Council on Year 2000 Conversion have little assurance that they are receiving accurate information. In fact, we have found cases in which agencies’ systems compliance status reported to OMB has been inaccurate. For example, the DOD Inspector General estimated that almost three quarters of DOD’s mission-critical systems reported as compliant in November 1997 had not been certified as compliant by DOD components. In May 1998, the Department of Agriculture reported 15 systems as compliant, even though these were replacement systems that were still under development or were planned to be developed. (The department plans to remove these systems from compliant status in its next quarterly report.) Fourth, end-to-end testing responsibilities have not yet been defined. To ensure that their mission-critical systems can reliably exchange data with other systems and that they are protected from errors that can be introduced by external systems, agencies must perform end-to-end testing for their critical core business processes. The purpose of end-to-end testing is to verify that a defined set of interrelated systems, which collectively support an organizational core business area or function, work as intended in an operational environment. In the case of the year 2000, many systems in the end-to-end chain will have been modified or replaced. As a result, the scope and complexity of testing—and its importance—is dramatically increased, as is the difficulty of isolating, identifying, and correcting problems. Consequently, agencies must work early and continually with their data exchange partners to plan and execute effective end-to-end tests. So far, lead agencies have not been designated to take responsibility for ensuring that end-to-end testing of processes and supporting systems is performed across boundaries and that such testing is independently verified and validated. In our April 1998 report on governmentwide Year 2000 progress, we made a number of recommendations to the Chairman of the President’s Council on Year 2000 Conversion aimed at addressing these problems. These included establishing governmentwide priorities and ensuring that agencies set their own agencywide priorities, developing a comprehensive picture of the nation’s Year 2000 readiness, requiring agencies to develop contingency plans for all critical core business processes, requiring agencies to develop an independent verification strategy that involves inspectors general or other independent organizations in reviewing Year 2000 progress, and designating lead agencies responsible for ensuring that end-to-end operational testing of processes and supporting systems is performed. We are encouraged by actions the Council is taking in response to some of our recommendations. For example, OMB and the Chief Information Officers Council adopted our draft guide providing information on business continuity and contingency planning issues common to most large enterprises as a model for federal agencies.
However, as we recently testified before this Subcommittee, some actions have not been initiated—principally with respect to setting national priorities, independent verification, and end-to-end testing. One of the more alarming problems we have come across in our Year 2000 reviews is that some agencies are not adequately prepared for testing their systems for Year 2000 compliance. For example, in April 1998, we reported that DOD did not have a testing strategy that specified uniform criteria and processes for its components to use in testing their systems. The Army, Navy, and Air Force had not assessed their test needs or test facility requirements. In May 1998, we reported that the Department of Agriculture’s Chief Information Officer had not provided test guidance to the department’s component agencies and that 8 of 10 component agencies included in our review lacked testing strategies. The fact that these agencies are not prepared now for effective testing raises serious concern. Complete and thorough Year 2000 testing is essential to provide reasonable assurance that new or modified systems process dates correctly and will not jeopardize an organization’s ability to perform core business operations after the millennium. Moreover, since the Year 2000 computing problem is so pervasive, potentially affecting an organization’s systems software, applications software, databases, hardware, firmware and embedded processors, telecommunications, and external interfaces, the requisite testing is extensive and expensive. Leading organizations estimate that testing will require at least 50 percent of an entity’s total Year 2000 program time. To address this problem, today we are issuing a new installment of our Year 2000 guidance, which addresses the need to plan and conduct Year 2000 tests in a structured and disciplined fashion. The guide describes a step-by-step framework for managing, and a checklist for assessing, all Year 2000 testing activities, including those associated with computer systems or system components (such as embedded processors) that are vendor supported. This disciplined approach and the prescribed levels of testing activities are hallmarks of mature software and system development/acquisition and maintenance processes. The guide describes five levels of Year 2000 testing activities. The first level establishes the organizational infrastructure and key processes needed to guide, support, and manage the next four levels of testing activities. For example, it addresses defining and assigning Year 2000 test management authority and responsibility, defining criteria for certifying a system as compliant, identifying and allocating resources, establishing schedules, and securing test facilities. The next four levels provide key processes for effectively designing, conducting, and reporting on tests of incrementally larger system components: software unit/module tests, software integration tests, system acceptance tests, and end-to-end tests. The processes focus on testing of software and system components that the organization is directly responsible for developing, acquiring, or maintaining. Key processes, however, are also defined to address organizational responsibilities relative to testing of vendor-supported and commercial, off-the-shelf (COTS) products and components (including hardware, systems software, embedded processors, telecommunications, and COTS applications).
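As an illustration of the guide's lowest level, software unit/module testing, consider a common remediation technique called date windowing, in which two-digit years below a pivot value are read as 20xx and the rest as 19xx. The pivot, function, and tests below are our own hypothetical sketch in Python, not an excerpt from the guide:

    import unittest

    PIVOT = 30  # illustrative pivot: 00-29 -> 2000s, 30-99 -> 1900s

    def expand_two_digit_year(yy):
        """Windowing remediation: map a two-digit year to four digits."""
        return 2000 + yy if yy < PIVOT else 1900 + yy

    class WindowingUnitTest(unittest.TestCase):
        def test_post_2000_years(self):
            self.assertEqual(expand_two_digit_year(0), 2000)
            self.assertEqual(expand_two_digit_year(29), 2029)

        def test_legacy_years(self):
            self.assertEqual(expand_two_digit_year(30), 1930)
            self.assertEqual(expand_two_digit_year(99), 1999)

    if __name__ == "__main__":
        unittest.main()

The higher levels in the guide repeat this discipline at larger scale: integration tests exercise such dates across modules, acceptance tests across a whole system, and end-to-end tests across the chains of data exchange partners discussed earlier.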
The test model builds upon and complements the five-phase conversion model described in our Year 2000 readiness guide. The five levels of test activities span all phases of our Year 2000 conversion model, with the preponderance of test activities occurring in the conversion model’s renovation and validation phases. Finally, the guide incorporates guidance and recommendations on Year 2000 testing practices from standards bodies, such as the National Institute of Standards and Technology and the Institute of Electrical and Electronics Engineers, and draws on the work of leading information technology organizations, including the Software Engineering Institute, Software Quality Engineering, Software Productivity Consortium, and the United Kingdom’s Central Computer and Telecommunications Agency. In conclusion, if effectively implemented, our guide should help federal agencies successfully negotiate the complexities involved with the Year 2000 testing process. However, the success of the government’s Year 2000 remediation efforts ultimately hinges on setting governmentwide priorities; ensuring that agencies set priorities and develop contingency plans consistent with these priorities; developing an accurate picture of remediation progress; designating lead agencies for end-to-end testing efforts; and addressing other critical issues, such as recruiting and retaining qualified information technology personnel. Mr. Chairman, this concludes my statement. Mr. Joel Willemssen, GAO’s Issue Area Director for Civil Agencies Information Systems and our focal point for Year 2000 work, has accompanied me today. We will be happy to answer any questions you or Members of the Subcommittee may have. Year 2000 Computing Crisis: Telecommunications Readiness Critical, Yet Overall Status Largely Unknown (GAO/T-AIMD-98-212, June 16, 1998). GAO Views on Year 2000 Testing Metrics (GAO/AIMD-98-217R, June 16, 1998). IRS’ Year 2000 Efforts: Business Continuity Planning Needed for Potential Year 2000 System Failures (GAO/GGD-98-138, June 15, 1998). Year 2000 Computing Crisis: Actions Must Be Taken Now To Address Slow Pace of Federal Progress (GAO/T-AIMD-98-205, June 10, 1998). Defense Computers: Army Needs to Greatly Strengthen Its Year 2000 Program (GAO/AIMD-98-53, May 29, 1998). Year 2000 Computing Crisis: USDA Faces Tremendous Challenges in Ensuring That Vital Public Services Are Not Disrupted (GAO/T-AIMD-98-167, May 14, 1998). Securities Pricing: Actions Needed for Conversion to Decimals (GAO/T-GGD-98-121, May 8, 1998). Year 2000 Computing Crisis: Continuing Risks of Disruption to Social Security, Medicare, and Treasury Programs (GAO/T-AIMD-98-161, May 7, 1998). IRS’ Year 2000 Efforts: Status and Risks (GAO/T-GGD-98-123, May 7, 1998). Air Traffic Control: FAA Plans to Replace Its Host Computer System Because Future Availability Cannot Be Assured (GAO/AIMD-98-138R, May 1, 1998). Year 2000 Computing Crisis: Potential For Widespread Disruption Calls For Strong Leadership and Partnerships (GAO/AIMD-98-85, April 30, 1998). Defense Computers: Year 2000 Computer Problems Threaten DOD Operations (GAO/AIMD-98-72, April 30, 1998). Department of the Interior: Year 2000 Computing Crisis Presents Risk of Disruption to Key Operations (GAO/T-AIMD-98-149, April 22, 1998). Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998). Tax Administration: IRS’ Fiscal Year 1999 Budget Request and Fiscal Year 1998 Filing Season (GAO/T-GGD/AIMD-98-114, March 31, 1998).
Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). Year 2000 Computing Crisis: Federal Regulatory Efforts to Ensure Financial Institution Systems Are Year 2000 Compliant (GAO/T-AIMD-98-116, March 24, 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time Is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA’s Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). 
Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997).
GAO discussed: (1) the year 2000 risks facing the government; (2) major concerns with the government's progress in fixing its systems; and (3) guidance on year 2000 testing, which is designed to assist agencies in the most extensive and expensive part of remediation. GAO noted that: (1) addressing the year 2000 problem in time will be a tremendous challenge for the federal government; (2) to complicate matters, agencies must consider the computer systems belonging to federal, state, and local governments; the private sector; foreign countries; and international organizations that interface with their systems; (3) the year 2000 could cause problems for the many facilities used by the federal government that were built or renovated within the last 20 years and contain embedded computer systems to control, monitor, or assist in operations; (4) if agencies neglect any of these issues, the impact of year 2000 failures could be widespread, costly, and potentially disruptive to vital government operations worldwide; (5) overall, the government's 24 major departments and agencies are making slow progress in fixing their systems; (6) many agencies were not adequately acting on critical steps to establish priorities, solidify data exchange agreements, and develop contingency plans; (7) some agencies are not adequately prepared for testing their systems for year 2000 compliance; (8) complete and thorough year 2000 testing is essential to provide reasonable assurance that new or modified systems process dates correctly and will not jeopardize an organization's ability to perform core business operations after the millennium; (9) since the year 2000 computing problem is so pervasive, the requisite testing is extensive and expensive; (10) to address the testing problem, GAO issued a new installment of its year 2000 guidance which addresses the need to plan and conduct year 2000 tests in a structured and disciplined fashion; (11) if effectively implemented, the guide should help federal agencies successfully negotiate the complexities involved with the year 2000 testing process; and (12) however, the success of the government's year 2000 remediation efforts ultimately hinges on setting governmentwide priorities, ensuring that agencies set priorities and develop contingency plans consistent with these priorities, developing an accurate picture of remediation progress, designating lead agencies for end-to-end testing efforts, and addressing other critical issues such as recruiting and retaining qualified information technology personnel.
In 1867, Congress enacted legislation that allowed the government to pay awards to individuals who provided information that aided in detecting and punishing those guilty of violating tax laws. Initially, Congress appropriated funds to pay these awards at the government’s discretion. In 1996, Congress increased the scope of the program to also provide awards for detecting underpayments of tax and changed the source of awards to money IRS collects as a result of information whistleblowers provide. The Tax Relief and Health Care Act of 2006 created an expanded whistleblower award program to complement the existing whistleblower program. We refer to the original program as the 7623(a) program and the expanded program as the 7623(b) program, after the Internal Revenue Code subsections that authorize the different award payments. Claims submitted under the 7623(b) program are those that allege a tax noncompliance of over $2 million and are subject to a mandatory award of between 15 and 30 percent of collected proceeds, to be determined by the IRS Whistleblower Office (WO) based on the extent of the whistleblower’s contributions. Whistleblowers may appeal an award determination under 7623(b), including the denial of an award, in the Tax Court. Claims submitted under the 7623(a) program are more discretionary: they are not subject to statutory minimum award payments, are not eligible for judicial review of award determinations in the Tax Court, and, prior to 2010, were not subject to the same award determination procedures as claims submitted under the 7623(b) program. However, IRS announced in an update to the Internal Revenue Manual (IRM) that it would evaluate and award 7623(a) claims received after July 1, 2010, using the same process it uses for the 7623(b) program. That is, 7623(a) whistleblower claims received after July 1, 2010, will be paid between 15 and 30 percent of collected proceeds, based on the same factors used to determine 7623(b) awards. The Tax Relief and Health Care Act of 2006 also established the WO within IRS; the WO is responsible for managing and tracking whistleblower claims from the time IRS receives them to the time it closes them, either through a rejection or denial letter or an award payment. The Secretary of the Treasury is required to submit an annual report to Congress on the activities and outcomes of both the original and expanded whistleblower programs. The WO comprises several functional branches. The Initial Claim Evaluation (ICE) unit receives and records incoming 7623(a) and 7623(b) claims. ICE also alerts whistleblowers to the status of received, incomplete, and denied claims. Strategic Planning and Program Administration has responsibility for overall program management, including program analysis, developing operating procedures, and updating the IRM. Award Recommendation and Coordination (ARC) reviews and issues award decisions for 7623(a) claims. Awards for 7623(b) claims fall under the responsibility of Case Development and Oversight (CDO). CDO also evaluates potential 7623(b) claims and coordinates the most complex cases across IRS operating divisions (OD). Figure 1 shows WO staffing levels from fiscal year 2007 to September 15, 2015. The WO has grown since its establishment in February 2007 and had 61 staff on board as of September 15, 2015. The WO’s workload has also increased over the years (see figure 2).
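The statutory award arithmetic described above can be stated compactly. Here is a minimal sketch in Python, using the $2 million routing threshold and the 15-30 percent bounds from the text (the function itself is our illustration, not IRS's implementation):

    # Illustrative sketch of 7623(b) award bounds.
    THRESHOLD = 2_000_000       # alleged noncompliance must exceed $2 million
    MIN_RATE, MAX_RATE = 0.15, 0.30

    def award_bounds(alleged_noncompliance, collected_proceeds):
        """Return (min, max) mandatory award for a 7623(b) claim, or None
        if the allegation falls under the discretionary 7623(a) program."""
        if alleged_noncompliance <= THRESHOLD:
            return None
        return (collected_proceeds * MIN_RATE, collected_proceeds * MAX_RATE)

    # A claim alleging $5 million in noncompliance that yields $3 million in
    # collected proceeds carries a mandatory award of $450,000 to $900,000,
    # with the exact percentage set by the WO based on the whistleblower's
    # contribution.
    print(award_bounds(5_000_000, 3_000_000))

Since the July 1, 2010 IRM change, the same 15-30 percent range also guides awards for newer 7623(a) claims, though without the statutory mandate or the availability of Tax Court review.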
Since fiscal year 2010, the WO has received, on average, over 10,000 claims each year between the 7623(a) and 7623(b) programs. Further, as of May 14, 2015, the office had 30,152 open claims in its workload, with more than half of these coming in since the start of fiscal year 2012. Over 95 percent of claims closed between the start of fiscal year 2013 and August 5, 2015, did not receive an award payment. Each year most claims are closed early in processing for a number of reasons, including the following: the submission does not identify a specific taxpayer or tax issue, the submission is unclear, or the claimant lacks credibility. The WO's workload includes the initial vetting of claims and the processing of the small number of claims that result in an award. Claims that pass the initial rounds of review may remain under review for several more years and still may not result in any award. For example, a claim may not merit an award if the WO determines that the information did not substantially contribute to an IRS action or resulted in no collected proceeds.

Launched in 2009, E-TRAK is IRS's whistleblower claims management information system. The WO and the ODs use E-TRAK to track the progress of claims as they move through the review process and to store information on each claim file. Our prior report found several weaknesses in E-TRAK and its ability to accurately monitor whistleblower claims and produce reportable statistics for management use. According to IRS officials, E-TRAK was designed as a claim management tool to track claim progress, not as a system for reporting and monitoring overall program performance. Since 2011, IRS has made several updates to E-TRAK to better capture and report key whistleblower claims data, but according to IRS officials, E-TRAK remains difficult to use as a system for managing WO operations.

Initial review and routing: The whistleblower claims process involves multiple steps, starting with a whistleblower's initial application and ending with a rejection, a denial, or an award payment. The process begins when a whistleblower submits a signed Form 211, Application for Award for Original Information, to the WO. The first stage of the process consists of two steps. First, the WO's ICE unit performs an administrative review of the incoming application, examining the submission for completeness and logging it into E-TRAK. Second, claims are generally sent to staff from the Small Business/Self-Employed (SB/SE) OD, who review them to determine whether the claims merit further consideration by an OD or should be rejected or denied. Claims identified as potential 7623(b) claims are sent to the WO's CDO team for further review. At this stage, the WO may reject claims because the tax noncompliance allegation is unclear, no taxpayer is identified, or the whistleblower is ineligible for an award. Claims can also be denied if no potential noncompliance is found, among other reasons. Claims are then routed to the proper OD for further review, including Criminal Investigation (CI) if there is the potential for a criminal investigation, or are routed to ICE, which sends the whistleblower a rejection or denial letter. The Deputy Commissioner for Services and Enforcement has set a 90-day target for completion of these ICE, CDO, and SB/SE review steps.
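Measuring performance against such a target reduces to a date comparison between when a claim is received and when its initial review concludes. The following minimal sketch in Python illustrates the calculation; the records and field names are our own illustration, not actual E-TRAK data.

    from datetime import date

    # Minimal sketch: share of claims meeting a 90-day initial review target.
    # The records and field names below are illustrative, not E-TRAK fields.
    TARGET_DAYS = 90

    claims = [
        {"id": "C-001", "received": date(2015, 1, 5), "review_done": date(2015, 3, 10)},
        {"id": "C-002", "received": date(2015, 1, 20), "review_done": date(2015, 6, 1)},
        {"id": "C-003", "received": date(2015, 2, 2), "review_done": None},  # still open
    ]

    completed = [c for c in claims if c["review_done"] is not None]
    met = [c for c in completed
           if (c["review_done"] - c["received"]).days <= TARGET_DAYS]
    print(f"{len(met) / len(completed):.0%} of completed reviews met the "
          f"{TARGET_DAYS}-day target")

Statistics such as the one reported in the next sentence can be produced this way only if both dates are recorded consistently for every claim.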
WO data show that 67 percent of 7623(b) claims are processed within this time frame. WO officials said that more complex cases are harder to handle and thus more likely to experience delays; in particular, claims that require coordination across ODs because of their complexity are especially prone to delay. Figure 3 summarizes the full claim review process for 7623(b) claims.

OD SME review: The 7623(b) claims (those alleging over $2 million in tax noncompliance) that are not rejected or denied in the initial review are generally forwarded to OD subject matter experts (SMEs) for a more rigorous review. For example, the SME may follow up with the whistleblower to obtain more information and will evaluate whether the source of the information has the potential to compromise any case developed from it. SMEs may deny claims if insufficient time remains on the statute of limitations, among other reasons. Once claims are routed to an OD, they leave WO control; however, throughout the claim review process a WO analyst may monitor certain 7623(b) claims if the tax issue involves multiple ODs or is particularly sensitive. The Deputy Commissioner for Services and Enforcement has set a 90-day target for completion of SME reviews. However, the 90-day target is just that: a target. WO and OD officials understand that an SME review may take longer than 90 days for several reasons, such as when the whistleblower's information relates to a complex tax case or has an international component that requires documents to be translated. SMEs we spoke with generally agreed that they could meet the 90-day target for most claims. An official from one OD said that if an SME review does surpass 90 days, WO and OD management may follow up to ensure the claim is not sitting idle and to provide additional resources to the SME if necessary and available. As of August 5, 2015, the WO reports that 67 percent of 7623(b) claims have met this time target since October 1, 2012.

As part of their review, SMEs may contact (or debrief) whistleblowers to clarify information submitted with the Form 211. Debriefings also provide an opportunity to set expectations about communication between IRS and the whistleblower and about the length of the process. In 2012 and again in 2014, the Deputy Commissioner for Services and Enforcement called for SMEs to debrief whistleblowers unless there is a clear reason not to do so. Debriefings can occur during the SME review or later, after the taxpayer's examination has started; in both instances, the SME conducts the debriefing. The whistleblower attorneys we spoke with varied in their estimates of how often whistleblowers were debriefed but agreed that IRS debriefs their clients less than 50 percent of the time. SMEs also determine the extent to which the whistleblower provided unusable information: for example, information subject to attorney-client privilege or illegally obtained information that may compromise IRS's tax case. As warranted, IRS counsel may be involved in assessing the limitations of, and the risk in, using the whistleblower's information. The SME review, including debriefings and the review for unusable information, is a step unique to a whistleblower claim and can add time to the whistleblower claim process. Claims that are not denied by the SME are added to the OD's inventory of returns for potential selection.
If the OD does not select the taxpayer(s) identified in a whistleblower's claim for examination, the claim is returned to the WO for denial processing. Selection largely depends on the merits of the case: to the extent that whistleblower information pertains to a high-priority tax issue specified in IRS's annual plan, ODs prioritize returns for examination based on the merits of the tax issue, not on the source of the referral. ODs use whistleblower information to identify issues and to establish leads for obtaining documents supporting tax assessments or collections. They develop evidence independent of whistleblower information to support any tax adjustments or collection actions. To mitigate the risk of compromising the tax case, OD guidance specifies that contact with whistleblowers be conducted through the SME and counsel, as appropriate.

Taxpayer examination and appeals: The examination, appeals, and collection process may take several months to several years, depending on the tax issues raised, agreement by the taxpayer, and payment. We were able to collect data on the length of audits for 12 of the seventeen 7623(b) claims that were paid as of June 30, 2015. The length of this process ranged from 6 months to more than 4 years, with an average of 2 years. Audit length can be affected by case complexity, availability of documentation, taxpayer cooperation, and availability of IRS resources. Taxpayer appeals also have the potential to extend the time a claim is open. Taxpayers may pursue an appeal with the IRS Office of Appeals, which generally takes between 90 days and 1 year to complete, or may file suit in Tax Court, which may take several months to over a year to litigate. After an audit and any appeals, if a taxpayer does not pay the taxes owed, the case may be sent to collections, where IRS will attempt to collect the outstanding liability from the taxpayer. As with audit selection, IRS cannot pursue all outstanding collection cases; cases are selected based on their merits and not because a taxpayer was the subject of a whistleblower referral. At the conclusion of the examination process, OD staff record the contribution made by the whistleblower on a Form 11369, Confidential Evaluation Report on Claim for Award. The OD sends the Form 11369 to the WO along with any supporting documentation from the OD and the original documentation provided by the whistleblower. According to IRM provisions and direction from the Deputy Commissioner for Services and Enforcement, the ODs should purge any documentation that identifies the existence of a whistleblower from the taxpayer's file.

Waiting period and award determination: Using information supplied by the OD in the Form 11369, the WO analyst assesses how the whistleblower's actions contributed to the IRS action and determines whether to recommend an award. If an award is recommended, the WO analyst determines an award percentage, which is applied to the total collected proceeds to calculate the final award payment. However, the WO calculates the award only once collected proceeds are final, which occurs after the expiration of any remaining appeal rights of the taxpayer and the right to request a refund. The refund statute expiration date (RSED), which extends at least 2 years from the date of the taxpayer's last payment toward the tax liability related to the whistleblower's claim, is the date at which IRS has what it considers to be finalized collected proceeds.
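The date arithmetic behind the RSED can be illustrated with a short sketch. Under the general refund statute (discussed later in this report), a taxpayer may claim a refund within 3 years of filing the return or 2 years of paying the tax, whichever is later; the helper function and dates below are illustrative only and omit the special cases an actual determination would involve.

    from datetime import date

    def add_years(d, years):
        # Shift a date by whole calendar years; Feb 29 falls back to Feb 28.
        try:
            return d.replace(year=d.year + years)
        except ValueError:
            return d.replace(year=d.year + years, day=28)

    def refund_statute_expiration(return_filed, last_payment):
        # Later of 3 years from filing or 2 years from the last payment;
        # a sketch of the general rule, not IRS's actual computation.
        return max(add_years(return_filed, 3), add_years(last_payment, 2))

    rsed = refund_statute_expiration(return_filed=date(2013, 4, 15),
                                     last_payment=date(2014, 9, 30))
    print(rsed)  # 2016-09-30: proceeds treated as finalized on this date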
The WO does not pay awards before the RSED to avoid paying an award on proceeds that could later be returned to the taxpayer. In rare cases, the WO may pay an award earlier, such as when IRS and the taxpayer sign a comprehensive closing agreement waiving refund rights and immediately finalizing the collection amount. Once the collected proceeds are finalized, the WO calculates the award and sends a preliminary award package to the whistleblower. The Deputy Commissioner for Services and Enforcement has set a 90-day target, beginning at the RSED, for sending the preliminary award package, which summarizes the award calculations and the factors used to determine the award percentage.

Award payment: After receiving the award package, whistleblowers have the option to accept the award as calculated, submit comments to the award file, or request a review of the full file to see how the award was determined. The WO will review any comments submitted by whistleblowers before making a final award determination and sending the award payment. Whistleblowers who disagree with how an award is determined can dispute the award in Tax Court if raising concerns with the WO does not provide relief. Exercising the review or comment options lengthens the time to award payment. In our review of the seventeen 7623(b) claims paid as of June 30, 2015, we found that claims took 4 years to 7½ years from the submission of the Form 211 to the award payment. Much of this time was spent with the WO: for example, from the time the OD completed and sent the Form 11369 to the WO for award evaluation, it took between 1½ and 4½ years for claims to be paid.

Only a very small percentage of submitted claims have closed with an award payment. For example, between fiscal year 2013 and August 5, 2015, only 507 whistleblower claims (less than 5 percent of 7623(a) and 7623(b) claims) closed with an award payment; during the same period, 19,757 claims closed with no payment. As described above, the WO can deny or reject claims at each point in the process before award payment. Table 1 summarizes the frequency of various reasons for claim closures. See appendix II for information on which office or OD made the closure decision.

Recent reductions to IRS's budget have required most divisions and offices to make do with fewer resources, and the WO is no exception. While the office has grown its staff since 2007, this growth has not kept pace with the workload. The WO has studied its workflow to identify opportunities to use staff more efficiently through a consolidation of processes. While a number of these proposals show promise, IRS has put implementation on hold while it reconsiders staffing allocations in light of its budget. Because the current process is dispersed across three different units (each with its own staff allocation), any consolidation might save staff days across the agency but would likely require staff increases wherever the functions were consolidated. Decisions on this consolidation therefore need to be made in the context of IRS's overall budget and agency-wide resource allocation. Not implementing these changes has at least partially contributed to workload backlogs of more than 11,000 cases in three areas: SB/SE's initial review, the award determination process, and the processing of rejection and denial letters.
First, late in 2014 the WO cancelled plans to bring on additional staff and take over the initial review and routing duties performed by SB/SE staff; this contributed to a backlog of approximately 5,000 claims. According to SB/SE officials, the inventory of claims being sent to them for initial review slowed as the WO prepared to take over the work. However, the WO's plans were contingent on IRS's proposed fiscal year 2015 budget, which IRS did not receive. As the initial review process returned to its original arrangement, SB/SE received an influx of more than two times its normal initial review workload in both the first and second quarters of fiscal year 2015. According to SB/SE officials, they received over 8,700 claims for initial review, well above the typical volume of approximately 2,100 claims per quarter, and this resulted in a backlog. The backlog was further exacerbated by staffing shortfalls: according to WO officials, four to five full-time equivalent (FTE) employees have historically been needed to complete this work, but at times only 1.2 FTEs were available. IRS officials told us that in the past, when staff met the weekly time budgeted for their initial review, they set aside remaining work until the following week. In early 2015, SB/SE brought on 12 revenue agents as 30-day detailees to clear the backlog, but this effort was not entirely effective. First, the detailees were brought on before SB/SE received the influx of claims; they helped clear an existing backlog but were not sufficient in number to clear the new claims. Second, the SB/SE initial review and WO processes have a learning curve, and because revenue agents are assigned to the WO for only 30-day periods, they may rotate to new assignments soon after they become proficient. According to SB/SE officials, in June 2015 SB/SE was approved to bring on 23 detailees for 120-day assignments and expects to reduce the backlog by the end of fiscal year 2015. SB/SE and the WO have not yet settled on a permanent staffing plan for these duties, but WO officials said that they have no immediate plans to take over the duties due to budget constraints.

Second, a backlog exists in the award determination step for 7623(a) claims. WO officials cited inadequate staffing resources and the volume of small awards as the causes of this backlog. WO officials report that approximately 25 percent of open 7623(a) claims (about 4,200 claims) were with ARC awaiting review of the Form 11369 as of July 28, 2015, the same level of backlog reported by WO officials in March 2015. WO officials said they brought in 120-day detailees to assist, but the detailees could not be adequately trained within the 120-day window to clear the backlog. Further, two permanent ARC staff left their positions. WO officials told us they anticipate additional hiring to clear the backlog. On September 15, 2015, a WO official told us the office had hired six additional ARC staff, most of whom had already reported for duty, for a net increase of four ARC staff.

Third, the WO has a backlog of approximately 2,500 denial or rejection letters. WO officials said the backlog resulted from the low priority given to denial letters; the WO views getting whistleblower information to the ODs and working on award payments as higher priorities. Officials said procedural changes also contributed to this backlog.
In August 2014, the final regulations on whistleblower awards became effective, detailing procedures for the denial process. To implement the new regulations, the WO drafted new language for denial letters. While the new language was being developed and approved, the WO ordered a 5-month hiatus on new denial notifications, which contributed to the buildup of denial letters. Because processing denials is a low priority and staff resources are limited, WO officials told us they do not expect to clear the backlog caused by the hiatus in the near future, and they did not provide a time frame for doing so.

WO officials have studied improving the efficiency of the initial review process. The current procedures can create unnecessary processing, which results in an inefficient use of staff time and added cost. As previously discussed, ICE performs the administrative processing work and then passes the claim along to SB/SE for the initial review; that is, SB/SE reads through whistleblower claims to identify the tax matter at issue and determines whether, and which, OD should work the claim. Each stage of review requires resources, so rejecting or denying claims at the earliest opportunity would be efficient. WO staff told us that in some cases it was obvious during the administrative review of incoming mail that a submission was not worth pursuing, but such claims are moved along to ensure that all claims receive fair and honest consideration. They told us that these claims came in with only vague insinuations of wrongdoing and that a small number of whistleblowers submit multiple claims of this type per month. Forty-six percent of all claims (9,271 of 20,264 claims) were closed between fiscal year 2013 and August 5, 2015, because the allegations were not specific, not credible, or not clear, or did not identify a tax issue (see table 1). However, the process is not set up to allow the WO's ICE unit to deny such claims. The ICE unit therefore ends up performing administrative functions on claims that are likely to be denied when SB/SE staff do their initial review, adding cost and time to the overall review process because claims known to be of poor quality are allowed to advance.

The WO has also studied making use of OD expertise to realize efficiencies in the claim review process. Currently, SMEs closely scrutinize claims for the significance of the noncompliance as well as the usefulness and completeness of the information provided. As of August 5, 2015, SMEs had referred on average about 88 percent of the claims they received for examination. Because OD expertise is an important component of the scrutiny a claim receives, the WO's consolidation plans have considered more extensive use of the ODs. Table 2 summarizes the claims received by the SMEs as well as those then referred to examination.

WO officials recognize that the staffing arrangement in place does not match the organization's functions, and they have studied their workflow and proposed options for revising the process. One option involves making the administrative review more substantive by consolidating SB/SE's review into it. The proposed initial review within the WO would evaluate claims to determine whether they merit examination and route those claims directly to the ODs. This proposal would eliminate the need for the SB/SE detailees.
Additionally, WO officials expect that the proposed process could be more efficient, could reduce opportunities for unnecessary or inappropriate disclosure of whistleblower information, and could help the WO retain greater control over the process, among other benefits. Another proposal entails routing incoming submissions directly to the OD SMEs, giving the WO an administrative role in the claim receipt process. Each OD would then evaluate claims to determine whether they merit examination and route such claims to the examination teams' selection inventory. This proposal aims to give ODs greater autonomy to evaluate and direct the claims they will work and would reduce the number of review steps as a result. However, it would require additional FTEs for the ODs.

Increasing efficiency can help government make better use of scarce resources. IRS has not decided which plan, if any, to implement, and WO officials said there were no current plans to advance any of them. WO officials said that due to recent changes in WO management and ongoing reviews, near-term changes are unlikely. However, on August 26, 2015, the new director said additional studies are being conducted in several areas, including those described above. Until a plan is in place, the WO risks delaying opportunities for improving the efficiency and quality of its reviews.

As noted earlier, IRS pays awards only once a taxpayer waives all rights to appeal or once these rights expire. Taxpayers have 2 years to file a request for refund from the date the tax was paid or 3 years from the date the return was filed, whichever is later. Except in the rare cases where a waiver is made, the RSED marks the date on which the WO is first able to calculate finalized collected proceeds. Despite the importance of this date in the claims process, the WO does not automatically track it in its claims management information system, E-TRAK, even though E-TRAK has a data field capable of holding such information. WO officials told us that they do not use this E-TRAK field consistently and therefore cannot run reports showing upcoming RSEDs. They said that E-TRAK was designed with more capabilities than the WO uses in practice. Instead, WO analysts manually compare the cases in their workload against a paper list of approaching RSEDs that the WO generates on an annual or semiannual basis. One WO analyst described his own method of tracking RSEDs because tracking is not done consistently in E-TRAK. WO officials said the cost of tracking RSEDs in E-TRAK going forward would be minimal.

The lack of consistently tracked RSED information in E-TRAK complicates the monitoring of award payments. If analysts use their own methods to track RSEDs without documenting them, it is not possible for supervisors to oversee their work or for another analyst to complete that work in their absence. Our review of paid 7623(b) claim case files found that the tracking procedures used can delay award payments; in some cases, WO analysts documented difficulty in meeting timeliness targets. Not recording the RSED in E-TRAK also means the WO cannot know whether it is meeting its performance goals or unnecessarily delaying payments. As discussed earlier, the WO has a 90-day target for sending award recommendation packages from the date that IRS can determine finalized collected proceeds (generally the RSED).
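If the RSED field were populated consistently, the reports the WO lacks would be straightforward queries. The sketch below illustrates two of them, flagging claims whose proceeds finalize within the next 90 days and measuring package mailing against the 90-day target; the records and field names are our own illustration, not actual E-TRAK fields.

    from datetime import date, timedelta

    # Illustrative claim records; not actual E-TRAK data or field names.
    claims = [
        {"id": "B-101", "rsed": date(2015, 10, 1), "package_mailed": None},
        {"id": "B-102", "rsed": date(2015, 6, 1), "package_mailed": date(2015, 8, 20)},
    ]

    today = date(2015, 9, 1)

    # Report 1: claims whose collected proceeds finalize within 90 days.
    upcoming = [c["id"] for c in claims
                if c["package_mailed"] is None
                and today <= c["rsed"] <= today + timedelta(days=90)]
    print("approaching RSED:", upcoming)

    # Report 2: days from RSED to mailing, against the 90-day target.
    for c in claims:
        if c["package_mailed"] is not None:
            days = (c["package_mailed"] - c["rsed"]).days
            status = "met target" if days <= 90 else "missed target"
            print(f"{c['id']}: package mailed {days} days after RSED ({status})")

Both reports depend on the same two dates; without them, as the next sentence notes, the target cannot be measured at all.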
Because the WO does not track the date collected proceeds are finalized against the date the award recommendation package is mailed, it cannot assess its performance against the timeliness target.

Since fiscal year 2007 (when the 7623(b) program was established), IRS has collected more than $2 billion from the 7623(a) and 7623(b) programs combined. As of June 30, 2015, IRS had paid awards to whistleblowers on 17 high-dollar claims under the 7623(b) program. IRS paid the first of these claims in 2011 and expects to pay several more by the end of fiscal year 2015. These 17 high-dollar claims accounted for over $1 billion in collected proceeds, and about half of them each had collected proceeds over $10 million. Since 2011, 7623(b) claims have constituted about 55 percent of all proceeds collected but accounted for less than 4 percent of the number of whistleblower awards paid (see table 3). Whistleblowers are awarded a percentage of collected proceeds ranging from a minimum of 15 percent to a maximum of 30 percent. Of the 17 high-dollar awards paid through June 30, 2015, the majority were at 22 percent of collected proceeds, with several at the 30 percent maximum. Almost 17 percent of the total collected proceeds for both programs was paid out as awards. The characteristics of the 17 paid 7623(b) claims varied: over half of the claims had attorneys representing the whistleblower; most but not all whistleblowers had a professional relationship with the taxpayer involved in the underpayment; and the claims involved tax issues such as improperly reported income, unreported offshore accounts, and employment tax.

To determine whistleblower awards, the WO assesses the extent of the whistleblower's substantial contributions to a case. This process is inherently subjective, as there are no quantifiable measures of whether a contribution is substantial. IRS regulations describe the process for calculating awards and identify the positive and negative factors used as criteria for award calculation. These factors, shown in table 4, focus on the merits of the whistleblower information and on whistleblower behavior. The factors are not weighted or exclusive; for example, other factors can be considered in certain circumstances. Generally, IRS first assesses the presence and significance of positive factors to determine whether the award should be increased from the 15 percent minimum to 22 or 30 percent. Then, IRS considers the presence and significance of negative factors to determine whether the award percentage should be decreased. As described earlier, when ODs close out an action related to an investigation, examination, or collection, they document whistleblower contributions to the tax case on the Form 11369. Complex tax cases comprising different tax issues may involve several audit teams (for example, a case could have both international and domestic components); each team provides separate documentation of the whistleblower's contributions to its particular issue. The WO uses all of this information to derive a final list of positive and negative factors. In our review of 7623(b) awards, we found documentation showing two levels of supervisory review in 11 of the 17 awards: a WO supervisor and the WO director reviewed and approved the WO analyst's initial award recommendations. WO guidance does not explicitly require supervisory review prior to the director's concurrence; however, according to a WO official, WO processes entail such review of award recommendations.
It is not clear whether the remaining 6 cases were not reviewed or whether such reviews were simply not documented. The lack of quantifiable criteria for determining whistleblower awards subjects the WO award process to potential inconsistency: different WO analysts may arrive at different award determinations based on the same set of whistleblower contributions identified by the OD. The WO mitigates some potential inconsistency by using broad award categories. According to a WO official, few 7623(b) awards were made each year, so WO analysts generally discussed each one to try to ensure consistency in the award percentage recommendation. Current WO procedures require the WO director to approve all 7623(b) awards and 7623(a) awards over $1 million, which reduces inconsistency. However, should the volume of high-dollar claims reaching the award determination stage increase, as the WO expects, the consistency of award determinations may be affected.

During our case file review of the seventeen 7623(b) awards, we found that the WO made errors in calculating whistleblower awards and communicated incorrect award information to whistleblowers in five cases. The WO identified and corrected three errors by reissuing revised letters to the whistleblowers prior to award payments. In a fourth case, a refund check related to tax withholding was issued and then cancelled. In the fifth case, we found the WO had miscalculated the award and had not taken all relevant assessments, penalties, and interest into account when determining the collected proceeds, resulting in an incorrect payment by IRS. According to WO officials, staff did not use tax assessment software in this case, which resulted in a simple math error. A WO official said the WO has been using tax assessment software to determine award amounts since early 2014, so math errors should no longer occur. Subsequent to our query about this case, the WO verified all 7623(b) awards and identified two other potential errors. For two of these three cases, the WO issued supplemental awards; in the third case, IRS concluded that additional resources were not warranted to pursue the overpayment. In total, these award errors amounted to approximately $100,000.

To mitigate errors in determining collected proceeds, the WO said it has changed its procedures to include a review of the taxpayer account at the time of award payment to verify that all relevant changes in tax assessments, penalties, interest, and other additional amounts have been taken into account. While the award percentage can be determined soon after the OD closes its investigation, examination, or collection action, the award amount depends on the collected proceeds, which are subject to change until there is a final determination of tax: specifically, until the statute of limitations for the taxpayer to claim a refund expires or until the taxpayer and IRS reach an agreement waiving the taxpayer's right to file a claim for refund. On July 27, 2015, the WO e-mailed information about the new procedure to WO staff who process award payments for 7623(b) claims, but it did not disseminate the e-mail to everyone in the WO, including staff who process 7623(a) awards, until we brought the oversight to officials' attention on September 1, 2015. According to WO officials, the WO will be issuing an official procedural change, but as of September 1, 2015, it had not done so.
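The verification step the WO describes amounts to recomputing collected proceeds from the taxpayer account at payment time and comparing the resulting award to the amount staged for payment. A minimal sketch follows, with illustrative figures and deliberately simplified account categories (an actual account transcript is far more detailed):

    def collected_proceeds(account):
        # Sum the collected amounts across simplified account categories.
        return sum(account[k] for k in ("tax", "penalties", "interest", "other"))

    def award_amount(account, pct):
        # 7623(b) awards range from 15 to 30 percent of collected proceeds.
        assert 0.15 <= pct <= 0.30
        return round(collected_proceeds(account) * pct, 2)

    # Illustrative taxpayer account at the time of payment; not actual data.
    account = {"tax": 2_500_000.00, "penalties": 500_000.00,
               "interest": 150_000.00, "other": 0.00}

    expected = award_amount(account, 0.22)   # 22 percent award level
    staged = 676_500.00                      # amount staged for payment
    if abs(expected - staged) > 0.01:
        print(f"discrepancy: expected {expected:,.2f}, staged {staged:,.2f}")
    else:
        print(f"award verified: {expected:,.2f}")

In this example, the staged amount omits part of the interest and penalties, so the check flags a discrepancy, the kind of understated-proceeds error described above.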
Policies and procedures, including those for review and accountability, are an integral part of providing reasonable assurance of the effectiveness and efficiency of an organization's operations. As previously noted, the WO has not only communicated erroneous award information to whistleblowers but has also made errors in calculating award payments. Without additional documented procedures, such as an additional management or supervisory review of preliminary award recommendations and award letters prior to their issuance, as well as verification of the collected proceeds at the time of award payment, IRS will remain vulnerable to communicating incorrect award information and making erroneous award payments to whistleblowers. This is especially important given that IRS expects to make more high-dollar award payments in the near future.

A key communication tool that the Secretary of the Treasury and IRS use to inform Congress and the public about the WO and the 7623(a) and 7623(b) programs is the WO's annual report to Congress. The report is required by the Tax Relief and Health Care Act of 2006, the same legislation that established the WO and the 7623(b) program; the act requires the Secretary of the Treasury to conduct an annual study of the section 7623 programs and to include in the annual report legislative or administrative recommendations for the programs. The report provides IRS with an opportunity to discuss the programs' operations, challenges, outcomes, and statistics. There are no requirements for when the report should be issued or what data it should include.

The timing of the release of the annual reports has raised some concern. The fiscal year 2013 annual report was issued over 6 months after fiscal year 2013 ended, and the fiscal year 2014 report was released over 8 months after the end of fiscal year 2014; the fiscal year 2014 report was not released to the public via www.irs.gov until July 6, 2015. According to WO officials, the WO had compiled the fiscal year 2014 report and data by mid-December 2014, but the report could not be released until it was reviewed within IRS, the Department of the Treasury (Treasury), and the Office of Management and Budget (OMB). According to WO officials, IRS sent the report to Treasury in March 2015, and the report cleared both Treasury and OMB review on June 4, 2015. These reviews, officials said, resulted in editorial but not substantive changes to the report.

The WO updated statistics in both of these reports before issuing them. For example, the fiscal year 2014 report includes data as of May 14, 2015. According to WO officials, the WO provides the updates so the reports reflect the status of claims closer to the reports' issuance, but a drawback to this approach is that the annual reports do not provide a consistent snapshot of what occurred with the whistleblower program in a given fiscal year. As a result, monitoring the program's operations and results from year to year may become increasingly difficult for congressional oversight committees and the whistleblower community. Furthermore, last-minute changes to the reports have introduced discrepancies. For example, in the fiscal year 2014 annual report, the executive summary reflects data through December 2014, while the data tables in the report body reflect the updated May 2015 data.
Such discrepancies, which can confuse readers, could be avoided if the reported data were as of the end of the fiscal year or the reports were issued closer to the end of the fiscal year.

In addition to questions about the annual report's release date, concerns remain regarding the content and presentation of data in the report. We reported in August 2011 that the annual reports' data were limited and, for example, did not contain information on processing times or reasons why claims were rejected. We recommended that IRS include more information and statistics in these reports, which IRS has done in subsequent annual reports. However, based on discussions with program stakeholders (including some in the whistleblower community), additional information could make the annual report more useful. A key standard of internal control is communication: management should ensure there are adequate means of communicating with external stakeholders who may have a significant impact on the agency achieving its goals. Stakeholders, including congressional staff and some in the whistleblower community, noted that the annual report can be difficult to understand because tables are poorly labeled and terminology is not fully explained. For example, annual reports since 2013 have included several data tables on the reasons for claim closures, the number of claims received, the status of open claims, and award payments. However, these tables do not use a common denominator that would enable readers to relate information across tables. One table shows claims received for both the 7623(a) and 7623(b) programs, while another shows the status of 7623(b) claims only. Award payment information is provided by number of award payments, and readers must look to another table and to past reports to see how many claims closed as paid. Further, the tables include whistleblower claim process steps that are not fully explained and can be confusing.

In addition, the annual reports do not provide data on the amount of time it takes to process either 7623(a) or 7623(b) claims from claim submission to award payout. The report's one table on timing provides only a snapshot of how long 7623(b) claims have been in their current status as of the date the snapshot was taken. The table does not show how long an average claim spends in each status, yet readers may incorrectly interpret it as showing the overall average time for claims to move through the process. The table also shows the longest and shortest times for claims in each status. This information can be misleading: readers may interpret the shortest-days column to mean that it is possible for a claim to spend one day in a given status, when in fact the data show only that at least one claim had been in that status for just one day when the snapshot was taken.

IRS attributes some problems with data in the annual reports to E-TRAK, as well as to other spreadsheets used to track whistleblower claims information. As previously described, E-TRAK was not originally designed as a management reporting system and therefore was not developed to easily run certain data reports. The WO has made changes to E-TRAK over time to allow for more robust data reporting, but it still has problems reporting certain data in the annual reports.
For example, the WO has added status fields to better describe and differentiate where claims are in the review process; however, when these fields were added, the WO did not always have the time or resources to update claims already in process to the new status categories. As a result, the annual reports have included a mix of data using the old and new statuses that can be difficult to aggregate and interpret. In addition, the WO is not using the full capabilities of E-TRAK; according to WO officials, some information reported in the annual reports (such as total collected proceeds) is maintained in separate spreadsheets that do not feed into or from E-TRAK data. In the course of providing us with data on award payments, for example, WO officials said they discovered that one such spreadsheet had not been updated to properly reflect the total collected proceeds associated with award payments. As a result, WO officials stated that the total collected proceeds reported were overstated, and awards paid as a percentage of collected proceeds were understated, in all annual reports dating back to at least the fiscal year 2011 report. According to WO officials, the historical data will be corrected in the fiscal year 2015 report, and procedures are in place to ensure subsequent data will be correct. The WO annual report could be more useful to stakeholders and the whistleblower community if the WO provided reliable data showing progress and changes across years in comparable time frames.

IRS is limited in what information it can share with whistleblowers and other stakeholders throughout the whistleblower claim process. Section 6103 of the Internal Revenue Code prohibits the unauthorized disclosure of tax information, and a violation can lead to civil and/or criminal penalties, including imprisonment of up to 5 years. IRS takes section 6103 very seriously; it does not share information with a whistleblower about what may be happening with a claim if that information could reveal taxpayer information, such as confirming that the taxpayer in question is being audited. As a result, the WO's policy on providing whistleblowers with updates on a claim's status is to state only whether the claim is open or closed. Until a rejection or denial letter has been issued or an award payment made, all claims are considered open. Given this policy, it is possible for a whistleblower to hear nothing from the WO for several years.

Because information provided by the WO is so limited, whistleblowers can and do draw conclusions about progress on their claims from the contact they do have with IRS or the taxpayer. Some whistleblowers and whistleblower attorneys we spoke with told us they had been confident they knew what was occurring in their cases because of insider information from taxpayer sources, public information, or inferences from limited communications with the WO, but some were later surprised to have their claims denied. Office of Chief Counsel officials we spoke with stated that whistleblowers may infer from IRS's silence that action may be happening on their claims. However, if the WO were to communicate information sufficient for the whistleblower to infer confidential taxpayer information, the communication could violate section 6103. Whistleblower program stakeholders have voiced concern about this limited communication.
They reported frustrations with the limited information IRS is willing to share with whistleblowers, especially those who have risked their careers or safety to come forward. Some whistleblower attorneys we spoke with stated that they are accepting fewer IRS whistleblowers as clients or have stopped taking on such clients altogether due to their frustration with the program. These concerns include limited communication by IRS while claims are open, limited communication about the process, and limited information about what makes a good claim. As a result, IRS may risk missing out on significant tax revenue when whistleblowers decide not to come forward with information.

Officials in the WO also cited frustration with section 6103 limitations. Officials said that they are well aware of the criticisms of the whistleblower program, including allegations of an anti-whistleblower attitude attributed to the Office of Chief Counsel, but section 6103 restrictions prevent WO and IRS management from directly answering or rebutting some of these criticisms. IRS officials said they cannot comment on pending litigation and cannot reveal facts of specific whistleblower claims to explain certain outcomes. Furthermore, Office of Chief Counsel officials told us that the office's role is to represent IRS and provide fair and impartial legal advice, which does not entail responding to outside criticism on behalf of the agency.

IRS has the authority to disclose some information to whistleblowers under several subsections of section 6103 of the Internal Revenue Code. Specifically, the WO has the authority under section 6103(h)(4) to share information with whistleblowers during the award determination phase and under section 6103(k)(6) to share information with a confidential informant, an authority typically used by CI. Under section 6103(n), IRS also has the authority to enter into a contract with an individual to share information for the purpose of advancing tax administration. However, according to IRS officials, IRS has not yet used section 6103(n) contracts for whistleblowers. Under a section 6103(n) contract, a whistleblower can, for example, work with IRS to help an examiner understand an alleged tax issue. IRS officials have publicly acknowledged the possibility of using section 6103(n) contracts with whistleblowers, as well as the benefits of doing so. For example, in an August 2014 memo to the commissioners of the ODs, the IRS Deputy Commissioner for Services and Enforcement stated that, with appropriate controls, a section 6103(n) contract may be used when disclosure of taxpayer information is necessary to obtain a whistleblower's insights and expertise on complex technical or factual issues. This memo mirrored language from a June 2012 memo. IRS has not yet used this provision for the whistleblower program because, according to the WO and OD officials we spoke with, there has not yet been a case in which an exam team needed it to investigate and build a case. We spoke with several IRS officials from various divisions and offices (including the SB/SE, LB&I, and TE/GE ODs and the Office of Chief Counsel) about the processes for evaluating the need for a section 6103(n) contract and for requesting authorization to enter into one.
According to some of these officials, there are no criteria or guidance for determining the need for such a contract other than the language of section 6103(n), the related regulations authorizing their use, and the Deputy Commissioners' memos. The IRM sections related to the whistleblower programs include one short section on 6103(n) approvals, stating that such contracts may be used when in the best interest of the government and that their purpose must be obtaining services for tax administration. A prior version of the IRM stated that these contracts would be used in rare circumstances. A request for a section 6103(n) contract starts at the examiner level and, if approved by the responsible executive, must be authorized by the OD at a level no lower than the Deputy Commissioner. We did not identify any additional guidance specific to the process for requesting and approving a section 6103(n) contract within two of the three ODs that handle the majority of whistleblower claims. Without clear guidance for all examiners, IRS staff may not be using a resource available to them that could potentially save IRS time and other limited resources. The whistleblowers and attorneys we spoke with argued that such contracts could provide IRS with free help in analyzing complex tax issues, and they provided several examples of how the contracts could benefit IRS. For example, one attorney said a whistleblower who has inside information about the taxpayer could alert IRS if the taxpayer were fabricating documentation during an examination. However, because IRS has never entered into a section 6103(n) contract with a whistleblower, IRS does not know whether it may be missing opportunities to collect additional tax revenue.

The WO developed a fiscal year 2015 communications plan aimed at improving how the office communicates within IRS and with the whistleblower community. According to WO officials, this is the WO's first finalized and documented communications plan. One key message the plan targets to the whistleblower attorney community is how to prepare a complete submission, with documentation, that will assist in processing the claim. The WO published two one-page fact sheets to communicate key information about the program to whistleblowers, covering the whistleblower claim process and how to submit a claim for an award. WO officials told us they developed the content of the fact sheets based on common questions they receive from whistleblowers and their attorneys. Our review of the published fact sheets found that they omit some key information relevant to whistleblowers and provide little new information beyond what is already available on www.irs.gov. For example, the fact sheets do not provide examples or specific explanations of key terms used in the award determination process, particularly denials. The fact sheets (as well as information posted on www.irs.gov and included in the annual report) do not include a full description of the claim review process, such as a detailed step-by-step guide with time frames for each step. Additionally, the fact sheets do not include information about key taxpayer rights and how much time taxpayer appeals may add to the review process, which could be helpful to the whistleblower community's understanding of the WO's processes.
The fact sheets state that claims can take 5 to 7 years to complete the review process but provide no additional information about why the process may take even longer. Other information not included that could be helpful, according to whistleblower attorneys we spoke with, is suggestions for the types of documentation to include with a claim submission. We spoke with whistleblowers and attorneys specializing in IRS whistleblower claims who were unaware of some key processes in the whistleblower claim cycle or of how long the review process can take. Some stated that because the information available from IRS about the program is limited, they rely on other resources, such as discussions with other whistleblower attorneys, for information. For example, according to one whistleblower attorney, several whistleblower attorneys hold periodic teleconferences to compare experiences and share information learned about the program. One attorney who participates in these calls said the group has reached out to the Director of the WO to request that he participate in a call to discuss what attorneys can do to improve the quality of their clients' Form 211 submissions; the Director has neither accepted nor declined the invitation. Attorneys also told us they have looked to whistleblower litigation to learn more about the whistleblower process. One attorney we spoke with stated that he appeals most claims because the discovery phase of litigation is where he learns the most about the program and how to improve future Form 211 submissions.

Program managers should ensure there are adequate means of communicating with external stakeholders who may have a significant impact on the agency achieving its goals. As the administrator of the whistleblower program, IRS should be the primary source of information about it. While the WO is taking steps to improve communication with whistleblowers, it is missing an opportunity to provide information to the whistleblower community that could reduce the burden on the WO and alleviate its workload. The fact sheets do not address some of the key areas of concern for existing and potential whistleblowers, including estimated timelines for steps in the review process, guidance on best practices for submitting a successful Form 211, and specific examples of the various reasons why claims are denied. Publicizing such information could also alleviate burdens on the WO because whistleblowers would better understand the process.

In March 2015, the WO initiated a pilot program of sending annual letters to a sample of whistleblowers with open claims to let them know that their claims remain open. The goals of this program are to communicate proactively with whistleblowers and to reduce the frequency of letters, e-mails, and phone calls from whistleblowers inquiring about the status of their claims. The pilot sample included whistleblowers whose claims had been open for at least 3 years and on which the WO was not expecting to take action in the short term (i.e., 3 months or less). The letters simply stated that the whistleblower's claim remained open; they included no new information and noted that the WO could not share further information about the claim. WO officials told us they will collect benefit and cost information about the pilot for several months before deciding whether to continue it. However, WO officials also told us they did not have a formal plan for assessing these costs and benefits.
A benefit-cost analysis is OMB's recommended technique for formally analyzing government projects; it is used to help agencies determine whether a project is appropriate when compared with alternative options, including the option of not having the project. For such an analysis to be useful to decision makers, it should include, among other elements, a comprehensive enumeration of the different types of benefits and costs needed to identify the full range of the project's outcomes.

According to WO officials, the WO's initial assessment of the program shows that it took 100 staff hours to send pilot letters to approximately 180 whistleblowers, updating them on approximately 360 open claims. WO officials stated that a significant portion of the time spent on the pilot went to confirming data in E-TRAK: WO staff researched each whistleblower claim in the pilot to confirm its status and the whistleblower's mailing address. WO management stated that this process improved the WO's data by identifying claims that were not properly updated in E-TRAK and by identifying incorrect addresses through returned (undeliverable) mail. WO officials estimate that it would take three staff years to send letters to all eligible whistleblowers but expect the time needed for annual letters to decrease as the office realizes labor efficiencies and benefits from the E-TRAK data improved by previous years' letters.

The WO does not have plans to reach out to pilot letter recipients to determine whether they found value in receiving the letter or had other feedback about the pilot program. Officials stated that sending a survey (or otherwise reaching out to these whistleblowers) would be counter to the intent of the program, which is to reduce incoming calls and correspondence. The WO plans instead to monitor the number of phone calls and letters from pilot participants requesting information about their claims to determine whether the letters deterred whistleblowers from contacting the WO. Further, WO officials said they are monitoring press coverage and internet message board discussions about the pilot letters. Internet postings indicate that whistleblowers are frustrated by the lack of information in the letters. Most of the whistleblower attorneys we spoke with (at least one of whom had clients in the pilot sample) did not think the pilot letters were useful because the letters included no meaningful update on the status of a claim. While some of the whistleblowers and attorneys we spoke with said they appreciated IRS's attempt to improve communication, the letters did not communicate anything new, and recipients already knew their claims were open because they had not received award or denial letters. However, the whistleblowers we spoke with (and those who comment on the internet and in the press) may not be fully representative of the whistleblower community's views. Whistleblowers who wish to remain anonymous may not express their opinions of the pilot program for fear of being identified. Without attempting to hear directly from those receiving the letters, the WO is unable to fully identify and measure the benefits of the program, and benefit information may be skewed toward the opinions of vocal whistleblowers.

The protection of a whistleblower's identity is of the utmost importance to the success of the program.
Without such protection, some whistleblowers may not come forward to IRS out of concern or fear for their employment or safety, according to the whistleblowers and representatives we interviewed. As outlined in the IRM and in Treasury regulations, IRS will protect the identity of the whistleblower to the fullest extent permitted by law. In instances where revealing a whistleblower's identity is essential to the pursuit of an examination or investigation, IRS guidance states that it will make every effort to consult with the whistleblower before deciding whether to proceed with the case. Also, IRS neither confirms nor denies the existence of a whistleblower in a case if asked directly by the taxpayer. While the WO has procedures and policies in place to protect whistleblower identities, we found instances of whistleblower identities being at risk of disclosure. In at least one case, a whistleblower's identity was improperly disclosed by a former IRS employee who later pled guilty to several charges, including some related to the handling of the audit pertaining to the whistleblower's claim.

First, in our review of closed claim files, we found that the WO has sent sensitive whistleblower mail to incorrect addresses. The WO maintains whistleblower addresses separately from taxpayer addresses in the event that the whistleblower wishes to have WO mail sent to an address other than the one used to file tax returns. One whistleblower attorney said he recommends having the WO send all of his clients' mail to the attorney's office to minimize the risk of misdirected mail. Even so, our file review found instances in which the WO sent mail directly to whistleblowers even after receiving requests to send all mail to the attorney. Further, at least one of these mailings named the IRS Whistleblower Office in the return address. Such errors could have consequences for whistleblowers, including disclosure of their identities. In addition, whistleblowers' rights could be affected because communication from the WO contains time-sensitive material. For example, the whistleblower's right to respond to the WO on preliminary award recommendations, or to appeal final award determinations with the Tax Court, must be exercised within 30 days of the date the WO sends the notice.

WO officials told us that as of mid-2012, the office no longer identifies the IRS Whistleblower Office on return mail labels, although there is no written policy; officials recognized that whistleblowers were uncomfortable receiving mail clearly labeled as coming from the IRS Whistleblower Office. According to these officials, mail sent from the ICE unit in Ogden, Utah (such as acknowledgement letters) lists only a building identifier, and mail sent from other locations (such as a field office where only one WO analyst works) is sent using the analyst's address and last name only. Given the volume of correspondence the WO sends, as well as the recent hiring of staff and plans for additional hires, a written policy would strengthen the WO's protection of whistleblowers' identities.

The WO requests that whistleblowers update their address with the WO whenever they move, which can be a common occurrence given the several years it can take for a claim to complete the review process. Because communications from the WO to whistleblowers are limited, there are few opportunities to remind whistleblowers of this requirement.
Additionally, as of July 30, 2015, the WO's website did not include clear instructions for whistleblowers needing to change their address. According to WO officials, it is possible that some whistleblowers updated their address with IRS for their personal account and erroneously thought the WO would also update E-TRAK with their new address information. As a matter of protecting whistleblowers, the WO does not default to using the whistleblower's personal address. WO officials told us they had not considered creating a change of address form specific to whistleblowers. Such a form could help alleviate some of the confusion whistleblowers face when they change their addresses with IRS. However, according to WO officials, such a form could also create more administrative work for the WO if it is mistakenly used by non-whistleblowers, for example, if a taxpayer wanting to update a personal account used the whistleblower form instead of the one intended for taxpayer files.

Even in instances when a whistleblower did submit the proper address change request to the WO, E-TRAK was not always updated accordingly. WO officials told us that updating address change information in E-TRAK can be challenging and time consuming. For example, if a whistleblower submits multiple, unrelated Forms 211 alleging noncompliance by multiple taxpayers, the WO opens a claim for each in E-TRAK. If the whistleblower moves or requests that all correspondence be sent to an attorney, each of the whistleblower's files in E-TRAK must be updated to include the new address. WO officials said they were developing an update for E-TRAK to allow for global updates to all files for a particular whistleblower. As of July 28, 2015, an E-TRAK update had been rolled out on a limited basis that allows for bulk updates of addresses, among other things. According to WO officials, the new function in E-TRAK is working well based upon initial testing with a sample of bulk claims.
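The bulk-update pattern the officials describe can be sketched generically. E-TRAK's internals are not public, so the table and column names below (claims, whistleblower_id, mailing_address) are hypothetical; the sketch only illustrates updating every claim record for one whistleblower in a single operation rather than file by file.

```python
import sqlite3

# Generic sketch of the bulk address update that WO officials describe.
# E-TRAK's actual schema is not public: the table and column names here
# (claims, whistleblower_id, mailing_address) are hypothetical.
def bulk_update_address(conn, whistleblower_id, new_address):
    """Update the mailing address on every claim filed by one
    whistleblower, rather than editing each claim record by hand."""
    cur = conn.execute(
        "UPDATE claims SET mailing_address = ? WHERE whistleblower_id = ?",
        (new_address, whistleblower_id),
    )
    conn.commit()
    return cur.rowcount  # number of claim records updated

# Demonstration: two unrelated claims filed by the same whistleblower.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id, whistleblower_id, mailing_address)")
conn.executemany("INSERT INTO claims VALUES (?, ?, ?)",
                 [("C-1", "WB-42", "old address"),
                  ("C-2", "WB-42", "old address")])
print(bulk_update_address(conn, "WB-42", "new address"))  # prints 2
```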
Second, we found instances where whistleblower information was not returned to the WO after an exam was closed. According to the IRM and other WO guidance, information identifying the name or existence of a whistleblower should be protected, especially from the taxpayers the whistleblowers identify on their Form 211 submissions. When the WO sends claims to the ODs for review and examination, the file contains a cover sheet that instructs the OD how to handle whistleblower information. For example, whistleblower information should receive special security protections, and the exam team should never disclose to anyone the name of the whistleblower or the fact that IRS is in possession of whistleblower information. All information provided by the whistleblower is to be returned to the WO at the conclusion of any audit activity with the Form 11369. Additional instructions are included on the Form 11369 reminding the exam team of the proper procedures for safeguarding whistleblower information and returning it to the WO. Despite these procedures, we found instances where whistleblower information (including the name and Social Security number of some whistleblowers) was retained in exam files in breach of IRS policy. As we reported in July 2015, three of the 11 TE/GE closed case files we reviewed that originated from a whistleblower referral included information identifying the whistleblower, such as by name or Social Security number. Additionally, two others included information pointing to the existence of a whistleblower, albeit not the identity. TE/GE officials said that as of September 25, 2015, the five identified cases had been redacted in accordance with the policy outlined in the IRM. WO officials also cited other instances where they became aware of improper storage or retention of whistleblower information and documentation in the ODs. SB/SE officials said that as of September 1, 2015, they were in the process of providing additional training to staff on the issue. LB&I officials said they send examiners a memo directing them to the LB&I website, where there is guidance and training on closing claims and protecting whistleblower information.

Because whistleblower cases have the potential to be challenged through legal proceedings, these protections are important. Without such file segregation, a taxpayer could potentially identify a whistleblower in documents received during discovery for a refund action or other tax-related suit in federal court. These breaches in protecting whistleblowers' identities highlight weaknesses in IRS's internal controls over whistleblower information and records. As part of internal control, management should limit records access to authorized individuals and should assign and maintain accountability for their custody and use. Current procedures call for examiners to retain separate taxpayer examination and whistleblower claim files and to return the whistleblower claim file to the WO at the conclusion of the examination to ensure proper handling of whistleblower information. If reviews of these files are either not occurring or are not effective, whistleblower information and the public's trust in the program are at risk. Additional controls, such as a specific management sign-off on files before they are closed in the OD, could reduce the risk of whistleblower identities being inadvertently disclosed.

Unlike other whistleblower programs, there is no law protecting tax whistleblowers against retaliation from their employers. A tax whistleblower who is discharged, demoted, suspended, threatened, harassed, or otherwise retaliated against by an employer for providing information to IRS has no cause of action to bring a lawsuit in federal court. Other whistleblower award programs (such as those created under the False Claims Act or the Dodd-Frank Act) provide legal recourse against such retaliatory practices. Under those statutes, whistleblowers have a right to file a claim in U.S. district court for relief from retaliatory actions, including reinstatement of their job, back pay, and other damages. Many of the tax whistleblower attorneys we spoke with also provide legal services to individuals bringing suit under the False Claims Act. Those attorneys commented that the lack of statutory relief from retaliation for tax whistleblowers puts these clients at risk of adverse actions by their employers if it becomes known that they are whistleblowers. Some tax whistleblowers we spoke with also noted that they had suffered negative consequences at work, such as being denied a promotion or being demoted, as a result of their whistleblower status. Some level of legal protection may provide additional assurances to potential whistleblowers and could encourage those with high-value inside information about tax noncompliance to come forward. Protecting the identity of the whistleblower is a goal of both IRS and the WO.
IRS officials support statutory retaliation protections for tax whistleblowers. IRS officials told us, however, that such protections should be administered outside of IRS. The Secretary of the Treasury has put forth legislative proposals for retaliation protections for whistleblowers in IRS's fiscal years 2014, 2015, and 2016 Congressional Budget Justifications and has discussed them in recent WO annual reports to Congress. In these proposals, Treasury estimates that the protections would not impose additional costs on IRS. As of August 31, 2015, no bills providing protection to IRS whistleblowers from retaliation by their employers had been introduced in the 114th Congress.

For the whistleblower program to be successful in helping IRS enforce the tax code, encourage voluntary tax compliance, and reduce the tax gap by collecting revenue that could otherwise have gone uncollected, whistleblowers need to have confidence in the program's processes and outcomes. Since we last reported on the 7623(b) program in 2011, IRS has made hundreds of millions of dollars in award payments to whistleblowers and is in the process of evaluating thousands more claims for potential award payments. The WO has also made several improvements in the way it collects and reports information, making the program more transparent. However, IRS and the WO could make additional changes to improve timeliness, ensure the accuracy of award payments, expand communications, and increase protections for whistleblowers.

Identifying and addressing inefficient processes should be a priority for the WO, especially in the wake of recent budget cuts and curbs to the WO's hiring plans. To maximize the benefits of information provided by whistleblowers, the claim review process needs to handle useful information efficiently. Given the volume of claims received, even small increases in efficiency can improve the timeliness of claim reviews and can free up WO resources to clear backlogs of other work, such as issuing denial letters. Whistleblowers also need to have confidence that their awards have been calculated fairly and correctly. While WO officials said they have instituted a new policy to prevent the sort of errors we found in our review, documenting the new policy and disseminating it to all staff in the WO is essential to ensure everyone is aware of and has access to it.

Effective communication with Congress and the public is also critical to the program's success. The annual report to Congress is the WO's opportunity to provide a comprehensive overview of what the WO accomplished and what challenges it faced in the prior fiscal year. Presenting Congress with comprehensive, reliable, and clear data in a timely manner will help Congress provide effective oversight. Further, providing the public with a complete and accurate picture of how the WO and the 7623 programs operate can bolster the public's trust in the program.

Finally, whistleblowers need assurances that their information and identities are protected. Strengthening controls in areas such as mailings and file retention can further prevent accidental disclosures of whistleblower information that could bring them harm. Additional protections against retaliation from employers could further boost whistleblowers' confidence in the program and encourage more insiders with information on significant tax underpayments to come forward.
To further encourage whistleblowers to provide information to IRS about serious tax noncompliance and to protect whistleblowers, Congress should consider legislation that would provide protections for tax whistleblowers against retaliation from their employers.

We recommend the Commissioner of Internal Revenue direct the Whistleblower Office Director to take the following eight actions:

1. Implement a staffing plan for streamlining the intake and initial review process to make more efficient use of staff resources.

2. Record refund statute expiration dates (RSED) in E-TRAK and monitor expiration dates routinely so that the award payment process can start as soon as the claims are eligible for payment.

3. Strengthen the procedures for calculating award amounts and for the issuance of the preliminary award recommendations and award letters to whistleblowers. Such procedures should include, at minimum, a documented process for supervisory review prior to the director's concurrence; verifying collected proceeds prior to an award payment for both the 7623(a) and 7623(b) programs; and reviewing preliminary award recommendation and award letters to the whistleblower prior to their issuance.

4. Provide additional information in the annual report to Congress to better explain the statistics provided and the categories of claim review steps reported. Specifically, the report should include correct, reliable data that reflect only the activities of the fiscal year of the report; describe all status categories and clearly identify claim type in the tables; and include an overall timeliness measure (by providing an average and range) to show how long claims take to go from submission of Form 211 to closure decision.

5. Develop an additional or revised fact sheet about the whistleblower claim process and/or publish additional information on the IRS website. Such information should include an outline of the entire claim review process, with an average time or time range for the various review steps; a description of the key taxpayer rights that a taxpayer may exercise and how much time this may add to a claim's review; examples to illustrate common circumstances that result in denials; and items to include in a Form 211 submission, with suggestions for the types of documentation that are particularly helpful to the WO.

6. Develop a comprehensive plan for evaluating the costs and benefits of the pilot annual status letter program, including obtaining feedback from whistleblowers in the pilot regarding the usefulness of the letter.

7. Establish a process to ensure whistleblower addresses are being properly updated in E-TRAK so that the WO does not send whistleblower mail to outdated or incorrect addresses. This process could include developing a change of address form specific to whistleblowers and including a blank copy of it in every correspondence with whistleblowers, or referencing the importance of updating the WO with any address change in every correspondence with whistleblowers.

8. Formally document a procedure for return address labels for mail originating from the WO stating that external envelopes should not identify the WO as the sender of the correspondence.

We recommend the Commissioner of Internal Revenue direct the Deputy Commissioner for Services and Enforcement to take the following two actions:
1. Develop guidance for examiners in the operating divisions to use in determining whether an Internal Revenue Code section 6103(n) contract with a whistleblower would be beneficial, and outline the steps for requesting such a contract.

2. Strengthen guidance and procedures to ensure whistleblower information is retained only in the proper file locations. Such procedures could include requiring management sign-off on taxpayer file reviews to ensure all whistleblower information has been appropriately segregated and sent back to the WO.

To ensure timely and consistent information to Congress and the public, we recommend the Secretary of the Treasury issue the Whistleblower Office's annual report to Congress no later than January 31 each year, covering the prior fiscal year.

We provided a draft of this report to the Commissioner of Internal Revenue and the Secretary of the Treasury for comment. IRS provided technical comments that were incorporated as appropriate. We received written comments from IRS's Deputy Commissioner for Services and Enforcement, which are reprinted in appendix IV. Treasury did not provide comments.

The Deputy Commissioner agreed with our recommendations and underscored the importance of the whistleblower program as part of IRS's overall enforcement efforts. He stated that IRS is committed to improving the whistleblower claim review process and to implementing the recommendations in our report, as well as recommendations expected from an internal Lean Six Sigma review. IRS has already taken some actions to address the findings and recommendations in our report. For example, the Deputy Commissioner said that the initial review backlog of over 5,000 claims had been reduced to fewer than 500 as of September 2015 and that ARC has brought on six new employees to work on the award determination backlog. IRS is also addressing the denials backlog and looking at ways to streamline the claim review process to provide opportunities for efficiencies, including technology improvements. The Deputy Commissioner also acknowledged the constraints that section 6103 disclosure rules place on the whistleblower program but said IRS will be looking for ways to address communication concerns, including our recommendations related to the pilot annual status letter program and 6103(n) contracts. The Deputy Commissioner also stated that IRS will address the recommendations for the annual report by implementing meaningful changes to the content, format, and timing of the next annual report. Finally, the Deputy Commissioner stated that IRS recognizes the importance of strong internal controls and of updating policies and procedures in a timely manner to ensure proper oversight of the program.

As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. The report will also be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or at mctiguej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.
This report (1) describes the steps, timeframes, and staffing levels in the whistleblower claims process, including the Whistleblower Office's (WO) staffing strategy for improving efficiency, and assesses how whistleblower claims are prioritized within the Internal Revenue Service's (IRS) investigation, examination, and collections workloads; (2) describes the high-dollar 7623(b) whistleblower awards and assesses how the WO determines these awards; (3) evaluates the WO's role in managing the whistleblower claims process and communicating with the whistleblower community; and (4) evaluates how the WO safeguards whistleblower identities and protects whistleblowers from retaliation.

To describe the steps and timeframes in the whistleblower claims process, we reviewed IRS guidance, including IRS's Internal Revenue Manual (IRM), on whistleblower claims processing and IRS management's expectations for timeliness. To assess how often IRS met its timeliness goals, we reviewed IRS data on how long IRS took to complete these steps. We also spoke with IRS officials in the WO and operating divisions (ODs) responsible for completing these steps. We, along with the Treasury Inspector General for Tax Administration (TIGTA), identified several weaknesses in the whistleblower data system, E-TRAK, but determined that the data we used were sufficiently reliable for the purposes of our review. Sensitive to weaknesses in E-TRAK, the WO included procedures to validate the data it compiled for us. The WO official said he checked whether the data generated were consistent with his knowledge of the program and examined outliers for potential data entry errors. He also selected a sample of cases for individual file reviews. We also examined the claims processing documentation in the whistleblower case files for 7623(b) awards to assess timeliness for those claims. For WO staffing, we reviewed the WO's staffing strategy and proposals for additional staff. We also interviewed WO officials concerning implementation of the WO's staffing strategy in light of the reduced fiscal year 2015 budget.

To assess how whistleblower claims are prioritized within IRS's workload, we reviewed IRS guidance, including the IRM, WO guidance to the ODs, and OD guidance to their respective staff. We interviewed officials from the four ODs that process whistleblower claims: Large Business and International (LB&I), Small Business/Self-Employed (SB/SE), Tax Exempt and Government Entities (TE/GE), and Criminal Investigation (CI). We compared the procedures used in investigation, examination, or collection cases with whistleblower claims to the procedures used for cases without whistleblower claims. We found that, generally, the same procedures were used, though whistleblower cases entail additional processes to determine the merit of the whistleblower information, to document contributions, and to ensure confidentiality.

To describe the 7623(b) awards, we reviewed the whistleblower case files for all such high-dollar awards from when the 7623(b) provision was implemented in December 2006 through the end of June 2015. Of the 17 awards, we reviewed the 11 case files located in Ogden, Utah. For the remaining 6 cases, we obtained copies of specific documents from the WO, including Form 211 Application for Award for Original Information, Form 11369 Confidential Evaluation Report on Claim for Award and other documents submitted by the ODs, the internal award recommendation report, the detailed award report, and any counsel memos.
We also received award and collected proceeds data for 7623(a) claims from the WO. WO officials reported they do not use E-TRAK to track collected proceeds and awards but rather use a separate spreadsheet, which is also used for information reporting of individual whistleblower income. The WO verified the collected proceeds from taxpayer accounts. To assess how the WO determines 7623(b) awards, we reviewed section 7623 of the Internal Revenue Code and its implementing regulations as well as the IRM section that specifies the process and criteria for determining whistleblower awards, and we interviewed WO officials. We also reviewed the claim files of all awards paid under section 7623(b) to assess how the WO determined awards, what criteria were used, whether the criteria were applied consistently, and whether awards were correctly calculated.

To evaluate the role of the WO in monitoring whistleblower claims, we reviewed the IRM and WO guidance and interviewed WO and OD officials responsible for whistleblower claims. We also examined the communication between the WO and ODs as documented in the 7623(b) award case files. To evaluate how the WO communicates with the whistleblower community, we reviewed relevant regulations covering confidentiality and disclosure of information issues. We also reviewed IRS's internal and external communications plan, including the WO's two fact sheets for external communication, and interviewed WO staff involved with implementing the strategy. We also interviewed a non-generalizable sample of five whistleblowers and nine whistleblower attorneys for their perspectives on IRS communication with whistleblowers. We selected these whistleblower attorneys based on their participation in our prior report, their varied experiences with the WO, and the recommendations of others within the whistleblower community. Due to confidentiality concerns, we did not reach out directly to any whistleblowers; we spoke only with whistleblowers who contacted us either on their own or through their attorneys. We used qualitative data analysis software to identify common themes and patterns in our interviews with whistleblowers and their attorneys.

To evaluate how the WO safeguards whistleblowers' identities and protects whistleblowers from retaliation, we reviewed IRS guidance, including the IRM, on the steps the WO and ODs take to keep whistleblower identities confidential. We interviewed WO and OD officials about key controls for safeguarding information and the potential weaknesses of such controls. We reviewed the Department of the Treasury's legislative proposals on retaliation protections for tax whistleblowers. We also interviewed whistleblowers and whistleblower attorneys to discuss the usefulness and potential benefits of employer retaliation protections.

We conducted this performance audit from October 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Whistleblower claims can be denied and closed throughout the claim review process. Table 5 summarizes where claim closure decisions occurred from fiscal year 2013 through August 5, 2015. Most claim closures occur at the WO initial review stage.
Small Business/Self-Employed (SB/SE) TEFRA-related claims are those that involve partnerships as defined in the Tax Equity and Fiscal Responsibility Act of 1982 (TEFRA). PARL stands for preliminary award recommendation letter.

In addition to the contact named above, Libby Mixon (Assistant Director), Lisette Baylor, Amy Bowser, Brett Caloia, Bertha Dong, Mackenzie Doss, Robert Gebhart, Danielle N. Novak, Cynthia Saunders, Albert Sim, and James R. White made key contributions to this report.
Tax whistleblowers who report on the underpayment of taxes by others have helped IRS collect almost $2 billion in additional revenue since 2011, when the first high-dollar claim was paid under the expanded program that pays qualifying whistleblowers a minimum of 15 percent of the collected proceeds. These revenues help reduce the estimated $450 billion tax gap, the difference between taxes owed and taxes paid on time. GAO was asked to review several aspects of the whistleblower program. Among other things, this report (1) assesses the WO claim review process, (2) assesses how the WO determines awards, (3) evaluates how the WO communicates with external stakeholders, and (4) evaluates IRS's policies and procedures for protecting whistleblowers. GAO reviewed the files of all 17 awards paid under 26 U.S.C. § 7623(b) through June 30, 2015; reviewed IRS data; reviewed relevant laws and regulations and the WO's policies, procedures, and publications; and interviewed IRS officials, five whistleblowers who independently approached GAO, and nine whistleblower attorneys who were recommended by IRS or other attorneys.

The Internal Revenue Service (IRS) Whistleblower Office (WO) is responsible for processing thousands of tax whistleblower claims annually for two related whistleblower programs: the 7623(a) program for claims of $2 million or less and the 7623(b) program for claims over $2 million. The whistleblower claim review process takes several years to complete, and GAO found that the WO is not using available capabilities to track and monitor key dates in its claim management system. Without available information on key dates related to award review and payments, the WO is unable to assess its performance against timeliness targets and risks unnecessarily delaying award payments.

Between fiscal year 2011 and June 30, 2015, the WO awarded over $315 million to whistleblowers, the bulk of which was for 7623(b) claims, which were first paid in fiscal year 2011, 4 years after the program started. In a review of the 17 paid 7623(b) award claim files, GAO found that the WO made errors in determining some awards, resulting in over- and underpayments totaling approximately $100,000. In response to these errors, IRS began corrective actions, including ensuring that total collected proceeds are verified before award payments are made. However, the WO has not documented this new procedure, putting it at risk of making additional errors in award payments.

The WO's communication with stakeholders, including whistleblowers, is limited due to delayed annual reports to Congress, incomplete data, and limited program information for whistleblowers. Delays in issuing the annual reports have resulted in last-minute revisions that introduced discrepancies and inconsistent reporting periods that preclude year-over-year comparisons. The WO is addressing some data gaps and has published two fact sheets to provide more information to the whistleblower community; however, the fact sheets do not include information on key aspects of the program, such as time ranges for steps in the review process. Until changes are made to the annual report and fact sheets, the utility of these publications is limited.

IRS and the WO take steps to protect whistleblowers and the information they submit, but GAO found gaps in IRS and WO procedures.
For example, the WO did not have documented controls in place for sending mail and at least once sent sensitive mail to an incorrect address with a return address indicating that the letter came from the WO, potentially compromising whistleblowers' identities. The WO said it has since changed how it labels return addresses but has not documented this policy. Further, tax whistleblowers do not have statutory protections against retaliation from employers. IRS and the whistleblower community support such protections, noting that inadequate protections may discourage whistleblowers from coming forward.

Congress should consider providing whistleblowers with legal protections against retaliation from employers. GAO makes ten recommendations to IRS, including tracking key dates, strengthening and documenting procedures for award payments and whistleblower protections, and improving external communications. IRS agreed with GAO's recommendations.
Mutual funds are structured so that each investor in the fund owns shares, which represent a percentage of the fund's investment portfolio, and investors share in the fund's gains, losses, and costs. Mutual fund families offer investors multiple funds from which to choose, each with its own level of risk and investment objective, such as international equities or U.S. government bonds. Investors may usually exchange assets between funds within a fund family at any time. Recent investigations of mutual fund trading by the SEC and some state attorneys general have revealed cases of abusive trading practices. Mutual funds have proven to be a vehicle for abusive trading for a few reasons:

Inefficient pricing of certain funds. Mutual funds typically determine their net asset values once a day, based on the prices of their underlying securities at 4:00 p.m. eastern time. For funds invested in equities that trade on international stock exchanges, the most current prices for those underlying assets may be as much as 15 hours old and thus not reflect more recent information that may affect the prices of those assets. When the prices of underlying securities do not reflect the most current information that is likely to affect their price, opportunities are created for arbitrage, or profitably exploiting price differences of identical or similar financial instruments, usually over a short time period.

Free fund exchanges. Abusive market timing sometimes took place because investors took advantage of the fact that fund families often allow their fund shareholders to purchase, redeem, or exchange funds at no cost for a specific transaction. Normally, investors may redeem their shares on any business day.

Difficulty of identifying trading abuses. In many cases, trading abuses were committed by investors who purchased and redeemed fund shares through intermediaries, who are not required to share information about their clients' transactions with mutual fund companies. Most funds are sold via intermediaries such as broker-dealers, banks, and pension plans. To simplify and reduce the costs of mutual fund transactions, intermediaries collect orders throughout the day and then aggregate all the transactions they receive for a particular fund. Those intermediaries that are licensed as broker-dealers may net, or match, purchase and redemption orders for the same funds among their own clients. In a simplified example, if one investor were to purchase 15 shares of fund A and another investor were to redeem 10 shares of fund A, at the end of the day the intermediary would transmit one order to purchase 5 shares of fund A, the net result of the day's orders (see the sketch following this list). Intermediaries then transmit the net results of aggregate transactions to the mutual fund companies, where intermediaries hold omnibus accounts representing the collective shares of their clients. Mutual fund companies generally do not have information about the identities and specific transactions of the individual investors in intermediaries' omnibus accounts. Intermediaries have contact with their clients, such as defined contribution plan participants and other individual investors ("retail investors"), and control access to information about their trading activity. Because intermediaries do not typically share this information with mutual funds, the fund companies often cannot discern whether these investors are frequently trading in and out of their funds.
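The netting arithmetic in the simplified example above can be sketched in a few lines. This is a minimal illustration of the aggregation described in the text, not a description of any intermediary's actual system; fund names and amounts are hypothetical.

```python
from collections import defaultdict

# Sketch of the netting arithmetic described above. Orders are
# (fund, signed shares): positive for purchases, negative for
# redemptions; each fund's day of orders collapses to one net order.
def net_orders(orders):
    totals = defaultdict(float)
    for fund, shares in orders:
        totals[fund] += shares
    return dict(totals)

# The simplified example from the text: one client purchases 15 shares
# of fund A and another redeems 10, so the intermediary transmits a
# single net purchase of 5 shares.
day = [("Fund A", 15), ("Fund A", -10)]
print(net_orders(day))  # {'Fund A': 5.0}
```

It is exactly this collapsing of many client orders into one omnibus transaction that leaves the fund company unable to see any individual investor's trading pattern.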
Mutual fund intermediaries accept purchase and redemption orders throughout the day and are required to stop accepting trades at 4:00 p.m. eastern time for transactions that are to receive the same day's net asset value. Under SEC rule 22c-1 under the Investment Company Act of 1940, purchase and redemption orders submitted by investors to a fund or fund intermediary before the fund next determines its net asset value (usually at 4:00 p.m.) must be executed at that next-computed net asset value. Presently, intermediaries are allowed to aggregate orders after 4:00 p.m. and submit them as omnibus account transactions later in the evening for settlement to mutual fund companies, either directly or via their transfer agents or the National Securities Clearing Corporation (NSCC), an SEC-registered clearing agency. An intermediary or mutual fund that allows investors to engage in late trading could therefore aggregate orders received both before and after 4:00 p.m. and process them as if they had all arrived before 4:00 p.m. Figure 1 illustrates how orders for mutual fund transactions are transmitted from investors and plan participants to mutual fund companies.
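The forward-pricing rule and the late-trading abuse it is meant to prevent can be sketched as a simple timestamp check. This is a minimal illustration under stated assumptions: times are treated as eastern, and weekends and holidays are ignored.

```python
from datetime import date, datetime, time, timedelta

MARKET_CLOSE = time(16, 0)  # 4:00 p.m. eastern time

def pricing_date(received: datetime) -> date:
    """Forward pricing under rule 22c-1: an order is executed at the
    next net asset value computed after it is received. Weekends and
    holidays are ignored to keep the sketch short."""
    if received.time() <= MARKET_CLOSE:
        return received.date()                  # same day's 4:00 p.m. NAV
    return received.date() + timedelta(days=1)  # the next day's NAV

# Late trading, in effect, prices an order like the 5:30 p.m. one below
# as if it had arrived before the close, capturing news released after
# 4:00 p.m. at a net asset value that does not yet reflect it.
print(pricing_date(datetime(2004, 6, 1, 15, 59)))  # 2004-06-01
print(pricing_date(datetime(2004, 6, 1, 17, 30)))  # 2004-06-02
```

Because intermediaries transmit only aggregated omnibus orders in the evening, the fund company has no way to verify that every underlying order actually arrived before the close.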
Most employers that sponsor defined contribution plans contract out the various administrative tasks of plan record keeping to companies that have expertise in the administration of plans or investments. Pension plan record keepers track day-to-day transactions for each plan participant's account. The record keeper is responsible for transactions such as crediting accounts with employee and employer contributions, processing changes in participant-directed investment allocations, updating account values (usually each business day) to reflect changes in the values of mutual fund shares held by each plan participant, and acting as a mutual fund intermediary when participants make exchanges between funds. When a plan participant sends the record keeper a request for a transaction, such as for a loan, the record keeper must determine whether the request can be approved in accordance with federal tax and pension laws and the rules of the company's pension plan. In addition, record keepers may function as the primary source of plan information and customer service for plan participants.

Pension plan sponsors often hire a mutual fund company or a plan record keeper to administer their defined contribution plans. Plans administered by a record keeper frequently offer an "open-architecture plan" that permits participants to invest in mutual funds offered by a variety of mutual fund companies. The record keeper itself may be one of these companies, insofar as some companies that are primarily record keepers also offer their own proprietary mutual funds. Plans administered by a mutual fund provider will typically include investment choices offered by that provider and may or may not offer funds of other mutual fund companies. In recent years, open-architecture plans have become more common among defined contribution plans.

Mutual funds are subject to SEC registration and regulation and to numerous requirements established for the protection of investors. Mutual funds are regulated primarily under the Investment Company Act of 1940 and the rules and registration forms adopted under that act. The 1940 act grants SEC broad discretionary powers to keep the act current with the constantly changing financial services industry environment in which mutual funds and other investment companies operate. The primary mission of the SEC is to protect investors, including pension plan participants investing in securities markets, and to maintain the integrity of the securities markets through extensive disclosure, enforcement, and education. In addition to regulating mutual funds, SEC also regulates some of the intermediaries that act as brokers of mutual funds, such as retail broker-dealers and certain pension plan record keepers. However, fund intermediaries that are not registered as broker-dealers are outside SEC's jurisdiction. For example, insurance companies are regulated by state authorities, banks are regulated by the Office of the Comptroller of the Currency (OCC) and other bank regulators, and pension plan administrators are regulated by DOL. These regulators are required to perform a number of other oversight functions (for example, OCC examines the safety and soundness of certain types of banks); identifying infractions of SEC trading regulations is therefore not the focus of their regulatory activity.

Pursuant to the Employee Retirement Income Security Act of 1974 (ERISA), DOL enforces the reporting and disclosure provisions and fiduciary responsibility standards of private employer-sponsored pension plans. While ERISA does not provide specific guidance regarding the steps a plan fiduciary may or should take with regard to late trading and market timing, ERISA established the broad fiduciary requirements relating to private pension plans and was designed to protect the rights of plan participants and their beneficiaries. ERISA Section 401(b)(1) of Title I provides that a plan that invests in a security issued by an investment company registered under the Investment Company Act of 1940, such as mutual fund shares, is investing only in the "security" or shares of that investment company and not in the underlying assets of the investment company. The asset of the plan is the issued security, not any of the assets held by the investment company. Therefore, under ERISA, DOL does not regulate the activities of an investment company.

The cost to long-term mutual fund investors of late trading and market timing is unclear; however, it does not appear that these trading abuses affected pension plan participants differently from other long-term investors. While the costs of individual instances of late trading and market timing may not have a noticeable effect on the value of fund shares held by long-term investors, the cumulative effect of abusive trading may have been significant. Studies of late trading and market timing have yielded varying estimates of their cost to long-term fund investors. The extent of abusive trading appears to have varied among funds, in part because some funds went to greater lengths than others to try to prevent trading abuses. Ultimately, the effect of late trading and market timing on the savings of retirement plan participants and other long-term fund shareholders is a function of which funds they invested in and for how long. When some investors are allowed to frequently buy into a fund to benefit from its short-term increases in value and sell shares to avoid its decreases in value, there is a three-fold negative impact on the fund's long-term shareholders:
Costs increase. Abusive trading generates greater transaction costs because fund managers must buy or sell shares of the underlying securities in the fund's portfolio more frequently to match demand for fund shares.

Investment returns usually decline over time. Abusive trading usually results in lower investment returns over the long term when fund managers hold a greater percentage of the fund's assets in cash. Fund managers often increase the percentage of fund assets held in cash in order to accommodate short-term traders' redemptions of shares without having to engage in cost-generating purchases and sales of the fund's underlying securities. Over the long term, investments in cash have yielded lower investment returns than stocks and bonds.

Gains are diluted. If short-term traders purchase fund shares and redeem them before their money can be invested in the fund's portfolio, they share in increases in the fund's value, leaving long-term shareholders with a smaller share of these gains, a dilution of fund gains. Conversely, short-term traders can often avoid losses by redeeming fund shares before their value decreases, so long-term investors share in a higher proportion of the fund's decrease in value. Figure 2 demonstrates the dilution effect of abusive short-term trading on long-term shareholders (a numeric sketch of the same arithmetic follows below).

While a short-term trader can earn large returns from late trading or market timing, the costs of such trades are generally spread across a large population of shareholders and therefore have a relatively small effect on each individual investor. As shown in the example in figure 2, market timing reduces the net asset value of a share from $10.90 to $10.89, or less than 0.1 percent. However, abusive short-term trading on a large scale and over a period of years could cost long-term shareholders, such as plan participants, more significant percentages of their assets.

Efforts to quantify the total extent and cost of late trading and market timing have yielded varying results. One academic study found evidence of late trading in 15 of a sample of 50 international funds and in 12 of a sample of 96 domestic equity funds between 1998 and 2001. On the basis of these samples, the study estimates that during 2001, late trading diluted the gains of the average long-term shareholder in international and domestic equity funds by 0.05 and 0.006 percent, respectively. We were unable to identify other studies on the extent of late trading, though representatives of a mutual fund trade association whom we spoke with believe these estimates are too high. Market timing also appears to have been most prevalent in international equity funds, according to both academic studies and representatives of mutual fund companies we spoke with. Studies show that the most profitable market timing strategies involved trading in and out of international equity funds. Other funds that were used for market timing were small and midsize company domestic equity funds and some types of bond funds. According to one study, market timing has harmed long-term shareholders more than late trading. Among the seven studies of market timing we reviewed, estimates of its cost ranged from averages of 0.32 to 2.3 percent of assets per year in international equity funds. The differences in the estimated costs of market timing depend on which data and methodology the researchers used.
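The dilution arithmetic referenced in figure 2 can be sketched with illustrative numbers. Figure 2 itself is not reproduced here, so the dollar amounts below are assumptions chosen only to produce a dilution of the same order as the $10.90-to-$10.89 example cited in the text: a timer's cash buys in at a stale price and then shares in a gain it never earned.

```python
# Sketch of the dilution arithmetic illustrated in figure 2. All dollar
# figures below are illustrative assumptions chosen only to echo the
# roughly one-cent-per-share dilution described in the text.
fund_assets = 10_000_000.0   # portfolio value before an overnight gain
fund_shares = 1_000_000.0    # shares held by long-term investors
stale_nav = fund_assets / fund_shares      # $10.00 per share
gain = 0.09                                # 9 percent overnight gain

timer_cash = 112_000.0                     # timer buys at the stale NAV
timer_shares = timer_cash / stale_nav      # timer's cash misses the gain

nav_without_timer = fund_assets * (1 + gain) / fund_shares
nav_with_timer = ((fund_assets * (1 + gain) + timer_cash)
                  / (fund_shares + timer_shares))

print(f"NAV without timer: ${nav_without_timer:.2f}")  # $10.90
print(f"NAV with timer:    ${nav_with_timer:.2f}")     # $10.89
```

A one-cent loss per share is invisible to any single investor, which is why the studies cited above measure the harm as fractions of a percent of assets per year rather than as losses on individual trades.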
Such variation in the estimates also indicates the difficulty of definitively calculating the extent of mutual fund trading abuses and their effect on long-term investors. The extent of late trading and market timing is very difficult to measure because these practices can be hard to identify. Many cases of late trading occurred at the fund intermediary level, when orders were illegally accepted after 4:00 p.m. and given the same day's price by being combined with orders accepted before 4:00 p.m. Among 34 brokerage firms surveyed by SEC, including some of the largest in the nation, more than 25 percent reported instances of illegal late trading at their firms. However, one SEC official told us that SEC views these survey results as conservative estimates of the extent of late trading, particularly because there are numerous intermediaries that sell mutual funds, including a significant percentage that are not registered with and regulated by SEC. In one case of late trading, SEC brought charges against Security Trust Corporation, a national bank association, for allowing Canary Capital Partners, a hedge fund, to submit trades after the close of the market and receive same-day pricing. Security Trust then aggregated these illegal transactions with legitimate retirement plan transactions and submitted orders after 4:00 p.m. that appeared legal to fund companies. Security Trust Corporation has since been closed by federal regulators. According to SEC officials, audits of past transactions cannot identify many instances of late trading because late traders often submitted orders before 4:00 p.m. and then were allowed to cancel those orders after the market closed. Canceled orders were then destroyed, which left no record of the illegal trading.

Market timing can also be difficult to identify because, among other reasons, the omnibus accounts of intermediaries obscure individual account transactions, so mutual fund companies cannot identify how frequently an individual investor is exchanging money between funds. SEC has alleged that one intermediary's methods included (1) forming and registering two affiliated broker-dealers through which the intermediary could continue to engage in market timing without detection, (2) changing account numbers for blocked customer accounts, (3) using alternative registered representative numbers for registered representatives who were blocked from trading by mutual funds, (4) using different branch identification numbers, (5) switching clearing firms, and (6) suggesting that customers use third-party tax identification numbers or Social Security numbers to disguise their identities.

Retirement plan participants would have been affected by late trading and market timing just like other long-term investors if they were shareholders in funds where these trading abuses occurred. Since trading abuses appear to have been concentrated in international equity funds, plan participants who invested in such funds would likely have been affected by late trading and market timing. However, even among investors in international equity funds, some were probably affected more than others because some mutual funds successfully reduced market timing by employing tools such as fair value pricing, redemption fees, and other penalties against frequent traders.
According to news reports and SEC officials, some plan sponsors have responded to mutual fund trading abuses by reassessing the investment options they offer to their plan participants and, in some cases, have removed implicated funds from their offerings. Nonetheless, some funds that tried to stop market timing could still have been used by abusive short-term traders who traded via intermediaries. Most of the assets of plan participants were not affected by market timing in international equity funds because, as shown in figure 3, less than 10 percent of all plan assets were invested in international equity funds. According to a study by the Investment Company Institute, international equity funds make up less than 10 percent of total defined contribution assets in mutual funds. However, according to two of the nation's largest pension plan record keepers, at least 19 percent of the plan participants for whom they keep records invest at least part of their retirement savings in international equity funds. Furthermore, any individual investor may allocate his or her plan assets very differently from the average.

Market timing can also harm plan participants if a plan sponsor fails or refuses to limit a participant's market timing. In pension plans, even where a fund company becomes aware of a participant who is engaged in harmful market timing, the fund's ability to restrict only that participant, and not the entire plan, may be limited because the shares of all participants are held in the record keeper's omnibus account. If a plan sponsor fails or refuses to act to stop a participant engaged in market timing, a fund has few means with which to stop the market timer, except perhaps restricting access to the fund for all the plan's participants. According to representatives of one mutual fund trade association we spoke with, plan sponsors have sometimes been reluctant to impose redemption fees or trading restrictions on plan participants for fear that they may be sued for fiduciary violations.

SEC and DOL have each taken steps to address abusive trading in mutual funds, and SEC has proposed regulations that aim to eradicate late trading and curb market timing. SEC has been investigating and has settled several cases of abusive trading in mutual funds and has recently adopted new mutual fund disclosure requirements. DOL, meanwhile, is conducting its own investigations and has issued guidance to pension plan sponsors that covers, among other things, their responsibilities to ensure that they are offering prudent investment options to plan participants. SEC's proposed regulations on late trading would amend the rule that governs how mutual funds price and receive orders for share purchases and redemptions. To curb market timing, a separate SEC proposal would require mutual funds to impose a 2-percent redemption fee on the proceeds of shares redeemed within 5 business days of purchase.

SEC has already settled some cases of late trading and market timing abuses with mutual fund companies, hedge funds, and brokers. Though market timing is not illegal, SEC has charged fund companies with defrauding investors by not enforcing their stated policies, as written in their prospectuses, of discouraging or prohibiting market timing. Some institutions have been fined hundreds of millions of dollars, and part of this money will be returned to long-term fund shareholders who lost money as a result of these abusive trading practices.
Furthermore, SEC has permanently barred some of the individuals at these companies from future work with investment companies and is seeking disgorgement and civil penalties against them. SEC officials told us that more enforcement actions are pending.

In addition to its enforcement actions, SEC has issued guidance and new regulations that address the negative impact of market timing on long-term shareholders. In 2002, SEC issued guidance stating that mutual funds may delay exchanges of shares from one fund to another in order to combat market timing. Permitting delayed exchanges could deter market timing, since market timers seek to effect transactions on a specific day to take advantage of perceived market conditions. SEC also issued new regulations in April 2004 that require mutual funds to disclose the following information in their prospectuses: the risks to shareholders of frequent purchases and redemptions of shares; their policies and procedures regarding frequent purchases and redemptions; the circumstances under which they will use fair value pricing and the effects of using fair value pricing; and their policies and procedures with respect to the disclosure of their portfolio securities, along with any ongoing arrangements to make available information about their portfolio securities. Mutual funds must comply with these new regulations by December 5, 2004.

Separate from SEC's activities, DOL has begun investigating possible fiduciary violations involving some large investment companies, including those that sponsor mutual funds, as well as intermediaries and plan fiduciaries. More specifically, DOL is determining whether any of ERISA's fiduciary provisions were violated by offering investments in funds that allowed late trading or market timing, and whether employee benefit plans incurred any financial losses as a result. Among other things, DOL expects to address whether plan fiduciaries used pension plan accounts to facilitate late trading or market timing by others, whether pension plans incurred losses as a result of fiduciaries knowingly directing investments into mutual funds that permitted late trading or market timing, and whether plan fiduciaries appropriately monitored plan provisions regarding market timing.

DOL also issued a statement in February 2004 suggesting that plan fiduciaries review their relationships with mutual funds and other investment companies to ensure that they are meeting their responsibilities of acting reasonably, prudently, and solely in the interest of plan participants. According to DOL, for those mutual funds under investigation for trading abuses, fiduciaries should consider the nature of the alleged abuses, the potential economic impact of those abuses on the plan's investments, the steps taken by the fund to limit the potential for such abuses in the future, and any remedial action taken or contemplated to make investors whole. For funds that are not under investigation, DOL suggested that fiduciaries review whether the funds have procedures and safeguards in place to limit their vulnerability to trading abuses. The DOL guidance also explains that if a plan offers mutual funds or similar investments that impose reasonable redemption fees on sales of their shares, this would not, in and of itself, affect the availability of relief to the plan sponsor under Section 404(c) of ERISA.
The guidance adds that reasonable plan or investment fund limits on the number of times a participant can move in and out of a particular investment within a particular period would not run afoul of the requirements of Section 404(c). However, the terms and conditions of the plan regarding the imposition of fees and trading restrictions must be clearly disclosed to the plan's participants and beneficiaries. Representatives of mutual fund companies and plan sponsors told us that additional guidance on what actions plan sponsors may take to prevent market timing by plan participants, without losing relief under ERISA Section 404(c), would be helpful.

In addition to adopting new mutual fund disclosure requirements, SEC has proposed regulations to address late trading and market timing abuses. In December 2003, SEC proposed amending the rule that governs how mutual funds price and receive orders for share purchases or sales. Since many of the cases of late trading involved orders submitted through intermediaries, including banks and pension plans not regulated by SEC, the proposed amendments (referred to later in this report as the "Hard 4" proposal) would require that orders to purchase or redeem mutual fund shares be received by a fund, its transfer agent, or a registered clearing agency before the time of pricing (usually 4:00 p.m. eastern time). SEC officials explained to us that, given their resources, they cannot examine all intermediaries that accept order information for mutual fund shares. Thus, to lower the risk of additional late trading abuses, it would be necessary to reduce the number of fund intermediaries with the authority to verify the time that orders are received.

To stem market timing, SEC proposed a new rule in March 2004 that would require mutual funds to impose a 2-percent redemption fee on the proceeds of shares redeemed within 5 business days of purchase (a sketch of the fee mechanics follows below). According to the proposal, the proceeds from the redemption fees would be retained by the fund and would become part of the total assets managed on behalf of the fund's shareholders. The mandatory redemption fee is intended to serve two purposes: (1) to reimburse a fund for the approximate costs of short-term trading in fund shares, and (2) to discourage short-term trading by reducing its profitability. SEC is aware that the redemption fee by itself is inadequate to eliminate all profitable market-timing opportunities; therefore, fund companies may use additional measures to try to prevent market timing. In addition, the proposal would require all fund intermediaries, including plan record keepers, to share the details of each client's transactions with mutual fund companies. On at least a weekly basis, intermediaries would be required to provide mutual funds with purchase and redemption information for each shareholder within an omnibus account to enable the fund to detect market timers and ensure that redemption fees are properly assessed. Presently, intermediaries that are not under SEC's jurisdiction cannot be required by SEC to share individual account information with mutual fund companies. The proposal also allows for certain exceptions to the application of the redemption fee, such as for unanticipated financial emergencies, and for redemptions of $2,500 or less if the fund chooses to adopt such a policy.

These proposals are part of an open regulatory process, and according to SEC officials, SEC staff have reviewed over 1,400 comment letters and met with various interested parties.
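To make the proposed fee mechanics concrete, the sketch below applies the 2-percent fee and 5-business-day window described above. It is a simplified illustration, not the rule text: holiday handling and the emergency exception are omitted, and the $2,500 de minimis threshold is modeled as an optional parameter, as the proposal leaves that exception to the fund's discretion.

```python
from datetime import date, timedelta

FEE_RATE = 0.02       # proposed 2-percent redemption fee
HOLDING_DAYS = 5      # business-day holding period in the proposal

def business_days_between(start, end):
    """Count business days from start to end, ignoring holidays."""
    count, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday through Friday
            count += 1
    return count

def redemption_fee(purchased, redeemed, proceeds, de_minimis=2500.0):
    """Sketch of the proposed rule: 2 percent of proceeds on shares
    redeemed within 5 business days of purchase. The $2,500 de minimis
    exception is optional for funds; emergency exceptions are omitted."""
    if de_minimis is not None and proceeds <= de_minimis:
        return 0.0
    if business_days_between(purchased, redeemed) <= HOLDING_DAYS:
        return proceeds * FEE_RATE
    return 0.0

# A $50,000 round trip over three business days incurs a $1,000 fee.
print(redemption_fee(date(2004, 6, 1), date(2004, 6, 4), 50_000.0))  # 1000.0
```

The weekly shareholder-level reporting requirement exists precisely because only the intermediary, not the fund, can match a redemption back to the purchase date needed for this calculation.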
Based on this feedback, SEC officials are considering modifications to the proposals and will ultimately recommend a final set of proposals to the Commissioners of the SEC. SEC has also proposed new regulations that address mutual fund boards' independence and effectiveness, fund adviser compensation of broker-dealers that sell fund shares, and mutual fund ethics standards. SEC officials told us that these rules and others should help reduce abusive practices, such as late trading and market timing, throughout the mutual fund industry. DOL is not involved in drafting the proposed late-trading and market-timing regulations because it does not regulate mutual funds. However, it is considering how the regulations would affect pension plans and anticipates providing interpretive assistance to plan sponsors and record keepers, as necessary, regarding any ERISA issues in implementing SEC's final rules.

SEC's proposed regulations on late trading and market timing would have similar effects on pension plan participants and other investors, but as initially written they would also have some effects unique to defined contribution plan participants. To the extent that the proposals would result in a cessation of late trading and a reduction in market timing, plan participants, like other mutual fund investors, would benefit. However, SEC's proposed regulations are expected to create additional costs for mutual fund companies and fund intermediaries, including plan record keepers, and many of these costs are likely to be passed on to investors, plan participants, and plan sponsors. Plan participants could be distinctly affected by the late trading proposal because it creates potential complications for the processing of certain transactions unique to defined contribution plans, such as loans. In addition, plan participants may pay fees intended to deter short-term trading, including market timing, even on certain transactions where there is clearly no intent to engage in abusive trading.

To the extent that the SEC proposals would result in a cessation of late trading and a reduction in market timing, plan participants, like other mutual fund investors, would benefit. SEC officials told us that the Hard 4 proposal would virtually eliminate the possibility of late trading through mutual fund intermediaries. Participants could also benefit from the redemption fee proposal, as many short-term traders would likely be deterred from abusive market timing that imposes costs on long-term investors. Furthermore, those who engage in market timing would repay to long-term shareholders at least part of the costs they impose on them. According to SEC officials, pension plan participants and other fund investors would also benefit from increased confidence in the fairness of the securities markets, knowing that these two types of abusive trading practices were being minimized. Market fairness and the promotion of investor confidence have long been goals of the SEC. The persistence of late trading and market timing could undermine the integrity of, and investor confidence in, the securities markets in general and mutual funds in particular. SEC officials told us that failing to act quickly to address these abuses could have resulted in investors withdrawing mutual fund investments and either looking for other investment options or withdrawing from securities markets entirely.
SEC's proposed regulations on late trading and market timing are expected to create additional costs for mutual funds and fund intermediaries, including pension plan record keepers, which would likely result in increased costs for all mutual fund investors, plan participants, and plan sponsors. SEC's late trading proposal could force intermediaries to require their clients, including pension plan participants, to submit their orders for mutual fund transactions prior to 4:00 p.m. eastern time. Pension plan administrators anticipate that retirement plan participants who submit orders through intermediaries would face cutoffs between 12:00 p.m. and 2:00 p.m. eastern time in order to allow pension plan record keepers time to process purchase and redemption orders before submitting them to the fund, its transfer agent, or the National Securities Clearing Corporation (NSCC). This earlier deadline for submitting fund transaction orders to plan record keepers should not significantly affect payroll purchases of fund shares because these transactions are a function of the participant's payroll schedule and are not usually timed investment decisions made by the plan participant; the change in price from one day to the next could therefore work either to the benefit or to the detriment of plan participants as they purchase or redeem shares at higher or lower prices. According to representatives of two large mutual fund companies that we spoke with, payroll transactions represent about 95 percent of the defined contribution plan transactions that they process. However, some pension plan administrators told us that in some cases of nonpayroll transactions, they may not be able to process any purchase and redemption requests the same day that orders are received.

SEC officials told us that implementation of computer system upgrades and modifications to business processes would likely result in intermediaries ultimately being able to accept orders until a time very shortly before 4:00 p.m. eastern time. However, some intermediaries told us that system upgrades and the communication of information to investors, plan participants, and plan sponsors about new requirements for submitting orders for mutual fund transactions could represent a significant expense. Some pension plan record keepers told us that adoption of the Hard 4 proposal would put intermediaries at a competitive disadvantage if they were unable to modify their systems so that plan participants could submit orders until 4:00 p.m. (or just before then). They argued that investors, including plan participants, have grown accustomed to ever-increasing rates of change in global financial markets and that plan participants want the flexibility to move their money at a moment's notice, without having to wait a day for a transaction to be completed. Indeed, on some of the stock market's most volatile days there have been increases in the percentage of plan participants who exchange money between funds. As a result of this demand, plan record keepers fear that they would not be able to compete with mutual fund companies, which offer their own funds and record-keeping services to pension plans and could therefore allow plan participants to submit orders until 4:00 p.m. Officials of one mutual fund company that also serves as a record keeper expressed concern that plan participants may demand alternative investment products to mutual funds if they could no longer place orders for fund transactions until the market closing time.
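The cutoff problem described above can be sketched in a few lines: a record keeper that needs, say, two hours to process orders must stop accepting them well before the 4:00 p.m. pricing deadline. The cutoff time, function, and dates below are illustrative assumptions, not part of the proposal.

```python
from datetime import datetime, time

MARKET_CLOSE = time(16, 0)     # 4:00 p.m. eastern, the proposed pricing deadline
INTERNAL_CUTOFF = time(14, 0)  # hypothetical record-keeper cutoff (2:00 p.m.)

def pricing_day(order_received: datetime) -> str:
    """Decide which day's net asset value an order would receive.

    Under the Hard 4 proposal, only orders at the fund, its transfer
    agent, or a registered clearing agency by 4:00 p.m. get same-day
    pricing; a record keeper needing two hours to process orders would
    have to stop accepting them at 2:00 p.m.
    """
    if order_received.time() <= INTERNAL_CUTOFF:
        return "same-day price"
    return "next-day price"

print(pricing_day(datetime(2004, 6, 7, 13, 45)))  # same-day price
print(pricing_day(datetime(2004, 6, 7, 15, 30)))  # next-day price
```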
However, according to information from two of the nation's largest mutual fund companies, the vast majority of plan participants do not make more than one exchange between mutual funds during the course of a year.

The redemption fee proposal would also create new costs for mutual funds and their intermediaries. SEC has noted that the costs to a fund's transfer agent to store the shareholder information and track the trading activity may be significant and that those costs may ultimately be passed on to investors. In some cases, the transfer agent would have to upgrade its record-keeping systems. Commenting on the information-sharing requirement in the proposed redemption fee rule, some plan record keepers that we spoke with explained that it would be inefficient to have transaction information of individual investors stored by both plan record keepers and fund transfer agents. Representatives of one mutual fund company told us that record keeping would be most efficient if intermediaries were only required to share transaction information about individual investors upon the request of mutual funds. The redemption fee proposal would also increase costs for fund intermediaries that would have to upgrade any systems currently unable either to transmit individual shareholder data to mutual fund companies or to track transaction patterns of individual accountholders. Many intermediaries have stated that the costs of these technology upgrades would be substantial and would likely be passed on to mutual fund shareholders who invest through intermediaries, including pension plan participants. However, estimates of these costs depend to some extent on the flexibility of the systems that intermediaries currently employ.

Some fund intermediaries have argued that SEC should establish a uniform schedule for redemption fees in order to keep the cost of tracking the transactions of individual investors and assessing redemption fees to a minimum. Mutual fund company representatives, however, have told us that because funds vary in characteristics such as investment objective and investor turnover, funds have different needs for cost recovery and market timing deterrence. For example, an international fund might need higher redemption fee amounts and longer holding periods to discourage market timing. Therefore, they say, mutual fund directors should have the flexibility to set redemption fee terms that they feel would best achieve these goals and protect long-term investors.

Pension plan record keepers note that SEC's Hard 4 proposal would present complications for the processing of certain transactions that are unique to pension plans, such as participant loans, which are held by about 20 percent of 401(k) plan participants, according to three large plan record keepers. Record keepers told us that to process a loan request, a plan record keeper must know the value of the mutual fund shares held by the plan participant to determine how many shares must be redeemed, and from which funds, to meet the participant's request and comply with various rules governing loan transactions. Currently, plan record keepers process loan transactions after the net asset values of mutual fund shares have been calculated, which is after 4:00 p.m., and then submit a redemption order for a specified number of dollars or fund shares or a percentage of the participant's total plan assets.
Under the Hard 4 proposal, record keepers would have to transmit redemption orders for loan transactions before they could know the net asset value of a participant's shares in different funds; therefore, according to record keepers, they would likely use the prior day's share prices to estimate either the number of shares to be redeemed or the amount of money to be withdrawn from each fund owned by the participant. Because mutual fund share prices usually change from one day to the next, the submission of a redemption order could result in either the participant receiving more or less money than requested or a violation of plan rules that specify the order in which shares may be redeemed. For example, many plans require participants to first redeem those mutual fund shares that were purchased with their own contributions before redeeming shares that were purchased with employer contributions. Figures 4 and 5 demonstrate the potential problems that could arise with loan transactions were the Hard 4 regulations to be adopted as originally proposed.

[Figures 4 and 5, not reproduced here, show loan transactions in which the share price changes between the prior-day estimate and the actual redemption, so that either a plan rule is violated (not all shares in a fund are sold) or the participant receives less than requested (not enough shares are available because of a drop in price).]

Despite SEC's proposed measures to limit the application of the redemption fee, the redemption fee proposal may in some circumstances penalize plan participants for transactions that could not be construed as attempts to engage in market timing. Plan participants do not control the timing of payroll purchases of fund shares, since plan sponsors and record keepers process these transactions. The purchase of fund shares in a participant's plan does not necessarily occur on the same day that an employee receives a payroll deposit in the bank, and therefore plan participants may not know when additional fund shares are purchased on their behalf. Occasionally plan participants rebalance the allocation of their plan assets among their different mutual funds, transfer retirement savings from one fund to another, or take a loan from their plan. In some cases, these participant-directed transactions may occur within 5 days of a payroll purchase of fund shares, and in some of these cases the plan participant would pay a redemption fee of 2 percent on the most recent payroll purchase of fund shares, despite the fact that there was no intent to engage in abusive market timing. SEC's proposed rule attempts to address these situations by limiting the application of the redemption fee: it (1) mandates a "first-in, first-out" method for determining redemption fees, (2) allows funds not to collect redemption fees on proceeds of $2,500 or less (the de minimis exception), and (3) limits the rule's holding period to 5 days, thereby targeting the most egregious circumstances of excessive trading. Nonetheless, some funds may choose not to apply the de minimis exception; therefore, in some cases, participants could still end up paying redemption fees. Usually, a 2-percent redemption fee on the last payroll purchase of fund shares would not amount to more than a few dollars.
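Returning to the loan-processing problem that figures 4 and 5 illustrate, the following sketch shows how estimating a redemption from the prior day's net asset value can leave a participant with more or less than the requested loan amount once the actual price is set. The numbers are invented for illustration and are not taken from the figures.

```python
def estimate_loan_redemption(loan_amount: float, prior_day_nav: float,
                             shares_held: float, actual_nav: float):
    """Estimate shares to redeem at yesterday's price; settle at today's.

    Returns (shares_redeemed, proceeds, excess_or_shortfall).
    """
    shares_needed = loan_amount / prior_day_nav        # estimate from stale price
    shares_redeemed = min(shares_needed, shares_held)  # cannot sell more than held
    proceeds = shares_redeemed * actual_nav            # priced after market close
    return shares_redeemed, proceeds, proceeds - loan_amount

# Participant requests a $5,000 loan; the fund closed at $20.00 yesterday.
# If the price falls to $19.75, the participant receives less than requested.
print(estimate_loan_redemption(5_000, 20.00, 400, 19.75))  # shortfall of $62.50
# If the price rises to $20.25, more money than requested is withdrawn.
print(estimate_loan_redemption(5_000, 20.00, 400, 20.25))  # excess of $62.50
```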
With respect to the redemption fee, however, plan sponsors and administrators have argued that it would be unfair to penalize plan participants when there is clearly no intent to engage in abusive trading. Figure 6 (not reproduced here) illustrates how a plan participant could be assessed a redemption fee for transferring the balance of one fund to another.

In sum, if SEC's proposals are adopted as originally written, plan participants could face complications with certain transactions that are unique to pension plans and be assessed fees when they would clearly not be engaging in abusive trading. Given the significant role that mutual funds play in retirement savings, we are recommending that the SEC Commissioners adopt certain modifications or alternatives to the proposed regulations that are currently under consideration in order to prevent defined contribution plan participants from being more adversely affected than other investors.

We provided a draft of this report to SEC and DOL. We obtained written comments from SEC, which are reproduced in appendix III. SEC agreed with our analysis and noted that the commission staff is considering modifications to the proposals that should mitigate certain circumstances that could adversely affect pension plan participants. SEC and DOL also provided technical comments, which we incorporated as appropriate.

Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Commissioner of the SEC, the Secretary of Labor, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO's Web site at http://www.gao.gov/. If you have any questions concerning this report, please contact me at (202) 512-7215 or George Scott at (202) 512-5932. Other major contributors include Gwen Adelekun, Amy Buck, David Eisenstadt, Lawrance Evans, Jr., Cody Goebel, Marc Molino, Derald Seid, and Roger Thomas.

To explain the regulatory actions taken by SEC and DOL to address late trading and market timing, we interviewed SEC and DOL officials and reviewed documents from both agencies. To describe SEC's enforcement actions, we reviewed congressional testimony by SEC's Director of Enforcement and press releases from SEC and the New York State Attorney General's office and interviewed SEC officials. To describe new regulations either adopted or proposed by SEC, we reviewed the regulations and spoke with officials from SEC's Investment Management Division who have been involved in writing these regulations. To describe DOL's enforcement actions, we reviewed documents sent to us by DOL officials and interviewed these officials. To explain DOL's guidance to plan sponsors on the duties of plan fiduciaries in light of mutual fund trading abuses, we reviewed the guidance issued by DOL and interviewed DOL officials. We also spoke with representatives of plan sponsors, plan record keepers (which include broker-dealers and insurance companies), and mutual fund companies to obtain their opinions about DOL's guidance. To determine how defined contribution plan participants and pension plan service providers might be affected by SEC's rule proposals on late trading and redemption fees, we reviewed numerous comment letters submitted to the SEC.
In addition, we interviewed representatives of mutual fund companies, pension plan record keepers, officials from the National Securities Clearing Corporation (NSCC), trade associations that represent mutual funds, plan sponsors, pension actuaries and life insurance companies, and officials from SEC and DOL. To assess how plan participants could be affected by an earlier deadline for the submission of mutual fund transactions, we reviewed information from plan record keepers and mutual fund companies about the types of mutual fund transactions that plan participants normally make during the course of a year. In addition, we obtained information about the mutual fund trading activity of plan participants in response to major events that resulted in significant increases or decreases in the values of major stock indexes. We conducted our work between March 2004 and June 2004 in accordance with generally accepted government auditing standards.

While many mutual fund companies and intermediaries support SEC's goal of preventing unlawful trading in mutual fund shares, they have raised concerns about the Hard 4 proposal as a solution to illegal late trading and have suggested alternative solutions. These concerns center on the question of which entity or entities should be allowed to accept orders until the market closing time of 4:00 p.m. eastern time to receive the current day's fund price. One alternative solution, the "Smart 4" proposal, seeks to maintain the flexibility intermediaries currently enjoy of accepting fund orders until the market close and then processing and transmitting them sometime after the market close. A second alternative, the "Clearinghouse" proposal, would require all mutual fund orders to receive an electronic time stamp at a central location that would verify their time of receipt. All orders received at the central clearinghouse by 4:00 p.m. would receive same-day pricing.

The Smart 4 proposal would require all companies that want to accept orders until the market close, and process them thereafter, to adopt a three-part series of controls: (1) electronic time stamping of all transactions so all trades could be tracked from the initial customer to the mutual fund company, (2) annual certifications by senior executives that their companies have procedures to prevent or detect unlawful late trading and that those procedures are working as designed, and (3) annual independent audits. The Smart 4 proposal has been advocated by most of the fund intermediaries that we spoke with. Representatives of intermediaries told us that they should be given an opportunity to prove that they can comply with the same policies and procedures as mutual fund companies in accepting and processing fund orders. Furthermore, many intermediaries assert that while SEC's Hard 4 proposal addresses intermediary processing of mutual fund orders, it does not go as far in seeking to prevent late trading at mutual fund companies. Currently, not all intermediaries are subject to SEC jurisdiction; therefore, under the Smart 4 proposal, any unregistered intermediary that forwards mutual fund orders to a fund company after the market close would have to consent to SEC inspection authority. However, SEC officials told us that they do not have the resources to examine the numerous unregulated intermediaries they would have to inspect to ascertain that adequate internal controls are in place to prevent late trading.
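The first of the Smart 4 controls, electronic time stamping so that every trade can be traced from the initial customer to the fund, could be implemented in many ways. The sketch below records each order with a time stamp and chains entry hashes so that back-dating an order after the fact would be detectable; the hash-chaining design is our illustrative assumption, not a feature of the proposal.

```python
import hashlib
from datetime import datetime, timezone

class OrderLog:
    """Append-only order log: each entry's hash covers the previous entry,
    so altering or back-dating an earlier order would break the chain."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, account: str, fund: str, amount: float) -> dict:
        stamped_at = datetime.now(timezone.utc).isoformat()
        payload = f"{self._last_hash}|{stamped_at}|{account}|{fund}|{amount}"
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        entry = {"stamped_at": stamped_at, "account": account,
                 "fund": fund, "amount": amount, "hash": entry_hash}
        self.entries.append(entry)
        self._last_hash = entry_hash
        return entry

log = OrderLog()
log.record("plan-123", "GROWTH_FUND", 1_000.00)  # auditors can verify order times
```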
To date, the Smart 4 proposal has been revised a few times, and representatives of retirement plan intermediaries told us that they are working on developing a more robust network of controls that would allow independent auditors to verify that intermediaries are complying with the laws that prohibit late trading.

The Clearinghouse proposal would require all mutual fund orders to be time-stamped electronically by an SEC-registered central clearing entity before the market close to receive that day's fund price. The clearing entity's time stamp would be considered the official time of receipt of an order for a mutual fund transaction. The National Securities Clearing Corporation is currently the only SEC-registered clearing agency operating an automated processing system for mutual fund orders. The Clearinghouse proposal would expand the NSCC's role, capabilities, and capacity to handle all orders of mutual fund transactions. Each mutual fund company and fund intermediary would consider its technological capabilities and other factors in deciding how to meet the requirement of submitting orders to the NSCC by 4:00 p.m. in order to receive same-day pricing. By requiring that all mutual fund transactions be processed through the NSCC, the Clearinghouse proposal seeks to ensure that companies that offer their own mutual funds do not gain an advantage over intermediaries that do not. By allowing record keepers to submit order information to the NSCC in two phases, the Clearinghouse proposal, like the Smart 4, would preserve the processing of fund transactions after the market close. First, before the market close, mutual funds and fund intermediaries would submit a fund order that must contain the information essential to establishing the customer's intent. Some orders would require additional information not essential to establishing intent. Under the Clearinghouse proposal, this additional information could be submitted after 4:00 p.m. as long as the submission establishing intent is received by the NSCC before 4:00 p.m.

One major concern surrounding the Clearinghouse proposal is that intermediaries that do not currently use the NSCC's clearinghouse system may face significant costs in upgrading their computer systems and establishing a connection to the NSCC. SEC estimates that each year approximately half of all mutual fund orders are submitted directly to mutual funds through their transfer agents and the other half are submitted to funds through the NSCC. Intermediaries and funds that do not currently use the NSCC would have to either establish a direct communications link to the NSCC or make arrangements with other mutual funds or intermediaries that would be willing to transmit their orders to the NSCC on their behalf. Some pension plan record keepers are concerned that the costs of establishing a direct connection to the NSCC would be unaffordable. Another concern about the Clearinghouse proposal is that the NSCC may not be able to handle the concentration of orders it would receive just prior to the market close. However, the NSCC's analysis indicates that its current system capacity is sufficient to handle the increase in transactions. Proponents state that a benefit unique to the Clearinghouse proposal is that it would allow plan record keepers and administrators to process plan participants' requests for exchanges between different fund families on the same day.
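A minimal sketch of the two-phase submission the Clearinghouse proposal describes follows: the intent record must be stamped by the clearing entity before 4:00 p.m. to receive same-day pricing, while non-essential details may arrive later. The class and field names are assumptions for illustration, not NSCC's actual interfaces.

```python
from datetime import datetime, time

MARKET_CLOSE = time(16, 0)

class ClearinghouseOrder:
    """Two-phase order: intent must arrive before the close; details may follow."""

    def __init__(self, account: str, fund: str, action: str, received: datetime):
        self.intent = {"account": account, "fund": fund, "action": action}
        self.received = received                    # official clearinghouse time stamp
        self.same_day = received.time() <= MARKET_CLOSE
        self.details = None

    def add_details(self, details: dict):
        """Non-essential information (e.g., share counts) may arrive after 4:00 p.m."""
        self.details = details

order = ClearinghouseOrder("plan-123", "BOND_FUND", "redeem",
                           datetime(2004, 6, 7, 15, 58))
order.add_details({"shares": 120.5})  # submitted after the close
print(order.same_day)                 # True: intent arrived before 4:00 p.m.
```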
Mutual fund investments represent more than 20 percent of Americans' pension plan assets. Since late 2003, two abusive trading practices in mutual funds have come to light. Late trading allowed some investors to illegally place orders for funds after the close of trading. Market timing allowed some investors to take advantage of temporary disparities between the value of a fund and the value of its underlying assets despite stated policies against such trading. The Securities and Exchange Commission (SEC) has proposed regulations intended to stop late trading and reduce market timing. We were asked to (1) report on what is known about how these practices have affected the value of retirement savings of pension plan participants, (2) describe the actions taken by SEC and the Department of Labor (DOL) to address these practices, and (3) explain how plan participants may be affected by SEC's proposed regulations.

The cost of late trading and market timing to long-term investors in mutual funds is unclear; however, it does not appear that these abuses affected pension plan participants more than other investors. While individual instances of abusive trading may not have had a noticeable effect on the value of funds held by long-term investors, the cumulative effect of such trading may be significant. Among 34 brokerage firms surveyed by the SEC, more than 25 percent reported instances of illegal late trading at their firms. However, numerous fund intermediaries that are not regulated by the SEC may also have permitted late trading. Trading abuses can be difficult to identify because, among other reasons, fund brokers aggregate the transactions of their clients and often do not share details of individual transactions with mutual fund companies. Ultimately, the effect of trading abuses on the savings of plan participants and other long-term fund shareholders is a function of which funds they invested in and for how long.

SEC and DOL have taken steps to address abusive trading in mutual funds, and SEC has proposed regulations that aim to stop late trading and curb market timing. SEC and DOL are investigating these trading abuses, and SEC has already reached several settlements. DOL has issued guidance to pension plan sponsors and other plan fiduciaries on how they can fulfill their legal requirements to act "prudently" and in the best interests of plan participants who invest in mutual funds. To stop late trading, SEC has proposed that all fund transactions be received by mutual funds or designated processors before 4:00 p.m. eastern time in order for investors to receive the same day's price. To curb short-term trading, including market timing, SEC has proposed regulations that would impose a 2-percent fee on the proceeds of fund shares redeemed within 5 business days of purchase. DOL is not involved in the process of drafting these regulations because it does not regulate mutual funds, but it is considering how the proposals would affect pension plans.

To the extent that SEC's proposed regulations stop late trading and market timing, they would benefit long-term mutual fund investors; however, the new rules could also affect such investors adversely, and pension plan participants more than others. The new regulations are expected to increase costs (e.g., for technology upgrades) that would be passed on to long-term mutual fund investors.
In addition, plan participants could be distinctly affected by the late trading proposal because it creates potential complications in processing certain transactions unique to pension plans (e.g., loans). Further, the market timing proposal may result in plan participants paying fees intended to deter market timing, even when there is clearly no intent to engage in abusive trading. SEC officials told us that they are considering changes and alternatives to the proposed regulations that would address these concerns.
Information security is a critical consideration for any agency that depends on information systems and computer networks to carry out its mission and is especially important for a federal agency such as FDA, which collects, processes, and stores sensitive information on drugs and other products pending approval; the safety of food, drug, and medical products; and scientific research to inform regulatory decisions. While the use of interconnected electronic information systems allows the agency to accomplish its mission more quickly and effectively, it also exposes FDA's information to threats from sources internal and external to the agency. Internal threats can include errors, as well as fraudulent or malevolent acts by employees or contractors working within the agency. External threats include the ever-growing number of cyber-based attacks that can come from a variety of sources, including hackers, criminals, foreign nations, terrorists, and other adversarial groups.

Potential cyber attackers have a variety of techniques at their disposal, which can vastly enhance the reach and impact of their actions. For example, these attackers do not need to be physically close to their targets, their attacks can easily cross state and national borders, and they can more readily preserve their anonymity. Additionally, advanced persistent threats—where an adversary that possesses sophisticated levels of expertise and significant resources can use physical and cyber methods to achieve its objectives—pose increasing risks. Further, the interconnectivity among information systems presents increasing opportunities for such attacks. This risk is highlighted by the rising number of reported security incidents at federal agencies. Specifically, the number of incidents reported by federal agencies to the United States Computer Emergency Readiness Team (US-CERT) has increased dramatically in recent years, rising from 5,503 in fiscal year 2006 to 77,183 in fiscal year 2015.

Compounding the growing number and types of threats are the deficiencies in security controls on the information systems at federal agencies. These weaknesses have resulted in vulnerabilities in systems and information and continue to place assets at risk of inadvertent or deliberate misuse; information at risk of unauthorized access, modification, or destruction; and critical operations at risk of disruption. Accordingly, we have designated federal information security as a government-wide high-risk area since 1997, and in 2003 we expanded this area to include computerized systems supporting the nation's critical infrastructure. In February 2015, we further expanded this area to include protecting the privacy of personal information that is collected, maintained, and shared by both federal and nonfederal entities. In September 2015, we reported that more than half of the 24 major federal agencies continued to experience weaknesses in the controls intended to preserve confidentiality—preventing unauthorized access to information and systems; integrity—preventing unauthorized modification or destruction of information, including access and configuration controls; and availability—ensuring timely and reliable access to and use of information when needed, such as contingency planning controls.

To improve federal information security, the Federal Information Security Modernization Act (FISMA) was enacted in 2014.
The law is intended to address the increasing sophistication of cybersecurity attacks, promote the use of automated security tools with the ability to continuously monitor and diagnose the security posture of federal agencies, and provide for improved oversight of federal agencies' information security programs. FISMA provides a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. Among other things, FISMA requires federal agencies to develop, document, and implement an agency-wide information security program. Agencies are to carry this out using a risk-based approach to information security management. Such a program includes developing and implementing cost-effective security policies, plans, and procedures; assessing risk; providing specialized training; testing and evaluating the effectiveness of controls; planning, implementing, evaluating, and documenting remedial actions to address information security deficiencies; and ensuring continuity of operations.

FISMA also gives the National Institute of Standards and Technology (NIST) responsibility for developing standards and guidelines that include minimum information security requirements. To this end, NIST has issued numerous publications to provide guidance for agencies in implementing an information security program. These include, among others, NIST Federal Information Processing Standard (FIPS) 199, which provides requirements for agencies to categorize their systems and information, and NIST Special Publication (SP) 800-53, which provides guidance on the selection and implementation of information security and privacy controls for systems.

FDA is a consumer protection agency with broad regulatory authority, charged with protecting public health by ensuring the safety, effectiveness, and security of human and veterinary drugs, biological products, and medical devices; ensuring the safety of foods, cosmetics, and radiation-emitting products; and regulating tobacco products. FDA's mission includes helping to speed innovations that make foods safer and medicines and medical devices safer and more effective; ensuring that members of the public have the accurate, science-based information they need to use medicines, devices, and foods to improve their health; regulating the manufacture, marketing, and distribution of tobacco products and reducing tobacco use by minors; and supporting the nation's counterterrorism capability by ensuring the security of the supply of foods and medical products.

FDA performs regulatory activities that include reviewing and approving new drugs and certain medical products; inspecting manufacturing facilities for compliance with regulations and good manufacturing practices; conducting post-market surveillance of food, drug, and medical products to ensure they are safe; tracking and identifying the source of outbreaks of foodborne illnesses; and issuing recall notices and safety alerts for products that threaten the public health.

According to FDA, its fiscal year 2015 appropriation was $4.5 billion. The agency is headed by a Commissioner and is staffed by more than 14,000 employees across the United States and around the world. FDA consists of its Office of the Commissioner and four directorates that oversee the agency's core functions.
These directorates are the Office of Foods and Veterinary Medicine, Office of Global Regulatory Operations and Policy, Office of Medical Products and Tobacco, and Office of Operations. Within these directorates are offices and centers that focus on core parts of the agency's mission. Examples of these offices and centers are shown in table 1.

FDA relies extensively on IT to fulfill its mission and support related administrative needs. Among the more than 80 systems reported in its FISMA inventory, the agency has systems dedicated to supporting its product review and evaluation activities, regulatory compliance functions, and product safety monitoring activities, as well as systems to support administrative processes. All of these systems are supported by an IT infrastructure that includes network components, critical servers, and data centers. In fiscal year 2015, the agency reported spending $585 million on IT, of which approximately $12 million (or about 2 percent of the IT budget) was for information security. This percentage is lower than the approximately 8 percent of fiscal year 2015 IT spending that the 23 civilian agencies covered by the Chief Financial Officers Act reportedly devoted to information security. For fiscal year 2016, FDA requested $640 million for IT and $16 million for information security.

In addition, FDA indicated that real-time connectivity and access to data and information are essential for its daily operations, as well as for its interactions with the public and other partners. These depend on high-quality, high-availability, and high-performing data networks, server and application infrastructure, communications services, simple and complex computer applications, mobile workforce capabilities, and rapid and responsive service delivery. Examples of the processing activities that key FDA systems perform in support of the agency's mission are listed below:
- Support and facilitate post-market product safety surveillance of human drugs, biologics, devices, and combination products.
- Provide a data repository for collecting, storing, viewing, analyzing, reporting, and tracking the receipt of adverse event data or medication errors.
- Establish a single gateway or communications portal for accepting electronic submissions or allowing authorized users to view or obtain information. Examples of electronic submissions include industry-provided trade secrets, adverse event records, and a multitude of different records related to FDA's regulatory oversight of regulated products.
- Provide capabilities for regulatory scientific research, while also supporting FDA's overall goals and objectives in areas where information technology requires supercomputer-strength computational power.
- Support FDA's research and development activities.
- Provide a platform through which FDA organizations may disseminate FDA-related information to interested parties, including the public, health professionals, regulated industries, and the media.
- Provide information about the various product areas that FDA regulates (food, drugs, medical devices, cosmetics, etc.), timely advisories (e.g., anticipated disease outbreaks such as Severe Acute Respiratory Syndrome (SARS), buying medicines online, and LASIK surgery), and other FDA activities.
- Provide links to related reference materials and opportunities for consumers and industry to interact with FDA.
- Provide basic network and security capabilities for the FDA enterprise.
- Facilitate receipt and review of electronic drug applications.
This last function includes scanning and checking the validity of drug submissions from industry and making them available to reviewers, as well as providing file shares for storing successful submissions awaiting review. In addition, FDA contractors support data centers and systems that provide, among other things, the network infrastructure for the agency's systems and its public website. The information handled by these systems includes sensitive or confidential business information on drug submissions and adverse event reports, among other types of information. Accordingly, effective implementation of security controls is necessary to protect the confidentiality, integrity, and availability of FDA's information and to prevent, or lower the risk of, security breaches similar to the one the agency experienced in 2013. During that breach, an intruder gained unauthorized access to one FDA system's user accounts and passwords. Effective controls can help ensure that only authorized users (people and processes) access information and systems, lessening the chances of unauthorized disclosures of information, improper changes or modifications to FDA's information and systems, and system disruptions that could hamper the agency's ability to perform its mission.

To improve the management of FDA's information systems security and operations, the agency in fiscal year 2015 consolidated its network and security operations centers into a reorganized Systems Management Center (SMC). According to FDA, the SMC is the central command and control center and is intended to help establish real-time network awareness to forecast, detect, alert on, and report events such as security incidents, and to facilitate the coordination requirements of its Office of Information Management and Technology. In addition, the agency reported that it established a cybersecurity task force to address short- and long-term concerns with protecting its network boundaries.

Under FISMA, the Commissioner of FDA is responsible for ensuring the confidentiality, integrity, and availability of the information and systems that support the agency and its operations. FISMA also requires that the agency head delegate to the chief information officer (CIO) the overall responsibility for management of the agency's IT security program. At FDA, the CIO is responsible for evaluating the overall mission requirements for an IT system or application and ensuring that it complies with FDA IT security policies, guidelines, and standards. The CIO is also responsible for, among other things, ensuring effective implementation of FDA's IT Security Policy; formally appointing a Chief Information Security Officer (CISO) and ensuring that individual complies with FDA's IT security regulations and guidelines; ensuring that IT security is included in management planning, programming budgets, and the IT capital planning process; and ensuring that annual security reviews are conducted, including an annual review and update of security policies and reporting of IT systems to the Office of Management and Budget (OMB).

In addition, FDA's IT Security Program is headed by the agency's CISO, who is responsible for ensuring that adequate and appropriate controls are applied to FDA systems to protect privacy and to ensure the confidentiality, integrity, and availability of information.
The CISO is to employ security policies and standards for FDA information systems enterprise-wide in accordance with FDA, HHS, OMB, NIST, and other federal security requirements. The CISO also provides guidance on IT system security matters to the Information Systems Security Officers (ISSOs) in the centers and offices they support. At FDA, ISSOs are responsible for ensuring the implementation of adequate system security for each system supporting a particular center or office. Every center or office system is to have an ISSO assigned as the point of contact for security. Among other things, FDA ISSOs' responsibilities include (1) ensuring that FDA systems are operated, used, maintained, and disposed of in accordance with FDA's security policies and procedures; (2) ensuring system security plans are completed and maintained; (3) assisting with system authorization; (4) responding to and reporting security incidents; (5) promoting security awareness; and (6) ensuring media handling procedures are followed.

FDA has taken steps to safeguard its systems that receive, process, and maintain sensitive data by, for example, implementing policies and procedures for controlling access to and securely configuring those systems. However, a significant number of weaknesses remain in technical controls—including access controls, change controls, and patch management—that jeopardize the confidentiality, integrity, and availability of its systems. An underlying reason for these weaknesses is that FDA had not yet fully implemented an agency-wide information security program to provide reasonable assurance that controls were operating effectively. These shortcomings put FDA systems at increased and unnecessary risk of unauthorized access, use, or modification that could disrupt its operations. To its credit, FDA, during the course of our work, immediately resolved some of the weaknesses identified and provided information on its proposed actions to address the underlying weaknesses in controls.

Access controls are designed and implemented to provide reasonable assurance that an agency's computerized information is protected. Both logical and physical access controls are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Access controls include those related to (1) protection of system boundaries, (2) identification and authentication of users, (3) authorization of access permissions, (4) encryption of sensitive information, (5) audit and monitoring of system activity, and (6) physical security of facilities. As shown in table 2, weaknesses existed in each of these areas for the systems we reviewed. In a separate report with limited distribution, we describe these weaknesses in more detail, along with associated recommendations. Inadequate design or implementation of access controls increases the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service.

Boundary protection controls logical connectivity into and out of networks and controls connectivity to and from devices connected to the network. For example, multiple firewalls can be deployed to prevent both outsiders and trusted insiders from gaining unauthorized access to systems, and intrusion detection technologies can be deployed to defend against attacks from the Internet.
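As a rough illustration of the boundary controls described above, the sketch below evaluates connections against a deny-by-default rule set, permitting only explicitly allowed traffic. The networks, port, and rule format are invented for illustration and do not represent FDA's configuration.

```python
import ipaddress

# Deny-by-default rule set: only traffic matching an allow rule passes.
ALLOW_RULES = [
    # (source network, destination network, destination port)
    (ipaddress.ip_network("203.0.113.0/24"),
     ipaddress.ip_network("10.10.1.0/24"), 443),
]

def permit(src: str, dst: str, port: int) -> bool:
    """Return True only if an explicit allow rule matches the connection."""
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    return any(s in src_net and d in dst_net and port == allowed_port
               for src_net, dst_net, allowed_port in ALLOW_RULES)

print(permit("203.0.113.7", "10.10.1.5", 443))  # True: allowed web traffic
print(permit("198.51.100.9", "10.10.1.5", 23))  # False: telnet denied by default
```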
Unnecessary connectivity to an organization's network increases not only the number of access paths that must be managed and the complexity of the task, but also the risk of unauthorized access in a shared environment. NIST recommends that agencies implement subnetworks to separate publicly accessible system components from their internal networks. NIST also states that agencies should provide adequate protection for networks and employ information control policies and enforcement mechanisms to control the flow of information between designated sources and destinations within information systems. Similarly, NIST recommends that organizations monitor and control communications at information systems' external boundaries and at key internal boundaries within a system.

FDA did not always ensure that its network boundaries were sufficiently segregated. For example, the contractor supporting the agency's public-facing website did not isolate the agency's network from its own network and those of its other customers, which included non-FDA customers. In addition, the contractor did not configure firewall rules to restrict access into FDA's internal network. In another example, FDA did not sufficiently restrict inbound connections from one of its untrusted networks and did not isolate that network from its internal network. The network was untrusted because the agency had not developed and implemented risk management controls for the system; as a result, it poses increased risks to other agency systems. Further, FDA did not always implement other boundary controls, as the following examples illustrate:
- Network devices at the agency's field locations were not properly configured and allowed all remote access protocols, including the unsecure telnet protocol.
- Routers at certain international locations were not configured to restrict inbound management traffic from untrusted sites.
- Host-based firewalls for four key systems and some workstations were not effectively configured to permit only necessary traffic and provide protection from malicious activity.

As a result, sensitive public health, proprietary business, and personal information maintained by the agency was at increased risk of compromise due to inadequate separation of the service provider's network from FDA's network, inadequate separation of the untrusted network from the agency's network, and weaknesses in other boundary controls.

A computer system must be able to identify and authenticate different users so that activities on the system can be linked to a specific individual. When an organization assigns a unique user account to a specific user, the system is able to distinguish that user from another—a process called identification. The system must also establish the validity of a user's claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. The combination of identification and authentication—such as a user account/password combination—provides the basis for establishing individual accountability and for controlling access to the system.

NIST SP 800-53 recommends establishing password management controls for information systems, including minimum password complexity requirements, password lifetime restrictions, prohibitions on password reuse, and temporary lockout of user accounts after a certain number of failed login attempts during a specified period of time.
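A simple audit script can flag the kinds of password weaknesses this NIST guidance targets. The sketch below checks account settings against illustrative thresholds; the thresholds, field names, and example account are our assumptions, not FDA's or NIST's specific values.

```python
from datetime import date

MAX_PASSWORD_AGE_DAYS = 60   # illustrative lifetime restriction
MIN_LENGTH = 12              # illustrative complexity floor
                             # (lockout threshold checked below)

def password_findings(account: dict, today: date) -> list:
    """Flag account settings that violate the illustrative policy."""
    findings = []
    if (today - account["last_changed"]).days > MAX_PASSWORD_AGE_DAYS:
        findings.append("password exceeds maximum lifetime")
    if account["never_expires"]:
        findings.append("password set to never expire")
    if account["min_length"] < MIN_LENGTH:
        findings.append("minimum length below policy")
    if account["lockout_threshold"] in (0, None):
        findings.append("no lockout after failed login attempts")
    return findings

# Hypothetical service account resembling the weaknesses described below.
svc = {"last_changed": date(2011, 1, 6), "never_expires": True,
       "min_length": 6, "lockout_threshold": None}
print(password_findings(svc, date(2016, 8, 1)))  # flags all four weaknesses
```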
FDA's password policy outlines requirements consistent with this NIST guidance. NIST also states that agencies can satisfy certain identification and authentication requirements by complying with the requirements in Homeland Security Presidential Directive 12 and using multifactor authentication, such as personal identity verification cards. Multifactor authentication requires the use of two or more different factors to achieve authentication. The factors are defined as something you know (e.g., a password or a personal identification number); something you have (e.g., a cryptographic identification device or token); or something you are (e.g., a biometric).

FDA implemented personal identity verification cards for multifactor authentication; however, on five of the seven systems we reviewed, the agency did not always implement strong password controls in accordance with its security policies and NIST guidance. For example, three local accounts on a database server that contained certificates used to encrypt industry partner submission packages had passwords that had not been changed in more than 5 years. In addition, several service accounts for servers with access to sensitive industry partner regulatory submissions had passwords set to never expire. Further, a Windows administrator's non-privileged account was unnecessarily elevated to a privileged account by being made part of an administrators group. These accounts are used to administer users' logical access to FDA mission-critical systems that process confidential business information or trade secrets, such as those for drug submissions and adverse event reporting. In another example, the password to a service account for synchronizing user passwords was set to never expire and had not been changed in the last 6 years.

In addition, FDA did not always implement password controls on certain network devices. For example, password management settings were set to default values on two network devices that delivered web applications to FDA users. These default settings applied to local accounts, including web administrator and root accounts, and included minimum password lengths set to six characters, with no requirements for password complexity, maximum password lifetime, password history, or invalid attempts. In another example, a user account password for a network management server that monitors and maintains a history of network devices' hardware and software changes had not been changed since January 6, 2011. Without strong password requirements, increased risk exists that passwords could be guessed, permitting unauthorized access to FDA systems.

Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. For example, operating systems have built-in authorization features such as permissions for files and folders. Network devices, such as routers, have access control lists that can be used to authorize users who can access and perform certain actions on the device. A key component of granting or denying access rights is the concept of "least privilege." Least privilege is a basic principle for securing computer resources and information. This principle means that a user is granted only those access rights and permissions needed to perform official duties.
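Least privilege can be checked mechanically by comparing what each account is granted against what its role requires. The sketch below uses an invented role-to-permission map and invented account names for illustration.

```python
# Role-to-permission map embodying least privilege: each role gets only
# the rights needed for its duties (roles and rights are hypothetical).
ROLE_PERMISSIONS = {
    "regulatory_reviewer": {"read"},
    "record_administrator": {"read", "write"},
}

def violations(account_roles: dict, granted: dict) -> dict:
    """Return permissions granted beyond what each account's role requires."""
    return {user: granted[user] - ROLE_PERMISSIONS[role]
            for user, role in account_roles.items()
            if granted[user] - ROLE_PERMISSIONS[role]}

roles = {"alice": "regulatory_reviewer", "bob": "record_administrator"}
grants = {"alice": {"read", "write", "modify"}, "bob": {"read", "write"}}
print(violations(roles, grants))  # alice's 'write' and 'modify' are excessive
```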
To improve authorization controls, the Federal CIO instructed agencies, as part of the Cybersecurity Sprint, to tighten policies and practices for privileged users. These steps included, for example, minimizing the number of privileged users and limiting the functions that can be performed when using privileged accounts. To avoid unintentionally authorizing user access to sensitive files and directories, an agency must give careful consideration to its assignment of rights and permissions. NIST SP 800-53 recommends that agencies grant user accounts only those privileges required for the users to perform their job functions. Additionally, FDA policy states that access to sensitive information must be restricted and based on the concept of need-to-know.

Although FDA has developed and documented access control requirements based on least privilege and need-to-know principles, users were granted excessive permissions that were not needed for their official duties. These permissions gave administrators and users who did not need them the authority to read, and in some cases write and modify, submissions that could contain sensitive or confidential business information on drug submissions or adverse event reporting, as illustrated below:
- Forty-nine administrators and users with access to 392 production servers had, by default, unnecessary access to file shares containing industry submissions on adverse events.
- A group account allowed 753 users unneeded access to adverse event data submissions.
- Ninety-two desktop users, via a group account, had unauthenticated access to one key system's file shares.
- 4,534 users, including regulatory reviewers and project managers, had uncontrolled "read access" to file shares on the system that handles sensitive regulatory drug and biologic product submissions.

According to FDA, the high number of users with access was necessary due to the high volume of regulatory submissions reviewed daily, which regularly exceeds 1,500 per day, and because staff must often access multiple sponsor submissions in order to complete their regulatory reviews in a timely manner. However, for the data we reviewed, only about 2,400 users per month accessed these files, compared with the 4,534 users who were granted access. Moreover, FDA did not restrict the access of privileged user groups by, for example, differentiating high-value submission assets from low-value ones, even though the system stored highly sensitive industry trade secret information. In addition, for this same system, FDA allowed 39 users in the administration group and 104 users in the staff group to have read, write, and modify privileges to the submission files. The server can be accessed without a user interface, and FDA does not have visibility into users' access to the submission files on the server. As a result, FDA was at increased risk that users could inadvertently or deliberately modify these files and jeopardize the integrity of the submitted information.

Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. Cryptographic tools help control access to information by making it unintelligible to unauthorized users and by protecting the integrity of transmitted or stored information. A basic element of cryptography is encryption: the conversion of data into a form, called cipher text, that cannot be easily understood.
Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. NIST SP 800-53 states that agencies should use encryption to protect the confidentiality of remote access sessions and should encrypt sessions between host systems. The applicable NIST standard, Federal Information Processing Standard (FIPS) 140-2, specifies security requirements for cryptographic modules, including approved encryption algorithms.

FDA did not always ensure that sensitive data were effectively encrypted when transmitted or stored. For example, 59 network devices we reviewed used weak, non-FIPS-compliant algorithms to encrypt user passwords. In addition, a web server supporting the receipt of industry submissions and a database server storing certificates to support secure connections for receiving submissions used non-FIPS-compliant algorithms to encrypt passwords. Furthermore, the web server's password file was encrypted with an algorithm that was outdated and had been withdrawn by NIST over 10 years earlier. As a result of using weak encryption algorithms, FDA is at increased risk that user passwords could be cracked and used by unauthorized individuals to gain access to systems and sensitive information.

To establish individual accountability, monitor compliance with security policies, and investigate security violations, agencies need to determine what, when, and by whom specific actions have been taken on a system. Agencies can accomplish this by implementing system or security software that provides an audit trail (a log of system activity) that can be used to determine the source of a transaction or attempted transaction and to monitor a user's activities. Audit and monitoring, key components of risk management, involve the regular collection, review, and analysis of auditable events for indications of inappropriate or unusual activity, and the appropriate investigation and reporting of such activity. Audit and monitoring controls can help security professionals routinely assess computer security, perform investigations during and after an attack, and even recognize an ongoing attack. Audit and monitoring technologies include network- and host-based intrusion detection systems, audit logging, security event correlation tools, and computer forensics. NIST guidelines state that agencies should retain sufficient audit logs to allow monitoring of key activities, provide support for after-the-fact investigation of security incidents, and meet agency information retention requirements.

FDA did not always implement and integrate auditing and monitoring for the seven systems we reviewed. For example, the agency did not have network monitoring visibility across its entire network. Specifically, it did not monitor IT assets used by a contractor supporting the system that provides the agency's Internet and public network. In addition, the agency did not always audit or monitor system activity on IT assets for networks supporting scientific research and high-performance computing. The agency also did not always retain audit logs to allow monitoring of key activities and provide support for after-the-fact investigation of security incidents. To illustrate, databases supporting drug submissions and adverse event reporting did not have logging enabled for monitoring the use of special system privileges such as alter, create, and grant.
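Had such logging been enabled, a monitoring script could scan database audit records for the special system privileges noted above. The sketch below does this over an invented log format; real database audit facilities differ and typically provide this filtering natively.

```python
PRIVILEGED_KEYWORDS = ("ALTER", "CREATE", "GRANT")

def privileged_events(audit_lines):
    """Return audit entries recording the use of special system privileges."""
    return [line for line in audit_lines
            if any(kw in line.upper().split() for kw in PRIVILEGED_KEYWORDS)]

# Hypothetical audit log entries: timestamp, account, statement.
log = [
    "2015-08-01T10:02:11 dbadmin GRANT SELECT ON submissions TO reviewer7",
    "2015-08-01T10:05:42 appuser SELECT * FROM adverse_events",
    "2015-08-01T10:09:03 dbadmin ALTER TABLE submissions ADD COLUMN flag",
]
print(privileged_events(log))  # the GRANT and ALTER entries only
```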
Further, FDA did not retain all records of evidence related to a 2013 security breach in which an external attack on an FDA Internet application allowed the attacker to gain access to a backend database and exfiltrate sensitive user account information. Specifically, it did not retain digital forensics data related to the attack commands or the review of dates and times of files and database entries relevant to the exfiltration of users' account data. Such information could be useful in better understanding what occurred and in preventing future occurrences. As a result, FDA did not have the information necessary for monitoring key database activities and supporting after-the-fact investigations of security incidents. In addition, the lack of evidence could prevent the agency from determining what events occurred within its systems and networks, such as the lateral movements an attacker may make from initial entry into a network to network discovery, host targeting, and data exfiltration to external systems.

Physical security controls restrict physical access to computer resources and protect them from intentional or unintentional loss or impairment. Adequate physical security controls over computer resources (e.g., computer facilities, network devices such as routers and firewalls, telecommunications equipment, and transmission lines) should be established commensurate with the risks of physical damage or access. NIST SP 800-53 recommends that agencies review and update their physical and environmental protection policies at an organization-defined frequency and conduct an assessment of risks, including the likelihood and magnitude of harm, to the information system and the information it processes, stores, or transmits. Consistent with federal guidance, FDA's Information System Security and Privacy Guide states that physical and environmental protection policies are to be reviewed and updated every 3 years. In addition, the agency's policies for its facilities state that annual physical security reviews are to be conducted. These reviews are to include, among other things, reviewing the security measures in effect to compensate for any noncompliance with requirements and the corrective actions initiated or planned to eliminate deficient conditions.

While FDA developed and documented physical security policies for its facilities, they had not been reviewed and updated for about 14 years. The physical security policy for its headquarters facilities was dated February 2001, and the physical security policy for field activities was dated October 2000. Neither of these policies had been reviewed and updated since it was established, even though agency policy requires this to occur every 3 years. In addition, the agency had not conducted the required annual physical security reviews of three of its data center facilities. FDA provided documentation showing that it had reviewed only one of them, and that review, conducted in July 2013, did not satisfy the annual requirement. According to FDA's CISO and a policy analyst, gaps in reviewing and updating policies and procedures were due to personnel resource constraints and the lack of a streamlined process for reviewing policies and procedures at the agency. As a result, FDA has diminished assurance that its computing resources are protected from inadvertent or deliberate misuse or damage.
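Tracking review currency is straightforward to automate. The sketch below flags policies overdue under a 3-year review cycle, using the policy dates cited in this report; the checking logic and the as-of date are our illustration.

```python
from datetime import date

REVIEW_CYCLE_YEARS = 3  # FDA guidance: review and update every 3 years

def years_since(last_review: date, today: date) -> float:
    return (today - last_review).days / 365.25

policies = {
    "headquarters physical security": date(2001, 2, 1),    # dated February 2001
    "field activities physical security": date(2000, 10, 1),  # dated October 2000
}
today = date(2015, 6, 1)  # hypothetical as-of date for the check
for name, reviewed in policies.items():
    age = years_since(reviewed, today)
    if age > REVIEW_CYCLE_YEARS:
        print(f"{name}: last reviewed {age:.1f} years ago (overdue)")
```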
Physical security controls restrict physical access to computer resources and protect them from intentional or unintentional loss or impairment. Adequate physical security controls over computer resources (e.g., computer facilities, network devices such as routers and firewalls, telecommunications equipment, and transmission lines) should be established commensurate with the risks of physical damage or access. NIST SP 800-53 recommends that agencies review and update the current physical and environmental protection policy at an organization-defined frequency and conduct an assessment of risks, including the likelihood and magnitude of harm, to the information system and the information it processes, stores, or transmits. Consistent with federal guidance, FDA's Information System Security and Privacy Guide states that physical and environmental protection policies are to be reviewed and updated every 3 years. In addition, the agency's policies for its facilities state that annual physical security reviews are to be conducted. These reviews are to include, among other things, reviewing security measures in effect to compensate for any noncompliance with requirements and corrective actions initiated or planned to eliminate deficient conditions.

While FDA developed and documented physical security policies for its facilities, they had not been reviewed and updated for about 14 years. For example, the physical security policy for its headquarters facilities was dated February 2001, and the physical security policy for field activities was dated October 2000. Neither of these policies had been reviewed and updated since it was established, even though the agency's policy requires this to occur every 3 years. In addition, the agency had not conducted required annual physical security reviews of three of its data center facilities. FDA provided documentation showing that it had reviewed only one of them, and that review, which occurred in July 2013, did not satisfy the annual requirement. According to FDA's CISO and a policy analyst, gaps in reviewing and updating policies and procedures were due to personnel resource constraints and the lack of a streamlined process for reviewing policies and procedures at the agency. As a result, FDA has diminished assurance that its computing resources are protected from inadvertent or deliberate misuse or damage.

In addition to access controls, other important controls should be in place to provide reasonable assurance that the confidentiality, integrity, and availability of an agency's information are protected. These controls include policies, procedures, and techniques for (1) implementing personnel security, such as background investigations; (2) managing and implementing system configurations; (3) effectively planning for system contingencies; and (4) developing and implementing procedures for disposing of media containing sensitive information. While FDA conducted background investigations according to its policy, weaknesses in the other controls increased the risk of unauthorized use, disclosure, modification, or loss of FDA's mission-sensitive information.

The greatest harm or disruption to a system can often come from the actions, both intentional and unintentional, of individuals. The risk of such actions can be reduced through the implementation of security controls over personnel. Background checks should be completed before an individual is authorized to access information systems, and personnel in sensitive positions should be periodically rescreened. Furthermore, FDA policy requires positions to be designated by sensitivity and risk level, and it describes requirements for conducting background investigations of employees and contractors, including periodic reinvestigations of individuals in positions of higher risk or sensitivity.

FDA conducted background investigations for the employees and contractors we reviewed. Specifically, each of the 14 employees and contractors we selected had an up-to-date background investigation consistent with the risk designation of his or her position. As a result, FDA reduced the risk of employing or contracting with individuals whose backgrounds make them unsuitable for accessing its systems.

Configuration management is an important control that involves identifying and managing security features for all hardware and software components of an information system at a given point and systematically controlling changes to that configuration during the system's life cycle. Configuration management involves, among other things, (1) verifying the correctness of the security settings in operating systems, applications, and computing and network devices and (2) obtaining reasonable assurance that systems are configured and operating securely and as intended. In addition, establishing controls over the modification of information system components and related documentation helps prevent unauthorized changes and ensure that only authorized systems and related program modifications are implemented. This is accomplished by instituting policies, procedures, and techniques that help ensure that all hardware, software, and firmware programs and program modifications are properly authorized, tested, and approved. According to NIST SP 800-53, configuration management activities should include documenting approved configuration-controlled changes to information systems, retaining and reviewing records of the changes, auditing those records, and coordinating and providing oversight for configuration change control activities through a mechanism such as a change control board.
Patch management, a component of configuration management, is important for mitigating the risks associated with known software vulnerabilities. When a software vulnerability is discovered, the software vendor may develop and distribute a patch or work-around to mitigate it. Without the patch, an attacker can exploit the vulnerability to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other systems. Outdated and unsupported software is more vulnerable to attack and exploitation because vendors may no longer provide updates, including security updates, to correct software flaws.

FDA has developed, documented, and established policies and procedures to manage configuration changes. In addition, for the systems we reviewed, FDA officials demonstrated that system changes were first requested, tracked, and approved at the system level before being forwarded via an automated tool to FDA's change control board, as required by policy. However, FDA officials could not provide documentation demonstrating that emergency changes to software code, made to remediate security vulnerabilities in response to the 2013 breach of its Internet-facing web application, were tested, validated, and documented. Further, the agency did not always implement secure configuration settings for its systems. For example:

• FDA did not appropriately configure 336 devices, which could prevent proper identity enforcement of these network devices and could allow unauthorized access to other networks and devices.
• FDA used out-of-date and unsupported software on servers storing sensitive data on industry partner regulatory submissions for several of the systems we reviewed.
• Windows file share servers and other application servers on several systems we reviewed were out of date and had reached end of life, in some cases more than 4 years past the support date.
• Two firewalls for managing contractors' access to FDA's network had operating system versions that were close to the end of support, and FDA had no mitigation plans in place to manage this risk.

Similarly, FDA has developed, documented, and established a policy for managing patches that includes time frames for applying patches based on risk, with emergency and out-of-cycle patches to be applied within 48 hours of discovery. However, FDA did not always document emergency changes to software code on an application that supported its Internet services; these changes were made in response to the external Internet attack that resulted in a breach of the system's user account data. In addition, software security updates and patches were not always installed to address known security vulnerabilities, nor were they always timely. For example:

• FDA had not applied security updates and patches for network devices, switches, firewalls, specialized network devices, and servers, as well as contractor-operated network devices, in accordance with NIST's Common Vulnerability Scoring System (CVSS) guidelines for patching devices. These guidelines prescribe that patches be installed within 30 days for critical or high-risk vulnerabilities, 60 days for moderate-risk vulnerabilities, and 90 days for low-risk vulnerabilities, and FDA policy requires that these patching time frames be followed. However, hundreds of these devices had not been updated with the latest patches in over 3 years.
• The agency had not patched 25 servers supporting its infrastructure. For example, one server had not been patched for 6 months, from February to August of 2015.
• FDA had not applied critical security patches to 74 of 82 host virtual servers supporting its infrastructure. In some cases these patches contained major updates that fixed multiple security vulnerabilities.
• Various file share servers for three FDA systems we reviewed had not been patched since 2009.

Without proper implementation of configuration management policies and procedures and adequate security controls, FDA systems are susceptible to many known vulnerabilities.

Losing the capability to process, retrieve, and protect electronically maintained information can significantly affect an agency's ability to accomplish its mission. If contingency planning is inadequate, even relatively minor interruptions can result in lost or incorrectly processed data, which can cause financial losses, expensive recovery efforts, and inaccurate or incomplete information. Contingency planning consists of interim measures to recover information system services after a disruption. Interim measures may include relocating information systems and operations to an alternate site, recovering information system functions using alternate equipment, or performing information system functions using manual methods.

NIST SP 800-53 recommends that agencies establish a contingency planning policy for unplanned disruptions and provide contingency training and exercises at an agency-defined frequency, among other things. In addition, NIST SP 800-34 recommends that a test plan be designed and executed to examine applicable contingency planning elements, such as notification procedures and system recovery on an alternate platform from backup media, to validate the contingency capability. Consistent with NIST guidelines, FDA's Information System Security and Privacy Guide states that contingency planning policies are to be updated every 3 years and that information system contingency plans are to be reviewed annually; FDA policy also requires functional testing of contingency plans on an annual basis.

However, FDA did not follow its own requirements for updating and reviewing contingency policy and plans. For example, FDA's contingency planning policy was established in 2007 but was still marked as a draft document and had yet to be reviewed and updated. Further, FDA did not review, at least annually, the contingency plans for six of the seven applications and general support systems that we reviewed during fiscal year 2015, and it had not developed and documented a contingency plan for the seventh system. In addition, FDA did not adequately test five of the six contingency plans we reviewed. For example:

• For two major applications, FDA conducted procedures to mitigate system disruptions and documented those activities as tests; however, the actions performed were not based on planned tests.
• FDA conducted a planned migration of a general support system's operations to another facility; however, this migration was not a planned contingency test.
• The plans for two general support systems had not been tested since 2013, although those tests did appropriately assess elements such as notification procedures and system recovery.

FDA staff attributed these weaknesses to the lack of a streamlined process for reviewing policies and procedures and to personnel resource constraints, such as the lack of contracted staff to support FDA contingency planning and operations during an organizational transition.
By not finalizing its contingency planning policy and not annually reviewing and testing its contingency plans, FDA has reduced assurance that it has implemented the controls necessary for effectively continuing operations in the event of a disruption.

The destruction and disposal of media are key to ensuring the confidentiality of information. Media can include magnetic tapes, optical disks (such as compact disks), and hard drives. Agencies safeguard used media to ensure that the information it contains is appropriately controlled or disposed of. Media that are improperly disposed of can lead to the inappropriate or inadvertent disclosure of an agency's sensitive information or the personally identifiable information of its employees and customers. NIST SP 800-53 recommends that agencies sanitize media prior to disposal and employ sanitization mechanisms that ensure information cannot be retrieved or reconstructed.

FDA's policy for sanitizing computer-related storage media, including server backup tapes, states that techniques used to sanitize media can include degaussing, among other things. However, FDA did not sanitize media backup tapes that were being stockpiled for disposal. Specifically, at two data center locations, media tapes were stored outside of servers and scheduled for sanitization but had yet to be sanitized and disposed of. At one of the two data centers, we observed a number of older tapes, and FDA staff said these tapes were awaiting disposal. Specifically, staff mentioned that the legacy tapes held data from operations at a prior location and were in a "holding pattern," tentatively scheduled for decommission. Similarly, FDA staff at the second data center acknowledged that approximately 900 tapes were also awaiting disposal and that these tapes held data from older servers, databases, and files resulting from a migration to updated servers. According to the data center staff, the agency had not developed, documented, and implemented a procedure for sanitizing media but planned to have a solution by October 2016. Until FDA fully implements a process for media sanitization, the agency is at increased risk that its sensitive information may not be adequately protected.
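To illustrate sanitization with verification of the sort NIST SP 800-88 describes for its "clear" method, the following Python sketch overwrites a disk-image file and then confirms that none of the original bytes remain. This is a simplified illustration only: magnetic tape of the kind stockpiled at the data centers would instead be degaussed or destroyed, and the file path shown is hypothetical.

import os

def clear_and_verify(path):
    """Overwrite a file with zeros, then verify the overwrite succeeded.

    A simplified stand-in for media sanitization; magnetic tape would be
    degaussed or destroyed rather than overwritten.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # single overwrite pass
        f.flush()
        os.fsync(f.fileno())      # force the data to stable storage
    with open(path, "rb") as f:
        return f.read() == b"\x00" * size  # verification step

# Hypothetical disk image awaiting disposal:
# clear_and_verify("/backups/legacy_tape_image.bin")

The verification step matters as much as the overwrite: without it, an agency cannot demonstrate that information on the media can no longer be retrieved or reconstructed.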
A key reason for the weaknesses in controls over FDA's information and information systems is that it has not yet fully implemented its agency-wide information security program to ensure that controls are effectively established and maintained. If an agency does not fully implement its program, security controls may be inadequate or inconsistently applied; responsibilities may be unclear, misunderstood, or improperly implemented; and organizational and system risks may not be assessed and monitored properly.

FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes
• a periodic assessment of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information or information systems;
• policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce information security risks to an acceptable level, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;
• subordinate plans for providing adequate information security for networks, facilities, and systems or groups of information systems, as appropriate;
• security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training for personnel with significant responsibilities for information security;
• periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk but no less than annually, and including testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems;
• a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in information security policies, procedures, or practices; and
• procedures for detecting, reporting, and responding to security incidents.

FDA has taken steps to implement an information security program and manage information security risks for its major applications and general support systems. However, key components of its information security program have not been fully or consistently implemented.

According to NIST SP 800-30, risk is determined by identifying potential threats to the organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit those vulnerabilities, and assessing the resulting impact on the organization's mission, including the effect on sensitive and critical systems and data. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted, helping to ensure that the policies and controls operate as intended. FDA policy requires that risk assessment results for its systems be reviewed annually and that risk assessments be updated prior to issuing a new authority to operate, whenever there are significant system changes, or every 3 years.

FDA's assessment of risk is conducted as part of its security assessments. Although FDA assessed risk for six of the seven systems we reviewed, it only identified information system control weaknesses and vulnerabilities for those systems; it did not document the likelihood that a particular threat could exploit system vulnerabilities or determine the impact of threats to those systems. For the seventh system, FDA did not assess risk or issue a formal authority to operate. Finally, two of the six risk assessments had not been reviewed annually. During the course of our work, FDA completed the annual review of the risk assessment for one of the two systems, and we verified this action.
However, until FDA completes comprehensive risk assessments and reviews them annually, the agency will have less assurance that it has identified the controls necessary to protect its assets.

A key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern security over an agency's computing environment. Information security policy is essential to establishing the roles, responsibilities, and requirements necessary for implementing an information security program, and the supporting procedures provide the information and guidance for implementing the policies. According to NIST, an agency should develop policies and procedures for each of the NIST families of security controls to facilitate the implementation of the controls. Additionally, HHS and FDA policy require that policies be reviewed every 3 years to ensure that they are sufficient and consistent with federal requirements.

FDA generally took steps to develop and document policies and procedures for its information security program, but it did not always document them or ensure that procedures were complete. For example, while the agency developed policies covering 17 of the 18 NIST control families, it did not develop one for system maintenance. In addition, the agency did not develop or document procedures for implementing controls in 8 of the 18 control families: Audit and Accountability, Identification and Authentication, Maintenance, Media Protection, Physical and Environmental Protection, Security Planning, Systems Communication and Protection, and System Information and Integrity. Of the procedures for the 10 control families that FDA provided, 3 were complete, while the procedures for the other 7 families were incomplete and did not include steps suggested by NIST. For example, procedures for security awareness and training did not cover role-based training, and those for assessment and authorization did not address continuous monitoring as recommended by NIST.

Further, FDA did not review its policies according to its own requirements. Specifically, 11 of the 18 NIST-recommended policies were not reviewed within the agency-defined frequency of 3 years. For example, the agency's personnel security policy was last reviewed in 1986, and policies for other controls, such as access control, identification and authentication, and incident response, had not been reviewed in at least 7 years. FDA conducted an internal review in 2013 to identify the policies that needed to be reviewed and updated, and it established a plan of action and milestones for updating them by November 2013. However, the agency did not meet its own deadline for reviewing and updating 11 of the 17 policies it had developed. According to FDA staff, the policies had not been reviewed and updated because the process had been too cumbersome and required sign-off from a number of stakeholders. FDA's CISO also stated that the office had been understaffed, which led to a large backlog of policies awaiting review.

Having incomplete policies and procedures, or not reviewing them, reduces FDA's assurance that roles and responsibilities have been clearly assigned and understood and that personnel have the information needed to implement its policies, which could lessen the agency's ability to efficiently and effectively protect its information systems.
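The family-by-family gap analysis described above lends itself to a simple automated check. The following Python sketch compares a policy and procedure inventory against the NIST SP 800-53 control families; the "documented" sets shown mirror the gaps reported in this section, but they are placeholders that an agency would populate from its actual inventory, and the family names follow NIST's revision 4 naming rather than the report's shorthand.

# The 18 NIST SP 800-53 security control families (revision 4 naming).
NIST_FAMILIES = {
    "Access Control", "Awareness and Training", "Audit and Accountability",
    "Security Assessment and Authorization", "Configuration Management",
    "Contingency Planning", "Identification and Authentication",
    "Incident Response", "Maintenance", "Media Protection",
    "Physical and Environmental Protection", "Program Management",
    "Personnel Security", "Risk Assessment", "Security Planning",
    "System and Services Acquisition", "System and Communications Protection",
    "System and Information Integrity",
}

# Placeholder inventories mirroring the gaps reported above.
documented_policies = NIST_FAMILIES - {"Maintenance"}
documented_procedures = NIST_FAMILIES - {
    "Audit and Accountability", "Identification and Authentication",
    "Maintenance", "Media Protection",
    "Physical and Environmental Protection", "Security Planning",
    "System and Communications Protection", "System and Information Integrity",
}

print("Families lacking a policy:", sorted(NIST_FAMILIES - documented_policies))
print("Families lacking procedures:", sorted(NIST_FAMILIES - documented_procedures))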
FISMA requires that agencies develop and document system security plans for all major federal information systems. This requirement should be viewed as an essential part of planning adequate, cost-effective security protection for a system. According to NIST, system security plans should provide an overview of the security requirements of the system and should document and describe the security controls and control enhancements in place or planned for meeting those requirements. NIST also recommends that the plans be reviewed and approved by authorizing officials or designated representatives, and that plans be reviewed and updated at least annually to ensure that they continue to reflect correct information about the system, such as changes in system owners, interconnections, and authorization status. Consistent with NIST, HHS and FDA policy require FDA to review system security plans annually.

FDA created security plans and generally documented controls for six of the seven applications and general support systems we reviewed. However, the agency did not always ensure that the plans were complete or that they were reviewed. For example, FDA did not always fully describe the extent to which controls were implemented in each of the six system security plans we examined. Specifically, it did not document 76 of 83 NIST-required high-impact control enhancements in the security plan for the high-impact system used in reporting adverse events. In addition, the agency did not document the control descriptions for 171 of 262 security controls and control enhancements in the plan for the system supporting FDA's infrastructure; the descriptions of how those controls and enhancements were implemented were left blank. This system has an important role in securing the agency's other systems, since 68 of those systems inherit controls from it. FDA also did not demonstrate that any of the six plans we reviewed had been approved or reviewed by authorizing or senior agency officials.

According to an information system security officer, these shortfalls were related to deficiencies in the agency's security management tool and a lack of resources. Officials stated that the tool used for entering information into system security plans had software flaws that did not allow them to properly capture control descriptions; they plan to replace the tool but could not give a firm timeline. Until FDA develops and documents a plan for the system supporting its research and updates its system security plans to reflect current federal control requirements, the agency lacks assurance that the appropriate controls have been identified for the seven systems we reviewed, increasing the likelihood that those controls will not be fully implemented.

According to FISMA, an agency-wide information security program must include security awareness training for agency personnel, contractors, and other users of information systems that support the agency's operations and assets. This training must cover (1) the information security risks associated with users' activities and (2) users' responsibilities in complying with agency policies and procedures designed to reduce these risks. FISMA also includes requirements for training personnel who have significant responsibilities for information security.
According to NIST, agencies should also document and monitor individual information system security training activities, including basic security awareness training and specialized information system security training. Consistent with federal law and guidelines, FDA's Information System Security and Privacy Control Parameters Guide states that the agency should provide role-based, security-related training to all personnel with significant information security responsibilities. The agency's policy also requires that employees with significant security responsibilities participate in role-based training appropriate to their security role before receiving access to the system, when required by system or role changes, and every 3 years thereafter.

FDA tracked and provided security awareness training in fiscal years 2015 and 2016 to each of the 16 users we selected for review. The agency tracks its user awareness training through a vendor-provided web-based application, and according to FDA, it provided awareness training to about 98 percent of its users during fiscal year 2015.

However, the agency did not always track role-based training for those with significant security responsibilities. For example, FDA's tracking system identified only 6 of the 16 individuals selected as having received role-based training. According to FDA personnel, the resulting list was not complete because the agency is re-engineering its process for tracking compliance with specialized security training. In addition, FDA did not fully provide role-based training to those with significant security responsibilities. FDA demonstrated that 6 of the 16 individuals with significant security responsibilities we reviewed received specialized IT training; it responded that the remaining 10 individuals were not system administrators who required specialized training. However, 9 of those 10 individuals had significant security responsibilities; they included the deputy chief information security officer and several information systems security officers. According to FDA staff, the agency is developing role-based training courses for executives and contracting officer's representatives and will update its IT administrator module on or around October 1, 2016.

Until FDA implements procedures that provide reasonable assurance that it tracks and provides role-based training to employees with significant information security responsibilities, the agency will have less assurance that its staff have the knowledge, skills, and abilities, consistent with their roles, to protect the confidentiality, integrity, and availability of its information.

A key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is fundamental because it demonstrates management's commitment to the security program, reminds employees of their roles and responsibilities, and identifies areas of noncompliance and ineffectiveness. FISMA requires that the frequency of tests and evaluations of management, operational, and technical controls be based on risk and occur no less than annually. OMB directs agencies to meet their FISMA-required controls testing by drawing on security control assessment results that include, but are not limited to, continuous monitoring activities.
OMB also requires agencies to develop and maintain an information system continuous monitoring (ISCM) strategy and implement an ISCM program in accordance with NIST guidelines. OMB required agencies to develop their ISCM strategies by February 28, 2014. Continuous monitoring of security controls employed within or inherited by the system is an important aspect of managing risk to information from the operation and use of information systems. The objective of continuous monitoring is to determine if the set of deployed security controls continues to be effective over time in light of the inevitable changes that occur to a system and within an agency. Such monitoring is intended to assist in maintaining an ongoing awareness of information security, vulnerabilities, and threats to support agency risk management decisions. The monitoring of security controls using automated support tools can help facilitate continuous monitoring.

FDA has taken steps to monitor security controls through bi-weekly vulnerability scanning using automated tools. The agency also conducted annual assessments of its information systems. However, the agency did not fully or annually assess controls for 2 of the 7 systems we reviewed. To illustrate, FDA did not assess any of the security controls for a system supporting its scientific research activities. For the other system, which supports FDA's IT infrastructure, the agency had not conducted an assessment since 2013, thus not meeting FISMA's requirement to assess controls at least annually. Further, we found that FDA has not developed and documented a continuous monitoring strategy for its information systems. HHS's inspector general previously reported this weakness in fiscal years 2013 and 2014.

According to FDA staff, the agency plans to assess the infrastructure system during fiscal year 2016, since the system was being restructured during fiscal year 2015. In addition, the agency plans to implement a pilot program for continuous monitoring in August 2016. Further, the agency plans to implement the Department of Homeland Security's Continuous Diagnostics and Mitigation tool in 2016 to improve continuous monitoring of its IT assets. Until it fully tests controls for all systems and develops and documents a continuous monitoring strategy, FDA has less assurance that controls over its information and information systems are in place and operating as intended.

FISMA requires that agency-wide information security programs include a process for planning, implementing, evaluating, and documenting remedial actions to address any deficiencies in the information security policies, procedures, and practices of the agency. Agencies should establish procedures to reasonably ensure that all information security control weaknesses, regardless of how or by whom they are identified, are addressed through the agency's remediation processes. For each identified control weakness, the agency is to develop and implement a plan of action and milestones (POA&M) based on findings from security control assessments, security impact analyses, continuous monitoring activities, audit reports, and other sources. When considering appropriate corrective actions, the agency should, to the extent possible, consider the potential agency-wide implications and design corrective actions that systemically address the deficiency.
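Tracking whether remedial actions meet their milestones can be partly automated. The following Python sketch flags overdue POA&M items and orders them by risk and age so that high-risk weaknesses surface first; the records and the "as of" date are hypothetical, and the risk ordering illustrates the risk-based prioritization discussed in this section rather than any prescribed federal scheme.

from datetime import date

# Hypothetical POA&M records: (weakness, risk level, scheduled completion).
poams = [
    ("Unpatched file share servers", "high", date(2013, 6, 30)),
    ("Draft contingency policy not finalized", "moderate", date(2014, 12, 1)),
    ("Stale firewall rule review", "low", date(2016, 3, 15)),
]

RISK_ORDER = {"high": 0, "moderate": 1, "low": 2}
as_of = date(2016, 8, 1)  # illustrative reporting date

overdue = [p for p in poams if p[2] < as_of]
# Highest-risk, longest-overdue items first, so remediation is prioritized
# by risk rather than by the volume of easy fixes.
overdue.sort(key=lambda p: (RISK_ORDER[p[1]], p[2]))

for weakness, risk, due in overdue:
    print(f"{risk.upper():8} overdue since {due}: {weakness}")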
FDA's Plans of Action and Milestones Guide is generally consistent with federal guidance, and the agency's guide specifically requires that high-risk weaknesses be corrected within 60 days. FDA had also generally developed and documented POA&Ms for addressing security control weaknesses and made efforts to consider the agency-wide implications of security weaknesses. However, it did not always complete remedial actions in a timely manner in accordance with the agency's established deadlines or risk requirements. To illustrate, for the seven major applications and general support systems we examined, 183 of 611 POA&Ms (roughly 30 percent) had not been remedied by their scheduled completion date, and 30 of these were identified as high risk and not corrected within the agency-defined requirement of 60 days. Of the 183 delayed POA&Ms, 102 had a scheduled completion date of 2013 or earlier. As a further example, FDA's remedial action plans listed two high-risk weaknesses identified by its Office of Inspector General in 2006 and 2007, but FDA had not mitigated these weaknesses even though it had planned completion dates in 2012.

FDA personnel stated that they faced challenges in remediating POA&Ms in a timely manner and based on risk. According to FDA personnel, there was a large volume of open POA&Ms and insufficient resources, which delayed addressing weaknesses in a timely manner: as of the first quarter of 2015, FDA had 1,265 open POA&Ms. FDA staff also noted that risk is considered in prioritizing remediation, but that other factors, such as available resources and business impacts, are also considered. FDA personnel stated that, because of the large number of open POA&Ms, they will go after "low-hanging fruit," favoring remediation of a larger number of POA&Ms over concentrating on high-risk weaknesses. By not resolving identified weaknesses in a timely manner, or in accordance with its own policy, FDA faces an increased likelihood that weaknesses, including high-risk vulnerabilities, will go uncorrected, be exploited, and result in greater harm to agency systems and information.

Even with strong information security controls, incidents can still occur. Agencies can reduce the risks associated with these events by detecting and promptly responding to them before significant damage is done. A key element of an effective incident response program is implementing comprehensive policies, procedures, and controls to rapidly detect incidents, minimize loss and destruction, mitigate the weaknesses that were exploited, and restore computing services. NIST SP 800-53 recommends that agencies review and update their incident response policy and procedures at an organization-defined frequency. NIST also recommends that an organization coordinate its incident handling activities with contingency planning activities so that, during a severe incident, the agency has actions in place to keep its business operational. NIST further recommends that agencies incorporate lessons learned from ongoing incident handling activities into incident response procedures, training, and testing, and implement the resulting changes accordingly.

While FDA has developed and documented an incident response policy, the agency did not comply with its own requirement to update that policy every 3 years: the policy has not been updated since it was created in January 2007.
Further, neither FDA's incident response policy nor its procedures require or describe steps for coordinating incident response activities with planning for contingencies or system disruptions. The agency also did not update its incident response procedures using the results of lessons learned from the prior incident response tabletop exercises we examined. For example, results from a 2012 tabletop exercise indicated that FDA should better train its employees so that newer, less-experienced staff are better able to respond to significant cyber incidents, and that FDA should update its procedures to include training requirements. However, these lessons learned were not incorporated into FDA's incident response procedures. Without effective incident response practices in place, FDA has reduced assurance that its systems and information are protected and that it can respond to incidents.

In response to our findings, FDA staff mentioned that the agency is in the process of incorporating lessons learned from incident handling activities into its incident response procedures, training, and testing. In addition, the agency stated that it is taking various steps to address incident response based on our feedback from previous surveys and data requests. The agency stated that it has discontinued its incident response standard operating procedure and was developing a new one based on NIST SP 800-61. The agency's staff also mentioned that personnel will undergo security training and that FDA is piloting various products to improve the agency's overall security posture, including incident response. We have not yet verified that the agency has implemented these actions, but such actions could improve FDA's incident response capability.

Although FDA has implemented numerous controls and taken steps intended to protect its information and information systems, pervasive control weaknesses continue to jeopardize the confidentiality, integrity, and availability of its sensitive information. In fiscal year 2015, the agency centralized the management and location of its network and security operations, with intended goals that include establishing real-time network awareness and improving incident detection. The agency also immediately resolved some of the weaknesses we identified during this review. Nonetheless, significant weaknesses in controls for preventing or limiting unauthorized access to its systems and information, as well as weaknesses in other controls, such as those for ensuring that software and hardware are updated and securely configured and that sensitive media are properly disposed of, put FDA's systems at risk. This is significant considering that these systems handle proprietary business data from companies in multiple industries as well as sensitive public health data.

An underlying cause for many of these weaknesses is that FDA has not fully implemented its agency-wide information security program, including developing and documenting appropriate policies and procedures, ensuring that security controls are tested effectively, remediating weaknesses in a timely manner, planning for contingencies and system disruptions, and effectively managing risks. The widespread weaknesses in technical controls and the incomplete implementation of program elements suggest that the agency has not made effective information security a high enough priority.
Until FDA implements these practices and controls, it will have limited assurance that its information and information systems are adequately protected against unauthorized access, disclosure, modification, or loss.

To effectively implement key elements of the Food and Drug Administration's (FDA) information security program, we recommend that the Secretary of Health and Human Services direct the Commissioner of FDA to implement the following 15 recommendations:
1. Complete a risk assessment and authorization to operate for one FDA system.
2. Ensure that completed risk assessments for the six systems reviewed address the likelihood and impact of threats to FDA.
3. Develop a policy for system maintenance.
4. Develop procedures for the following 8 security control families: Audit and Accountability, Identification and Authentication, Maintenance, Media Protection, Physical and Environmental Protection, Security Planning, Systems Communication and Protection, and System Information and Integrity.
5. Enhance procedures for the following 7 security control families: Access Control, Awareness and Training, Security Assessment and Authorization, Configuration Management, Program Management, Personnel Security, and System and Services Acquisition.
6. Review and update, as needed and at FDA's defined review frequency, the policies for the following 11 security control families: Access Control, Audit and Accountability, Contingency Planning, Identification and Authentication, Incident Response, Media Protection, Physical and Environmental Protection, Security Planning, Personnel Security, System and Services Acquisition, and System and Information Integrity.
7. Develop and document a security plan for the system supporting FDA's scientific research.
8. Update security plans to ensure that they fully and accurately document the controls selected and intended for protecting each of the six systems.
9. Review and approve security plans for the six systems reviewed at least annually.
10. Implement a process to effectively monitor and track training for personnel with significant security roles and responsibilities.
11. Ensure that personnel with significant security responsibilities receive role-based training.
12. Test controls at least annually for the two systems that support FDA's scientific research and IT infrastructure.
13. Implement remedial actions in accordance with FDA's prescribed time frames, or update milestones if actions are delayed.
14. Update FDA's incident response policy in accordance with agency requirements.
15. Update incident response procedures to include (1) instructions for coordinating incident response with contingency planning and (2) lessons learned from incident response tests.

We are also making 166 technical recommendations in a separate report with limited distribution. These recommendations address information security weaknesses related to boundary protection, identification and authentication, authorization, cryptography, physical security, configuration management, and media protection.

We received written comments on a draft of this report from the Department of Health and Human Services (HHS). In the comments (reprinted in appendix II), the department stated that FDA concurred with our recommendations, has begun implementing several of them, and is actively working to address all the recommendations as quickly and completely as possible.
The department also stated that FDA has acquired third-party expertise to assist in these efforts to immediately address the recommendations in our report. The department emphasized its commitment to protecting the public health and proprietary business information at FDA, including by implementing layered defenses and other compensating controls. HHS further noted that FDA has not experienced a major cybersecurity-related breach that exposed industry or public health information and that information security remains a high priority at FDA. The department added that since hiring its CIO in 2015, FDA has undertaken steps to better ensure the prevention, detection, and correction of incidents. These include the development of an IT strategic plan and the restructuring of cybersecurity leadership, among other initiatives.

In addition, HHS noted that we did not identify an elevated risk of exposure and/or exfiltration of trade secret and/or other sensitive information. However, this does not accurately reflect the results of our review. As stated in the report, we identified a significant number of weaknesses in technical controls, including access controls, change controls, and patch management, that jeopardize the confidentiality, integrity, and availability of the seven moderate- and high-impact systems we reviewed. Moreover, several of these weaknesses affected FDA's general support systems, which are connected to numerous systems beyond the ones we reviewed. As previously mentioned, these weaknesses place the seven FDA systems, including those that receive, process, and maintain sensitive industry and public health data, at increased and unnecessary risk of unauthorized access, use, or modification.

The department also made additional comments regarding our report and methodology. In particular, it stated that our methodology did not use an industry-standard approach to assessing risk, defined as the likelihood of a given threat source exploiting a particular vulnerability and the resulting significance of the impact of that adverse event on the organization, or quantify this risk in our overall assessment. We did not perform a comprehensive risk assessment of FDA's information systems and information because that is FDA's responsibility, not ours. However, we did consider the elements of risk to agency systems and information during our review. For example, as stated in the report, in selecting the seven systems we reviewed, we considered FDA's categorization of the impact or magnitude of harm to the agency's operations, assets, and individuals should the confidentiality, integrity, or availability of the systems and the information they contain be compromised. Six of the seven systems we selected were assigned a Federal Information Processing Standard rating of moderate or high impact by FDA, indicating that the loss of confidentiality, integrity, or availability of these systems or the information they contain would have either a serious or severe/catastrophic impact on the organization. We also considered how each control weakness, vulnerability, or program shortcoming we identified could impair or diminish the effectiveness of a security control or be exploited to facilitate unauthorized system activity. Our report identifies numerous weaknesses and vulnerabilities along with their potential impact if the vulnerabilities are exploited.
It is also noteworthy that our work determined that, for the reviewed systems, FDA had not determined the likelihood and impact of threats to those systems.

HHS also stated that our report did not consider other FDA tools, resources, and capabilities designed to prevent, detect, and correct incidents, such as its ability to prevent or mitigate breaches like the one that occurred in October 2013. We recognize that FDA has implemented numerous security controls and key elements of its information security program; however, the weaknesses we identified nevertheless pose increased and unnecessary risk to its systems and information. For example, as noted in our report, FDA had not updated its incident response policy since 2007 or incorporated other key elements. Having a complete and up-to-date incident response capability is essential to ensuring that FDA staff have the knowledge and tools to effectively respond to security incidents, such as breaches.

Finally, the department stated that our report does not consistently or clearly distinguish which of the systems reviewed contained sensitive information and which did not. It noted, for example, that FDA's Scientific Network is a research and development network that does not contain trade secret information. However, as we noted in our report, FDA's systems operate in an interconnected and networked environment, and the agency had not ensured that the Scientific Network, for example, was adequately isolated from other systems containing sensitive data, nor had it developed and implemented risk management controls for this system. These weaknesses could provide an attacker with a pathway from this less-secure system to other systems containing sensitive public health or proprietary business data, and they therefore pose an increased risk to the sensitive information FDA collects and maintains.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to relevant congressional committees, the Secretary of Health and Human Services, the Commissioner of FDA, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov or Dr. Nabajyoti Barkakati at (202) 512-4499 or barkakatin@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

The objective of our review was to evaluate the extent to which the Food and Drug Administration (FDA) has implemented information security controls to effectively protect the confidentiality, integrity, and availability of its information on selected information systems. To determine the effectiveness of FDA's security controls, we gained an understanding of the overall network environment, identified interconnectivity and control points, and examined controls for the agency's networks and facilities. We reviewed controls over the network infrastructure and selected systems that processed confidential commercial and proprietary business information. We performed our work at FDA headquarters in Silver Spring, Maryland, and at several data centers in Ashburn, Virginia, and Silver Spring, Maryland.
We selected a non-generalizable sample of seven systems for review that (1) receive, transmit, and/or process sensitive drug information; (2) are essential to FDA's mission, support its business processes, and contain or process sensitive proprietary business information; and (3) were assigned a Federal Information Processing Standard rating of moderate or high impact. These systems perform the following support functions:

• Support and facilitate post-market product safety surveillance of human drugs, biologics, devices, and combination products.
• Provide a data repository for collecting, storing, viewing, analyzing, reporting, and tracking the receipt of adverse event data or medication errors.
• Establish a single gateway or communications portal for accepting electronic submissions or allowing authorized users to view or obtain information. Examples of electronic submissions include industry-provided trade secrets, adverse event records, and a multitude of different records related to FDA's regulatory oversight of regulated products.
• Provide capabilities for regulatory scientific research, while also supporting FDA's overall goals and objectives in areas where information technology requires supercomputer-strength computational power.
• Support FDA's research and development activities.
• Provide a platform through which FDA organizations may disseminate FDA-related information to interested parties, including the public, health professionals, regulated industries, and the media. This includes information about the various product areas that FDA regulates (food, drugs, medical devices, cosmetics, etc.); timely advisories (e.g., anticipated disease outbreaks such as Severe Acute Respiratory Syndrome (SARS), buying medicines online, and LASIK surgery) and other FDA activities; and links to related reference materials and opportunities for consumers and industry to interact with the FDA.
• Provide basic network and security capabilities for the FDA enterprise.
• Facilitate receipt and review of electronic drug applications, including scans and checks of the validity of drug submissions from industry, making them available for reviewers, and providing file shares for storing successful submissions that are to be reviewed.

To evaluate FDA's controls over its information systems, we used our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information; National Institute of Standards and Technology (NIST) standards and guidelines; Department of Health and Human Services guidelines; FDA policies and procedures; and standards and guidelines from relevant security and IT security organizations, such as the National Security Agency, the Center for Internet Security, and the Interagency Security Committee.
Specifically, we
• reviewed firewall configurations, among other things, to determine whether system boundaries had been adequately protected;
• reviewed the complexity and expiration of password settings to determine if password management was being enforced (a simplified sketch of such a check appears after these lists);
• analyzed administrative users' system access permissions to determine whether their authorizations exceeded those necessary to perform their assigned duties;
• observed configurations for providing secure data transmissions across the network to determine whether sensitive data were being encrypted;
• reviewed software security settings to determine if modifications of sensitive or critical system resources had been monitored and logged;
• observed physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft;
• examined configuration settings and access controls for routers, network management servers, switches, and firewalls;
• inspected key servers and workstations to determine if critical patches had been installed and/or were up to date;
• examined contingency plans for seven systems to determine whether those plans had been developed and tested;
• reviewed media handling procedures to determine if equipment used for clearing sensitive data had been tested to ensure correct performance; and
• reviewed personnel clearance procedures to determine whether staff had been properly cleared prior to gaining access to sensitive information or information systems.

Using the requirements identified by the Federal Information Security Modernization Act of 2014 (FISMA), which establishes key elements for an effective agency-wide information security program, and associated NIST guidelines and Department of Health and Human Services and FDA requirements, we evaluated FDA's information security program by
• reviewing assessments of risk for six FDA systems to determine whether threats and vulnerabilities were being identified;
• analyzing FDA policies, procedures, and practices to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems;
• analyzing security plans for six systems to determine if those plans had been documented and updated according to federal guidance;
• examining the security awareness training for employees and contractors to determine whether they had received training according to federal requirements;
• examining training records for personnel who have significant security responsibilities to determine whether they had received training commensurate with those responsibilities;
• analyzing FDA's procedures and results for testing and evaluating security controls to determine whether management, operational, and technical controls for seven systems had been sufficiently tested at least annually and based on risk;
• reviewing FDA's implementation of continuous monitoring practices to determine whether the agency had developed and implemented an information system continuous monitoring strategy to manage its IT assets and monitor the security configurations and vulnerabilities for those assets;
• examining FDA's process for correcting weaknesses to determine whether remedial action plans complied with federal guidance; and
• reviewing FDA's implementation of incident response practices.
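As referenced in the first list above, a review of password complexity and expiration settings can be expressed as a short script. The following Python sketch compares observed settings with a baseline; the baseline values and the sample settings are illustrative assumptions, not the criteria GAO or FDA actually used.

# Illustrative baseline; an actual review would use agency policy values.
BASELINE = {"min_length": 12, "max_age_days": 90, "require_complexity": True}

def check_password_settings(observed):
    """Return findings where observed settings fall short of the baseline."""
    findings = []
    if observed.get("min_length", 0) < BASELINE["min_length"]:
        findings.append("minimum password length below baseline")
    if observed.get("max_age_days", float("inf")) > BASELINE["max_age_days"]:
        findings.append("password expiration interval exceeds baseline")
    if not observed.get("require_complexity", False):
        findings.append("complexity requirement not enforced")
    return findings

# Hypothetical settings observed on a reviewed server:
sample = {"min_length": 8, "max_age_days": 365, "require_complexity": False}
for finding in check_password_settings(sample):
    print(finding)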
To determine the reliability of FDA's computer-processed data, we evaluated the materiality of the data to our audit objective and assessed the data by various means, including reviewing related documents, interviewing knowledgeable agency officials, and reviewing internal controls. Through a combination of methods, we concluded that the data were sufficiently reliable for the purposes of our work.

We conducted this performance audit from February 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective.

In addition to the individuals named above, Gary Austin, West Coile, Larry Crosland, and Chris Warweg (Assistant Directors); Vernetta Marquis (Analyst in Charge); Alexander Anderegg, Angela Bell, Saar Dagani, Angel Ip, Lee McCracken, Constantine Papanastasiou, Dwayne Staten, and Michael Stevens made key contributions to this report.
FDA has the demanding responsibility of ensuring the safety, effectiveness, and quality of food, drugs, and other consumer products. In carrying out its mission, FDA relies extensively on information technology systems to receive, process, and maintain sensitive industry and public health data, including proprietary business information such as industry drug submissions and reports of adverse reactions. Accordingly, effective information security controls are essential to ensure that the agency's systems and information are adequately protected from inadvertent or deliberate misuse, improper modification, unauthorized disclosure, or destruction.

GAO was asked to examine security controls over key FDA information systems. GAO assessed the extent to which FDA had effectively implemented information security controls to protect the confidentiality, integrity, and availability of its information on seven information systems selected for review. To do this, GAO reviewed security policies, procedures, reports, and other documents; examined the agency's network infrastructure; tested controls for the seven systems; and interviewed FDA personnel.

Although the Food and Drug Administration (FDA), an agency of the Department of Health and Human Services (HHS), has taken steps to safeguard the seven systems GAO reviewed, a significant number of security control weaknesses jeopardize the confidentiality, integrity, and availability of its information and systems. The agency did not fully or consistently implement access controls, which are intended to prevent, limit, and detect unauthorized access to computing resources. Specifically, FDA did not always (1) adequately protect the boundaries of its network, (2) consistently identify and authenticate system users, (3) limit users' access to only what was required to perform their duties, (4) encrypt sensitive data, (5) consistently audit and monitor system activity, and (6) conduct physical security reviews of its facilities.

FDA conducted background investigations for personnel in sensitive positions, but weaknesses existed in other controls, such as those intended to manage the configurations of security features on, and control changes to, hardware and software; plan for contingencies, including system disruptions and their recovery; and protect media such as tapes, disks, and hard drives to ensure that information on them was "sanitized" and could not be retrieved after disposal. The table below shows the number of GAO-identified weaknesses and associated recommendations, by control area.

These control weaknesses existed, in part, because FDA had not fully implemented an agency-wide information security program, as required under the Federal Information Security Modernization Act of 2014 and the Federal Information Security Management Act of 2002. For example, FDA did not
• ensure that risk assessments for reviewed systems were comprehensive and addressed system threats,
• review or update security policies and procedures in a timely manner,
• complete system security plans for all reviewed systems or review them to ensure that the appropriate controls were selected,
• ensure that personnel with significant security responsibilities received training or that such training was effectively tracked,
• always test security controls effectively and at least annually,
• always ensure that identified security weaknesses were addressed in a timely manner, and
• fully implement procedures for responding to security incidents.
Until FDA rectifies these weaknesses, the public health and proprietary business information it maintains in these seven systems will remain at an elevated and unnecessary risk of unauthorized access, use, disclosure, alteration, and loss. GAO is making 15 recommendations to FDA to fully implement its agency-wide information security program. In a separate report with limited distribution, GAO is recommending that FDA take 166 specific actions to resolve weaknesses in information security controls. HHS stated in comments on a draft of this report that FDA concurred with GAO's recommendations and has begun implementing several of them.
The mineral-rich DRC, Africa's second-largest country, has been plagued by cycles of violence and instability. Since 1998, violent conflicts, poverty, and disease have killed more than 5.4 million people in the country, according to estimates by the International Rescue Committee. The DRC was colonized as a personal possession of Belgian King Leopold II in 1885 and administered by the Belgian government starting in 1907. It achieved independence from Belgium in 1960. For almost 30 years of the post-independence period, the DRC, then known as Zaire, was ruled by an authoritarian regime under Mobutu Sese Seko. Following the 1994 genocide in Rwanda and the establishment of a new government there, some perpetrators of the genocide and refugees fled to the neighboring Kivu provinces of eastern DRC. A rebellion began there in 1996, pitting the forces led by Laurent Kabila against the army of President Mobutu Sese Seko. Kabila's forces, aided by Rwanda and Uganda, took the capital city of Kinshasa in 1997 and renamed the country the Democratic Republic of the Congo. See figure 1 for a map of the DRC's provinces and neighboring countries. A period of civil war among rival rebel groups ensued. In 2001 Laurent Kabila was assassinated and leadership shifted to his son Joseph Kabila, while the civil war continued. Starting in 1999, the UN Security Council authorized peacekeeping operations in the DRC, which now operate as the United Nations Organization Stabilization Mission in the Democratic Republic of the Congo (MONUSCO). Initially, the operation focused on the ceasefire, the disengagement of forces, and the maintenance of liaison with all parties to the civil war; its mandate later expanded to include the effective protection of civilians, humanitarian personnel, and human rights defenders under imminent threat of physical violence. The presence of illegal armed groups, such as M23, has continued to be an issue that MONUSCO has monitored in recent years. In November 2012, M23 occupied the city of Goma, a provincial capital in eastern DRC in the North Kivu province, and other cities in eastern DRC and clashed with the Congolese national army. During this time, the UN reported cases of sexual violence perpetrated by armed groups and members of the Congolese national army against women and children. While M23 eventually withdrew from the cities, the group's presence in the region continues. In February 2013, the UN reported that eastern DRC continues to be plagued by recurrent waves of conflict, chronic humanitarian crises, and serious human rights violations, including sexual and gender-based violence. The report added that contributing factors to the cycles of violence have been the continuing presence of Congolese and foreign armed groups taking advantage of security vacuums in the eastern part of the country, the illegal exploitation of resources, interference by neighboring countries, and the weak capacity of the national army and police to effectively protect civilians and the national territory and ensure law and order. In March 2013, the UN Secretary-General appointed a Special Envoy to the Great Lakes Region of Africa to support the implementation of the 11-nation "Peace, Security and Cooperation Framework for the Democratic Republic of the Congo and the Region" adopted in February 2013. According to the UN, the agreement seeks to end the recurring cycle of conflicts and crisis in the eastern DRC and to build peace.
Additionally, on March 28, 2013, the UN Security Council authorized the deployment of an intervention brigade within the current peacekeeping operations in the DRC to address imminent threats to peace and security. The objectives of the new force, based in North Kivu province, are to neutralize armed groups, reduce the threat they pose to state authority and civilian security, and make space for stabilization activities. Congress has focused on issues related to the DRC for almost a decade. In 2006, Congress passed the Democratic Republic of the Congo Relief, Security, and Democracy Promotion Act of 2006. The act stated that it is the policy of the United States, among other things, to engage with governments working for peace and security throughout the DRC and hold accountable individuals, entities, and countries working to destabilize the government. In July 2010, Congress included several provisions in section 1502 of the Dodd-Frank Act related to conflict minerals in the DRC and adjoining countries. Specifically, section 1502(a) of the Act states that "it is the sense of Congress that the exploitation and trade of conflict minerals originating in the Democratic Republic of the Congo is helping to finance conflict characterized by extreme levels of violence in the eastern Democratic Republic of the Congo, particularly sexual- and gender-based violence, and contributing to an emergency humanitarian situation therein," warranting the provisions of section 1502(b) of the Act. Section 1502(b) requires SEC, in consultation with State, to promulgate disclosure and reporting regulations regarding the use of conflict minerals from the DRC and adjoining countries. In November 2011, State and USAID, in collaboration with NGOs, industry, and other governments, launched the Public-Private Alliance for Responsible Minerals Trade (PPA) to support responsible supply chain solutions regarding conflict minerals from the DRC and neighboring countries. The PPA supports pilot programs, with the ultimate goal of producing scalable, self-sustaining systems, to demonstrate a fully traced and validated conflict-mineral supply chain in a way that is credible to companies, civil society, and government. According to USAID, in addition to the PPA, the U.S. government's contribution to the Responsible Minerals Trade Program in the DRC region has amounted to almost $19 million and includes activities focused on the protection of artisanal mining communities, institutional and human capacity building for responsible minerals trade, and capacity building in mining sector security, among other issues. The SEC Commissioners adopted the final conflict minerals rule on August 22, 2012, after a number of delays during the drafting process. SEC reported that during its rule-making process it received more than 400 letters commenting on the draft rule. As adopted, the final rule applies to any issuer that files reports with SEC under Section 13(a) or Section 15(d) of the Securities Exchange Act of 1934 (Securities Exchange Act) and uses conflict minerals that are necessary to the functionality or production of a product manufactured by that issuer or contracted by that issuer to be manufactured. According to SEC, issuers that have a reporting obligation are domestic and foreign companies that offer shares publicly and file forms 10-K, 20-F, or 40-F with SEC. For the purposes of our report, we refer to those issuing companies affected by the rule as "SEC-reporting companies under the rule." (See app. II for more information on the steps a company needs to take to fulfill its reporting requirements.)
Under the rule, such companies must file a disclosure report and conduct a "reasonable country of origin inquiry" to determine whether they must also file a conflict minerals report. Companies that are required to file a conflict minerals report must exercise due diligence on the source and chain of custody of their conflict minerals. The due diligence measures used by companies must conform to a nationally or internationally recognized due diligence framework, such as the due diligence guidance approved by OECD. If a company determines that its products are "DRC conflict-free" because they may have originated from the covered countries but did not finance or benefit armed groups, then the company must obtain an independent private sector audit and provide certification that it conducted an audit. If a company's products have not been found to be "DRC conflict-free," then the company must provide additional information in its conflict minerals report. For a temporary period—4 years for smaller reporting companies or 2 years for all other reporting companies—if a company is unable to determine whether the minerals in its products originated in the DRC or the adjoining countries or financed or benefited armed groups in those countries, then those products are considered "DRC conflict undeterminable" and no audit is required. Under the rule, all companies will need to file their first disclosure report to SEC on May 31, 2014, which covers the 2013 calendar year, and on May 31 annually thereafter. Figure 2 shows the reporting time frames for SEC-reporting companies under the rule.
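The filing logic just described can be condensed into a short decision sketch. The following Python fragment is our illustrative reading of the rule as summarized in this report, not SEC guidance; the function name and category labels are invented for the example.

```python
from typing import List, Optional

def filing_outcome(rcoi_suggests_covered_origin: bool,
                   due_diligence_result: Optional[str]) -> List[str]:
    """Sketch of the filings described above, tracking the rule's own labels.

    rcoi_suggests_covered_origin: whether the "reasonable country of origin
    inquiry" gives reason to believe the minerals may have originated in
    the DRC or an adjoining country.
    due_diligence_result: "conflict_free", "not_conflict_free", or
    "undeterminable" (available only during the temporary transition
    period); None if no due diligence was required.
    """
    filings = ["disclosure report"]  # every covered company files this
    if not rcoi_suggests_covered_origin:
        return filings  # no conflict minerals report is required
    filings.append("conflict minerals report")
    if due_diligence_result == "conflict_free":
        # Products described as "DRC conflict-free" require an independent
        # private sector audit and a certification that it was conducted.
        filings.append("independent private sector audit and certification")
    elif due_diligence_result == "not_conflict_free":
        filings.append("additional information in the conflict minerals report")
    elif due_diligence_result == "undeterminable":
        pass  # "DRC conflict undeterminable": no audit during the transition
    return filings

print(filing_outcome(False, None))
print(filing_outcome(True, "conflict_free"))
```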
In October 2012, the U.S. Chamber of Commerce, the National Association of Manufacturers, and the Business Roundtable filed a lawsuit against SEC regarding the final conflict minerals rule. In their petition, the industry associations asked that the rule "be modified or set aside in whole or in part." The petitioners have asked the court to review, among other things, whether SEC's economic analysis is inadequate and whether SEC's interpretations of certain key terms in section 1502 of the Act are consistent with congressional intent. The four conflict minerals covered by section 1502(b) of the Dodd-Frank Act are mined in various locations around the world. For example, tin is predominantly mined in China, Indonesia, Peru, and Bolivia, as well as in the DRC, while tantalum is reportedly predominantly mined in areas such as Australia, Brazil, and Canada. From 2006 through 2011, the majority of tungsten production—reportedly 77 to 87 percent of global production—was mined in China. Gold, however, is mined in many different countries, including the DRC. Our review of United States Geological Survey data on tantalum, tin, tungsten, and gold mined in the DRC showed that about 12 percent of the global tantalum supply and less than 1 percent of the global tungsten supply was mined in the DRC in 2011. About 3 percent of the global tin supply, and less than 1 percent of the global gold supply, was mined in the DRC in 2010. As we reported in our 2012 report, various industries, particularly in manufacturing, use these minerals in a wide variety of products and in varying amounts. For example, many industries use tin in the form of tin solder, which is used to join metal pieces together. According to company representatives, tin is also found in food packaging, in steel coatings on automobile parts, and in some plastics. According to industry association and company representatives, the majority of tantalum is used to manufacture tantalum capacitors, which enable energy storage in electronic products such as cell phones and computers. Tungsten is used in automobile manufacturing, drill bits and cutting tools, and other industrial manufacturing tools. It is also the primary component of filaments in light bulbs. In addition to its use as currency and in jewelry, gold is also used by other industries, such as the electronics industry. A company's supply chain for products containing tin, tantalum, tungsten, and gold can be complex and can vary considerably in the way it operates, according to industry association and company representatives. Generally, however, the supply chain for companies using conflict minerals begins at the mine site, where tin, tantalum, and tungsten ore are extracted from the ground using mechanized or artisanal mining techniques. Figure 3 depicts the supply chain for all four conflict minerals. SEC's adoption of the final conflict minerals rule on August 22, 2012, has raised companies' awareness regarding conflict minerals and the due diligence necessary to identify whether conflict minerals may have benefited armed groups. Specifically, officials representing industry associations stated that the final conflict minerals rule has acted as an impetus for some of their members to start thinking about whether the rule impacts them, and some have also started collecting information to comply with the rule. Officials stated that stakeholder-developed initiatives, such as in-region and global sourcing initiatives, may increase companies' assurance that conflict minerals they are using are not benefiting armed groups in the DRC and neighboring countries. However, constraining factors such as the lack of security, lack of infrastructure, and capacity constraints could undermine companies' ability to ensure conflict-free sourcing from the region. Since SEC issued the final conflict minerals rule pursuant to the Dodd-Frank Act, companies have become more aware of the issues surrounding conflict minerals and have started to consider the source of materials used in products, given the requirement in the final rule for a company that uses tin, tantalum, tungsten, or gold to exercise due diligence on the source and chain of custody of its conflict minerals if there is reason to believe that they may have originated in the DRC or an adjoining country. According to some industry officials we interviewed, the final rule has helped resolve some uncertainties, such as the breadth of the industries covered, that existed before the promulgation of the rule. Numerous industry officials and representatives from international organizations and NGOs we interviewed have indicated that the creation and promulgation of the SEC rule has increased visibility into the issue of conflict minerals and raised awareness of the due diligence process, particularly for those companies that are not required to report under the rule but that may still be impacted indirectly by it. Specifically, officials of industry associations representing member companies that use tin, tantalum, tungsten, or gold in their products stated that many companies are aware of the SEC rule, especially the larger companies that may file a disclosure report with SEC, and are working to start complying with the rule.
Some smaller companies, which may not be required to report under the rule, may not be as aware of or familiar with the rule but are receiving information from industry associations on how the rule may impact them. For example, some officials from industry associations stated that they were putting together guidance documents that break down the SEC rule and had also sent questions to SEC seeking to clarify points in the rule. Agency officials stated that the SEC rule has raised the visibility of conflict minerals globally. For example, State reported in February 2013 that the issuance of the SEC rule was a vital step in establishing a clear and harmonized global framework for responsible minerals trade from the DRC region. Furthermore, State indicated that the SEC rule has also shaped and influenced initiatives to create a conflict-free supply chain by the International Conference of the Great Lakes Region (ICGLR) and the governments of the DRC and Rwanda. We provide a more detailed discussion later in this report of the ways in which companies required to report under the rule are interacting, in order to comply, with companies not required to report under the rule. Some agency officials we interviewed stated that stakeholder-developed initiatives focused on sourcing of minerals may enhance companies' ability to achieve the SEC rule's desired outcome of denying armed groups in the DRC benefits from conflict minerals. As mentioned in our 2012 report, stakeholder-developed initiatives—which include the development of guidance documents, audit protocols, and in-region sourcing—support efforts by companies reporting to SEC under the rule to (1) conduct due diligence of their conflict minerals supply chain, (2) identify the source of conflict minerals within their supply chain, and (3) responsibly source conflict minerals. These initiatives can be classified as in-region or global, and some are now being expanded. In-region sourcing initiatives, as we reported in 2012, may support responsible sourcing of conflict minerals from Central Africa and the identification of specific mines of origin for those minerals. Regional sourcing initiatives in the DRC and neighboring countries focus on tracing minerals from the mine to the mineral smelter or refiner by supporting a bagging and tagging program or some type of traceability scheme. Examples of such initiatives include the ITRI Tin Supply Chain Initiative (iTSCi) and the Conflict-Free Tin Initiative (CFTI). (See app. III for more detailed information on these and selected other in-region sourcing initiatives.) The iTSCi initiative was developed by a tin industry association known as ITRI. The initiative supports responsible sourcing of tin, tantalum, and tungsten from Central Africa and was launched in Rwanda in December 2010 and in the Katanga province of the DRC in March 2011. iTSCi expanded its activities to the Maniema province of the DRC in December 2012. We reported in 2012 that iTSCi is a traceability and due diligence program that creates auditable and verifiable chains of custody for tin, tantalum, and tungsten through (1) tagging and bagging of materials and the collection of tagging data and (2) regular incident reporting and continuous monitoring of mines and companies participating in the program.
In October 2012, the Dutch government, with industry partners such as iTSCi, started the Conflict-Free Tin Initiative, focused on conflict-free tin sourcing from South Kivu in the DRC, a region that is prone to insecurity and violence by illegal armed groups. This initiative is a traceability and due diligence mechanism that brings partners along the supply chain together, from mine to smelter to end user; it also involves consumers as well as the DRC government and civil society and uses the OECD due diligence guidance. According to an implementer, the progress of the initiative will depend on how the security situation in South Kivu develops. While the in-region sourcing initiatives have focused on tin, tantalum, and tungsten to date, one of the most recent in-region initiatives in the DRC, undertaken through the Public-Private Alliance for Responsible Minerals Trade (PPA), is focused on a pilot gold traceability scheme. In the fall of 2012, Partnership Africa Canada began work on establishing an in-region gold traceability project partially funded by the PPA. According to information from the PPA, the project aims to create a traceable conflict-free mineral chain for artisanal gold from the eastern DRC, in the Orientale province, thus demonstrating the feasibility of creating artisanal gold chains with full traceability from mine site to gold refiner. According to industry and agency officials, in-region sourcing programs can provide a better economic incentive for miners to sell minerals that do not benefit armed groups. For example, iTSCi reported in February 2013 that the tin initiative in South Kivu had led to a number of immediate benefits for the local population, which depends on mining as its source of income. Specifically, the price paid to the miners for conflict-free minerals mined at the site had more than doubled. iTSCi further reported that the additional income had allowed the mining cooperatives to invest in basic equipment such as electricity generators and to improve productivity and working conditions. Additionally, some agency officials stated that in-region initiatives can help develop capacity in the DRC. We reported in 2012 that global sourcing initiatives may minimize the risk of minerals that have been exploited by illegal armed groups entering the supply chain and support companies' efforts to identify the source of the conflict minerals across the supply chain around the world. (See app. III for more detailed information on selected global sourcing initiatives.) One such global initiative is the Conflict-Free Smelter Program, co-developed by the Global e-Sustainability Initiative (GeSI) and the Electronics Industry Citizenship Coalition (EICC). The Conflict-Free Smelter Program is a voluntary initiative in which an independent third party audits smelters' procurement activities, among other activities, and determines whether the smelters demonstrated that the minerals they processed originated from conflict-free sources. Companies that can trace their conflict minerals supply chain back to compliant smelters or refiners can claim that the minerals in their products are from a smelter whose processes reasonably assure conflict-free production. Industry experts we interviewed explained that if initiatives such as the Conflict-Free Smelter Program can result in smelters refining minerals that did not benefit armed groups, then companies can better comply with the SEC rule requirements and have more confidence in their supply chain sources.
Specifically, one expert stated that companies conducting due diligence under the rule would not have to audit a smelter themselves if the smelter has already been audited under the Conflict-Free Smelter Program. Agency officials, both in Washington, D.C., and in the DRC, viewed the Conflict-Free Smelter Program as a positive initiative because the more smelters are certified as conflict-free, the more beneficial the program will be for companies reporting under the SEC rule. Overall, agency officials we interviewed stated that existing initiatives and traceability schemes on the ground in the DRC and neighboring countries have begun to yield conflict-free minerals; however, according to these officials, more progress could be made in the responsible sourcing of conflict minerals from the region. Some industry experts also indicated that while progress has been made and the initiation or expansion of in-region sourcing initiatives is possible, factors such as the ones described below remain a concern. Some agency officials as well as representatives we interviewed from NGOs, industry, and international organizations cited lack of security, inadequate infrastructure, and capacity constraints as factors that could affect the ability to expand on efforts to achieve conflict-free sourcing of minerals from the eastern DRC and thereby potentially contribute to armed groups benefiting from the conflict minerals trade. We also cited these same factors in our 2010 report and pointed out that they posed challenges to tracking the mines of origin for minerals artisanally mined in eastern DRC. While officials we spoke to for this report discussed these factors in the context of the SEC rule, they are regional challenges that pre-date both the Dodd-Frank Act and the SEC conflict minerals rule. Officials cited the lack of security, including weak governance, as a factor that could impact responsible sourcing from the DRC. The UN reported that the DRC government has been unable to exercise authority in eastern DRC, which has become more evident as illegal armed groups clashed in the Kivu provinces late in 2012. State also reported that lack of security has prevented the export of conflict-free minerals from certain areas in eastern DRC. Industry and NGO officials who work on the ground in the DRC pointed out that the threat from illegal armed groups poses a challenge to the conflict-free minerals initiatives operating in eastern DRC and the neighboring provinces. Although the mining sites are constantly monitored, the monitoring activities could be suspended at any time as the security situation evolves. For example, an NGO reported that tagging was suspended for days in July 2012 at an iTSCi site in the Katanga province because of the movement of armed groups in the vicinity of the mine sites; however, there were no reported cases of armed groups successfully taking control of the sites or directly exploiting minerals to fund activities. In-region sourcing initiatives have operated in areas that have been vetted by various stakeholders and have the support of government and civil society actors. According to the UN Group of Experts on the Democratic Republic of the Congo (UNGoE), the security situation at tin, tantalum, and tungsten mining sites has improved, and the trade of these minerals has become a much less important source of financing for armed groups.
However, the UNGoE reported a "genuine risk that military actors would move their rackets to mining activities that were not closely supervised." It further reported that the gold trade is linked to armed groups and criminal networks in the Congolese armed forces. According to the UNGoE, lack of security at gold mining sites throughout eastern DRC remained widespread. Agency officials emphasized that armed groups still existed in the DRC despite the initiatives in place and would seek control of any significant revenue-producing activity in the region. Some industry officials cited concerns about sourcing from the DRC, even through the in-region sourcing initiatives, because of the potential impact on brand reputation and financial risk. For example, a representative of a smelter indicated that if the company purchased minerals from a mine that is part of a traceability scheme that is deemed conflict-free, but illegal armed groups later infiltrated and compromised the mine, the company would not be able to say with certainty that the minerals it had purchased were conflict-free. Officials cited limited infrastructure as a factor that could affect the creation or expansion of in-region sourcing initiatives. Officials from the UNGoE and industry representatives we interviewed noted a lack of the infrastructure that would enable companies to set up or expand operations in the DRC. Limited transportation and poor roads in eastern DRC also make it difficult to get to mine sites. For example, an agency official in the DRC commented that mines may be a day's walk from a main road. Also, an NGO reported that in selecting a potential pilot site for a traceability scheme, accessibility to the site by road was a key criterion and would involve using off-road vehicles due to the significant deterioration of the roads leading to the mine. Moreover, according to an NGO representative, the remoteness of mines also makes it difficult for DRC mine officials to validate mines and ensure that they have not been compromised by armed groups. Furthermore, State officials indicated that the lack of infrastructure prevents trade initiatives from developing economies of scale and expanding. Officials we interviewed cited the lack of technical, economic, and political capacity as another factor that may affect the creation or expansion of in-region sourcing initiatives focused on responsible sourcing in the DRC and neighboring countries. In 2013, the OECD reported that while the understanding of responsible sourcing is "high for those actors in the DRC and Rwanda who have participated in such initiatives, the same is not true for state agents" in the country. The OECD report also pointed out that Ugandan and Burundian government officials and other entities lack technical understanding of due diligence requirements. Some NGO officials stated that lack of capacity can impact the due diligence process in the supply chain, especially if the number of trained mining agents is insufficient. For example, some agency officials and an NGO reported that the DRC does not have enough mine agents to certify the mines, of which there may be over 2,000 in eastern DRC alone, or even to negotiate and manage mining contracts. Moreover, an NGO official stated that mines need to be reinspected every 6 to 12 months to ensure proper due diligence in accordance with OECD and ICGLR guidance; however, the official stated that the DRC government does not have the capacity to inspect at such frequency.
Some agency officials and officials we interviewed from industry, NGOs, and international organizations also commented that the DRC government lacks the capacity to mitigate corruption and smuggling. The lack of capacity can impact due diligence and can contribute to illegal minerals trade and cross-border smuggling. For example, the UN reported that illegal trade of minerals undermines the exercise of due diligence in the DRC and affects the credibility of due diligence-based certification and traceability systems. According to some industry experts, mining agents may not be properly compensated, due to the lack of governance in eastern DRC, and may look for other ways to earn money, which could involve colluding with illegal armed groups. With regard to smuggling, the OECD reported that as long as there are no traceability or certification schemes in place that cover the whole region, most notably the Kivu provinces, Uganda, and Burundi, smuggling and contamination of clean materials will continue to pose a threat to formalization of the artisanal mining sector and due diligence initiatives. According to a 2012 UNGoE report, several tons of gold worth hundreds of millions of dollars are smuggled from the eastern DRC through neighboring countries, where it is ultimately smelted and sold to jewelers in markets such as the United Arab Emirates. Representatives from some industry associations that we interviewed stated that armed groups and criminal elements have shifted efforts to gold mines because gold's small size makes it relatively easy to smuggle. Furthermore, gold's high value in the market makes it more viable for smuggling than tin, tantalum, and tungsten. Even companies that are not required to file disclosures under SEC's conflict minerals rule will likely be affected by the rule. These companies may supply components or parts that contain conflict minerals to companies reporting to SEC under the rule and may be asked by such companies to provide information specifying the origin of the minerals. Aside from the supply chain relationship, while information is publicly available about some smelters and refiners, there is little aggregated information available about companies that do not report to SEC under the rule but may trade in conflict minerals. Companies that are not required to report to SEC under the rule may supply products that contain conflict minerals to SEC-reporting companies under the rule. SEC relied on estimates provided by a commentator indicating that 278,000 suppliers—most of which would be companies that would not report to SEC under the rule—could be indirectly impacted by the rule. Moreover, the release contains an estimate that each of the nearly 6,000 companies that could be directly impacted by the rule has roughly 1,000 first-tier suppliers, on average. These suppliers, including first-tier suppliers, could provide products that contain conflict minerals to companies required to report to SEC under the rule. Examples of these products include tin solder for joining metal, tantalum capacitors for storing energy in cellular phones, tungsten carbide for hardened cutting tools, and gold plating for wires to increase durability and resistance to corrosion. The first-tier supplier has a direct commercial relationship with the original equipment manufacturer, meaning the first-tier supplier sells materials or component parts, which have been aggregated by suppliers throughout the supply chain, to the original equipment manufacturer for final assembly.
According to an industry official, in general, component parts manufacturers construct individual parts—such as capacitors, engine parts, circuit boards, and other components—and assemble them into more complex components.
Estimate of the Number of Companies Required to Report to SEC under the Rule
SEC estimates that 5,994 reporting issuers (primarily companies that issue stock publicly and are required to report to SEC) will be affected by the rule and will need to determine if their products contain conflict minerals. According to SEC, reporting issuers are domestic and foreign companies that file forms 10-K, 20-F, and 40-F under the Securities Exchange Act. According to SEC and industry officials, these companies vary in size and revenue but, in general, tend to be larger, mature companies that can have a diverse product line; their revenues can range from millions of dollars to hundreds of billions; they are domestic and foreign companies; and they may have operations in several countries. According to SEC and industry officials, some reporting issuers may sell products to consumers. Using an electronics company as a model, processed metals move through several suppliers that manufacture component parts after the smelter—first to circuit board and computer chip manufacturers, then to cellular phone and other electronics manufacturers, and finally to the brand-name electronics company, which is the original equipment manufacturer that manufactures products recognizable to the consumer, such as cellular phones, tablets, and laptop computers. Beyond the first-tier supplier, there are tier 2, 3, 4, or higher-tiered suppliers that, beginning with the raw materials from the smelter or refiner, manufacture component parts that are assembled into more complex component parts as they move from higher- to lower-tiered suppliers in the supply chain, to the first-tier supplier, and finally to the original equipment manufacturer. See figure 4 for a simplified version of the supply chain and the tiered structure of suppliers. While many companies will likely be directly or indirectly impacted by the rule, some companies that use conflict minerals may not be, partly because (1) the companies are not issuers that are required to file with SEC under the Securities Exchange Act, and (2) these same companies potentially do not sell components or parts to a company that will be required to report to SEC under the rule. Industry and consulting firm representatives have differing views on the number of companies that purchase conflict minerals from the DRC and adjoining countries but may not be impacted by the rule. Suppliers that provide products that may contain conflict minerals to companies required to report to SEC under the rule may provide information on the minerals' origins to those reporting companies that request it. The SEC release does not specify the steps and outcomes for the reasonable country of origin inquiry, indicating instead that such a determination depends on each issuer's facts and circumstances. However, in conducting a country of origin inquiry, issuers may ask their suppliers about the origin of any conflict minerals in the products. According to the release, the issuer's inquiry must be reasonably designed to determine whether any of its conflict minerals originated in the DRC and adjoining countries, and must be performed in good faith.
If, after this inquiry, the issuer has reason to believe that its conflict minerals may have originated in the DRC and adjoining countries, the issuer proceeds to exercise due diligence. Industry associations such as the EICC and GeSI have created templates for companies to use when contacting suppliers to inquire about the types and origins of conflict minerals in a given product. For example, companies required to report under the rule could submit the inquiries to their first-tier suppliers. Those suppliers could either provide the reporting company with sufficient information or initiate the inquiry process up the supply chain, such as by distributing the inquiries to suppliers at the next tier—tier 2 suppliers. The tier 2 suppliers could inquire up the supply chain to additional suppliers, until the inquiries arrive at the smelter. Smelters then could provide the suppliers with information about the origin of the conflict minerals. Figure 5 illustrates the flow of information up the supply chain; a simplified sketch of this flow also follows this discussion. As discussed earlier, smelters have various means to preclude untraced minerals from entering their supply, such as participation in the iTSCi initiative and the Conflict-Free Smelter Program. According to smelting industry representatives, these initiatives and certifications have reduced the burden of responding to the volume of inquiries many smelters have already received from suppliers. Officials from consulting firms and industry associations that we spoke with told us that many companies that will report under the rule have started contacting their first-tier suppliers and providing them with country of origin inquiries. According to these officials, several of the companies that have submitted inquiries to their suppliers have experienced challenges, including identifying suppliers beyond the first tier, because for original equipment manufacturers, suppliers beyond the first tier are less visible. As discussed earlier, original equipment manufacturers purchase component parts primarily from their first-tier suppliers and do not have direct commercial relationships with suppliers in higher tiers of the supply chain. According to industry representatives and agency officials, these challenges may impact how companies file under the rule. For example, as previously discussed, the SEC rule allows companies to disclose their products as "DRC conflict undeterminable." This provision allows companies to state that the source of the conflict minerals in their products, and the likelihood that the conflict minerals benefited or financed armed groups from the DRC and adjoining countries, could not be determined after having conducted due diligence to obtain that information from their suppliers. For the reporting period beginning January 1, 2013, smaller reporting companies may use this provision for 4 years and all other reporting companies for 2 years. Although the number of companies required to report under the rule that may utilize the "DRC conflict undeterminable" provision is unknown, SEC officials and representatives of industry associations and consulting firms anticipate that many companies required to report under the rule will utilize the provision based on the results of their due diligence efforts.
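The tiered inquiry process summarized in figure 5 can be modeled compactly. The following Python sketch is a hypothetical illustration of how a country-of-origin inquiry might be forwarded from an original equipment manufacturer up through supplier tiers until smelters answer; the supplier names and data structures are invented for the example.

```python
# Simplified model of the inquiry flow described above: a reporting company
# sends an inquiry to its first-tier suppliers, each supplier forwards it
# up the chain, and smelters answer with origin information.
def propagate_inquiry(supplier, supply_chain, smelter_origins):
    """Collect origin answers by walking up the supply chain tiers.

    supply_chain: maps a supplier to the next-tier suppliers it buys from;
    smelters appear as suppliers with no upstream entries.
    smelter_origins: maps a smelter to its declared mineral origin.
    """
    upstream = supply_chain.get(supplier, [])
    if not upstream:  # reached a smelter: it answers the inquiry
        return {supplier: smelter_origins.get(supplier, "unknown")}
    answers = {}
    for next_tier in upstream:  # forward the inquiry to the next tier
        answers.update(propagate_inquiry(next_tier, supply_chain, smelter_origins))
    return answers

# Hypothetical chain: OEM -> tier 1 -> tier 2 -> smelter.
chain = {"oem": ["tier1_a"], "tier1_a": ["tier2_a", "tier2_b"],
         "tier2_a": ["smelter_x"], "tier2_b": ["smelter_y"]}
origins = {"smelter_x": "conflict-free (audited)", "smelter_y": "undetermined"}
print(propagate_inquiry("oem", chain, origins))
```

In this toy example, one smelter's "undetermined" answer illustrates how a reporting company could end up filing under the "DRC conflict undeterminable" provision despite conducting the inquiry in good faith.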
Representatives from industry and consulting firms that we interviewed stated that the purchasing power of issuing companies under the rule may influence their suppliers to provide information on the source of any conflict minerals in their products when requested. According to an industry representative, since companies that report to SEC under the rule tend to be large, mature corporations with great purchasing power in their respective industries, it would be difficult for suppliers to ignore their requests for information on the origin of conflict minerals in products the suppliers provide to them. For example, jewelry industry representatives told us that they have advised their members, which are primarily small, independent jewelry companies not required to report to SEC under the rule, to respond to any requests from customers seeking information on the origin of conflict minerals in products they supply, because not responding could result in a loss of business for those companies. Some information is publicly available about smelters and refiners and their involvement in the conflict minerals supply chain. According to SEC officials, while smelters and refiners are not exempted from the SEC rule, most of these suppliers will likely not be required to report to SEC under the rule because of their filing status. Smelters and refiners are considered the choke point of the conflict minerals supply chain, as previously discussed, and comprise a small portion of the overall number of suppliers in the conflict minerals supply chain that may be impacted by the SEC rule. While it is not possible to determine the universe of suppliers that would not be required to report under the rule, smelters and refiners are a more identifiable population for which there is some aggregated information, such as the types of conflict minerals they use and their location. We found the following information on smelters and refiners: Smelters and refiners constitute a small but important portion of suppliers that likely will not file a conflict minerals report under the SEC rule. Organizations have estimated the number of smelters and refiners around the world to be nearly 500; however, the actual number of smelters and refiners of conflict minerals is unknown. We aggregated publicly available information on smelters and refiners from lists compiled by the EICC and GeSI, the OECD, and the London Bullion Market Association (LBMA), which provided information on 278 smelters and refiners. As we have previously discussed, roughly 278,000 suppliers could be affected by the rule, based on the estimate provided to SEC. Of the 278 smelters of tin, tantalum, and tungsten and refiners of gold that we were able to identify, the majority (271) would likely not be required to report under the rule. Over half of the smelters and refiners of the conflict minerals we identified were located in three countries. Of the 278 smelters and refiners of tin, tantalum, tungsten, and gold that we were able to identify, more than half (156) were located in three countries: China (82), Japan (39), and Indonesia (35).
Several organizations, including an NGO, and representatives from government and industry are conducting outreach to smelters to provide information on the SEC rule in an effort to increase participation from smelters and refiners from these countries. For more information on the location of the smelters and refiners we identified in our analysis, see figure 6. Many smelters and refiners of conflict minerals in our analysis processed tin. Of the 278 smelters and refiners in our analysis, we were able to identify 113 that processed tin, followed by gold (78), tungsten (54), and tantalum (33). Furthermore, over half (64 of 113) of the tin smelters in our analysis were located in China (30) or Indonesia (34). In addition, over 67 percent of global tin production comes from mines in China and Indonesia, according to U.S. Geological Survey data. Tin and its derivatives have wide applications and are used in manufacturing a variety of products, including tin soldering for joining pipes, coatings for steel containers, and a wide range of tin chemical applications, according to the U.S. Geological Survey. According to industry and consulting firm representatives, around 12 to 15 smelters process nearly 80 percent of the world's tin. Most smelters and refiners in our analysis did not have a conflict minerals policy publicly available. Of the 278 smelters and refiners we were able to identify, 63 had a conflict minerals policy publicly available on their websites. Of those 63, 26 had successfully completed a Conflict-Free Smelter Program audit and were designated as "conflict-free" by the EICC and GeSI, while several had reportedly followed some form of due diligence, such as the OECD Due Diligence Guidance or the LBMA Responsible Gold Guidance. Other smelters in our analysis had posted policies on their websites stating that the company sources conflict minerals only from outside the conflict areas of the DRC and adjoining countries. We were unable to identify a website for 86 of the 278 smelters in our analysis, and 129 of the 278 had no conflict minerals policy publicly available on their website. Some information is publicly available on companies that use conflict minerals but are not required to report under the rule, as in the case of many smelters and refiners we were able to identify. However, data for the universe of these companies are limited. Specifically, based on our analysis, aggregated data on the types of conflict minerals in the products manufactured by these companies, as well as information on how such companies source their conflict minerals, are not available except for a few companies. For example, several of these companies provide information publicly about their continued participation in initiatives that source conflict minerals from the DRC and have agreed to purchase conflict minerals, such as tin and tantalum, from closely monitored sources through initiatives such as iTSCi and Solutions for Hope, as previously mentioned. (Solutions for Hope is a "closed-pipeline" initiative to trace the flow of tantalum from the mine to the end-use company.) However, according to agency and international organization officials, in some instances buyers from small firms, mainly from East Asia, are on the ground in the DRC and adjoining countries and continue to purchase untraced minerals as well as minerals that have been smuggled out of the DRC into adjoining countries.
In addition, according to an industry representative, it may be difficult to identify information on these companies because they tend to be small and serve very specific markets. Since our 2012 report, one population-based survey providing data on the rate of sexual violence has been published in Uganda, and one is under way in the DRC; during the same period, no similar surveys have been conducted in Rwanda or Burundi. We also found some additional case file data available on sexual violence for all four countries. However, as we reported in 2011, case file data on sexual violence are not suitable for estimating a rate of sexual violence. We found that one new population-based survey on the rate of sexual violence has been conducted since our 2012 report—the 2011 Uganda Demographic and Health Survey (DHS), published in August 2012. According to the survey, "28 percent of women and 9 percent of men age 15-49 report that they have experienced sexual violence at least once in their lifetime." These national estimates are based on a random sample. Since we first reported on sexual violence in our 2011 report, we have identified six other population-based surveys that provided data on the rate of sexual violence in these countries.
Surveys Are More Appropriate for Estimating a Rate of Sexual Violence
In our 2011 report on sexual violence, we discussed two sources of data on sexual violence in eastern DRC and neighboring countries—population-based surveys and case files—and concluded that population-based surveys are more appropriate for estimating a rate of sexual violence. Case file data have shortcomings and biases that significantly limit their utility for estimating the rate of sexual violence. For example, case file data are not generated from a random sample; are reliant on victims seeking services to be counted, although some victims may lack access to services; and allow for the potential double counting of the same sexual violence incident in the case file data collected. In reviewing whether there had been updates to any of the previous surveys conducted, we found that the authors of the McGill study, a population-based survey conducted in eastern DRC that was highlighted in our 2011 report, had no plans to conduct a follow-up survey. We found that fieldwork for a DHS for the DRC is expected to launch in August 2013, with data expected around September 2014. We also found that a team of two organizations released estimates in 2010 based on survey data of sexual violence in Rwanda. Sonke Gender Justice Network and Promundo-US conducted a probability cluster sample in 2010 as part of its IMAGES survey and found that "57 percent of women reported having experienced gender-based violence committed by a partner" and "17 percent of men experienced sexual violence when they were a child." However, this survey was not weighted to reflect unequal probabilities of selection, and it does not contain confidence intervals. Therefore, we are not able to assess the accuracy or precision of the estimates.
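To illustrate why unweighted estimates without confidence intervals are hard to assess, the following Python sketch computes a design-weighted proportion and a normal-approximation confidence interval using Kish's effective sample size. The numbers are invented for illustration and are not data from any survey cited in this report.

```python
# Illustrative only: made-up numbers, not data from any survey cited here.
import math

def weighted_proportion_ci(values, weights, z=1.96):
    """Design-weighted proportion with a normal-approximation 95% CI.

    values: 0/1 indicators (e.g., reported an incident); weights: design
    weights reflecting each respondent's probability of selection. Uses
    Kish's effective sample size to account for unequal weights.
    """
    wsum = sum(weights)
    p = sum(w * y for w, y in zip(weights, values)) / wsum
    n_eff = wsum ** 2 / sum(w * w for w in weights)  # Kish approximation
    se = math.sqrt(p * (1 - p) / n_eff)
    return p, (p - z * se, p + z * se)

# Toy data: an oversampled stratum gets weight 0.5, an undersampled one 2.0.
values = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
weights = [0.5, 0.5, 0.5, 0.5, 0.5, 2.0, 2.0, 2.0, 2.0, 2.0]
p, (lo, hi) = weighted_proportion_ci(values, weights)
print(f"weighted rate {p:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

In this toy data, the weighted rate (0.24) differs from the unweighted rate (0.30), and the wide interval reflects the small effective sample size; without weighting and an interval, neither difference would be visible.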
Following up on our 2011 and 2012 reports, we asked U.S. and UN agencies as well as researchers and NGOs if they had any updated case file data. In April 2013, State submitted its annual country reports on human rights practices to Congress, which provided case file information pertaining to sexual violence in the DRC and neighboring countries. The 2012 Department of State Human Rights Reports reported the following: In the DRC, the Ministry of Gender reported 10,037 cases of sexual- and gender-based violence in 2011 in eastern DRC. In Rwanda, prosecutors reported that they investigated 351 cases of rape in 2012; of those 351 cases, 109 were filed in courts, 143 were dropped, and 99 were pending investigation. In Uganda, 520 cases of rape were reported in 2011, of which 269 were tried. In Burundi, Centre Seruka, a clinic for rape victims, averaged 121 rape cases per month between January and September 2012. Various UN entities reported other case file data. In March 2013, the UN Secretary-General reported that 764 people had become victims of sexual violence in eastern DRC from December 2011 through November 2012. In May 2013, the UN Joint Human Rights Office reported that an armed group committed 135 cases of sexual violence from November 20 through November 30, 2012. Furthermore, in December 2012, the UN Office for the Coordination of Humanitarian Affairs reported 70 rapes in Minova, a town in eastern DRC, from November 30 through December 4, 2012. Because case file data are not aggregated across various sources and the extent to which various reports overlap is unclear, it is difficult to obtain complete case file data or even a sense of magnitude. One shortcoming of both case file data and surveys is that time frames, locales, and definitions of sexual violence are not consistent across data collection operations. As we reported in 2011, case file data on sexual violence are not suitable for estimating a rate of sexual violence because case file data are not based on a random sample and the results of analyzing these data are not generalizable. We provided a draft of this report to SEC, State, and USAID for their review and comment. SEC, State, and USAID provided technical comments, which we incorporated in this report as appropriate. We also provided relevant portions of the draft to external stakeholders for their technical comment. We received technical comments from some of these stakeholders, which we incorporated throughout this report as appropriate. We are sending copies of this report to appropriate congressional committees. The report is also available at no charge on the GAO website at http://www.gao.gov/. If you or your staffs have any questions about this report, please contact me at (202) 512-4802 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To describe factors that may impact whether the Securities and Exchange Commission's (SEC) conflict minerals rule denies armed groups in the Democratic Republic of the Congo (DRC) and adjoining countries benefits from conflict minerals, we interviewed officials from SEC, the Department of State (State), and the United States Agency for International Development (USAID), as well as representatives from international organizations, nongovernmental organizations (NGO), industry associations, consulting firms, and smelters and refiners of tin, tantalum, tungsten, and gold to get their views on the final SEC rule as well as any impacting factors. We chose the experts and stakeholders we interviewed to capture a range of perspectives about the types of minerals traded and because we had established contacts with these entities during our last review.
In addition, some of the stakeholders we talked to have been working on the ground in the DRC. These experts and stakeholders constitute a nongeneralizable sample, so the information gathered cannot be used to infer the views of other experts or stakeholders cognizant of conflict minerals issues. We reviewed Section 1502 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Pub. L. No. 111-203); reports and other documents from relevant U.S. agencies, such as SEC's final conflict minerals rule, press releases, and statements; reports issued by the UN Group of Experts on the Democratic Republic of the Congo (UNGoE) and the Organisation for Economic Co-operation and Development (OECD); as well as documents and reports from industry associations and NGOs. We did not travel to the DRC or speak with government officials in the DRC but obtained perspectives on issues from some stakeholders who operate there. To identify and describe available information about entities that use conflict minerals and do not report to SEC under the rule, we interviewed officials from SEC, State, and USAID, as well as representatives from international organizations, NGOs, industry associations, consulting firms, and smelters and refiners of tin, tantalum, tungsten, and gold to get their views on the extent to which information is publicly available on companies that are not required to report under the rule but may use conflict minerals in their products, and on the source of the conflict minerals. We reviewed and analyzed reports and other documents from organizations such as the OECD, the London Bullion Market Association (LBMA), and the Electronics Industry Citizenship Coalition and the Global e-Sustainability Initiative (EICC and GeSI), as well as documents and reports from industry associations and NGOs. In addition, we conducted searches in the Nexis database using selected Standard Industrial Classification (SIC) codes listed under the Manufacturing division. Overall, there were 20 subcategories under the Manufacturing division of SIC codes, which include subcategories such as Tobacco Products; Paper and Allied Products; and Electronic and Other Electrical Equipment and Components, Except Computer Equipment. We selected SIC codes under the Manufacturing division for industries that have a higher likelihood of using conflict minerals in their products, such as Electronic and Other Electrical Equipment and Components, Except Computer Equipment. Through the database analysis, we were able to determine the filing status, location, revenue, and industry classification of the companies. We were unable to determine the types of products the companies produced and the types of conflict minerals potentially used in the manufacturing process of those products. Because SIC codes do not indicate specific products, we were unable to use the Nexis data to develop an aggregate description of entities that use conflict minerals but do not report to SEC under the rule. We compiled a list of smelters and refiners—which are a smaller universe of companies that are primarily not required to report under the rule—from the EICC and GeSI's Conflict Minerals Reporting Template and Dashboard, OECD's Final Downstream Report On One-Year Pilot Implementation of the Supplement on Tin, Tantalum, and Tungsten, and the LBMA's Good Delivery List. The data were current as of March 15, 2013. We selected these smelters and refiners because information is publicly available on the types of minerals they process; however, we did not conduct an audit to verify how these entities sourced materials for processing. To compile our list, we reviewed and compared the lists from each source to identify and delete duplicate smelters and refiners. Additional duplicates were identified and deleted as a result of Internet searches using the names of the smelters and refiners. While we made efforts to eliminate duplicate information where possible, smelters and refiners may be listed under different names, and therefore some duplicate information may exist in the data.
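The list-merging step just described can be illustrated with a short sketch. The following Python fragment shows one simple way to combine the three public lists, normalize names, and drop exact duplicates; the entries are placeholders, not real smelters, and real matching is harder (transliterated names and aliases), which is why some duplicates may remain in the data.

```python
# Minimal sketch of merging smelter/refiner lists and deduplicating by
# normalized name. Entries are placeholders, not real companies.
import re

def normalize(name):
    """Lowercase and strip punctuation and extra spaces for comparison."""
    return re.sub(r"[^a-z0-9 ]", "", name.casefold()).strip()

def merge_smelter_lists(sources):
    """sources: dict of list name -> iterable of smelter names."""
    merged = {}
    for source, names in sources.items():
        for name in names:
            key = normalize(name)
            entry = merged.setdefault(key, {"name": name, "sources": set()})
            entry["sources"].add(source)  # track which lists mention it
    return merged

sources = {
    "EICC/GeSI": ["Example Tin Smelter Co.", "Sample Gold Refiner"],
    "OECD": ["example tin smelter co"],  # same company, different casing
    "LBMA": ["Sample Gold Refiner", "Another Refiner Ltd."],
}
merged = merge_smelter_lists(sources)
print(len(merged), "unique entries")  # 3
for entry in merged.values():
    print(entry["name"], sorted(entry["sources"]))
```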
The information in the lists from the EICC and GeSI, the OECD, and the LBMA included (1) the location of the smelter or refiner, (2) the types of minerals smelted or refined, and (3) in some cases, the due diligence guidance reportedly followed. We identified 278 smelters and refiners of tin, tantalum, tungsten, and gold, and we analyzed any publicly available information—mainly information posted on the companies' websites or provided on the websites of organizations such as the EICC and GeSI or the LBMA—on their practices and policies for sourcing conflict minerals. This analysis included examining the websites of 192 of the 278 smelters and refiners to identify the types of due diligence guidance they reported using to determine the country of origin of their conflict minerals sources. We were unable to identify websites for 86 smelters or refiners on our list, which could be the result of a smelter not having a website or of differing transliterations of company names from foreign scripts—such as Chinese characters or the Cyrillic alphabet—into the Roman alphabet. An additional limitation is the size of our sample: we identified 278 smelters and refiners, while organizations have estimated the worldwide number to be nearly 500, particularly if smaller smelters and refiners that process ores into metals at the mine site are included. These smelters and refiners, or secondary smelters, often have small operations and may not have a website, according to an industry representative. Furthermore, the number of gold refiners could be much larger still, considering that little equipment and space is required to refine gold, depending on its quality, and that gold can be refined at the mine site. The 278 smelters and refiners we were able to identify may not be representative of others, and the information we report about these 278 cannot be generalized to other smelters and refiners of tin, tantalum, tungsten, and gold. In response to a mandate in the Dodd-Frank Wall Street Reform and Consumer Protection Act that GAO submit an annual report assessing the rate of sexual violence in war-torn areas of the DRC and adjoining countries, we identified and assessed any additional published information available on sexual violence in war-torn eastern DRC, as well as in three neighboring countries that border eastern DRC—Rwanda, Uganda, and Burundi—since our 2012 report on sexual violence in these areas. During the course of our review, we interviewed officials from State and USAID and interviewed NGO representatives and researchers to discuss the collection of sexual violence-related data—including population-based surveys and case file data—in the DRC and adjoining countries.
Specifically, we followed up with researchers and representatives from those groups we interviewed for our prior review on sexual violence rates in eastern DRC and neighboring countries, including officials from the United Nations Population Fund, the United Nations High Commissioner for Refugees, and the United Nations Special Representative of the Secretary-General on Sexual Violence in Conflict, as well as representatives from the Harvard Humanitarian Initiative and others. We also conducted Internet literature searches to identify new academic articles containing any additional data on sexual violence. We conducted this performance audit from November 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Securities and Exchange Commission (SEC) issued a flowchart summary of the final rule to guide SEC-reporting companies affected by the rule through the disclosure process (see figure 8). In general, the process shows that an SEC-reporting company needs to (1) determine whether its manufactured products contain conflict minerals; (2) determine whether conflict minerals are necessary to the product and, if so, whether the conflict minerals originated in the DRC or an adjoining country; and (3) depending on the results, conduct due diligence and potentially provide a Conflict Minerals Report. In our 2012 report, we discussed a number of initiatives that various stakeholders developed and implemented that may help companies reporting to the Securities and Exchange Commission (SEC) and their suppliers comply with SEC’s conflict minerals disclosure rule. For this report, we updated information pertaining to some of these global and in-region sourcing initiatives. The Organisation for Economic Co-operation and Development (OECD) adopted the OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas (hereafter referred to as OECD Due Diligence Guidance) to promote accountability and transparency in conflict minerals supply chains. The OECD Due Diligence Guidance and the corresponding supplements provide detailed guidance for companies operating in and sourcing minerals from conflict areas. In addition to the basic framework, OECD developed two supplements—one on tin, tantalum, and tungsten and the other on gold—to provide companies with specific guidance relevant to the conflict minerals supply chains. To increase awareness of and to develop emerging practices for implementing the OECD Due Diligence Guidance and the supplement on tin, tantalum, and tungsten, OECD conducted implementation pilot projects. In January 2013, OECD issued the final downstream report, which focuses on how companies implement due diligence in the supply chains of tin, tantalum, and tungsten, and the final upstream report, which provides an overall assessment of the progress and initial impact of due diligence in the tin, tantalum, and tungsten upstream supply chain. The Conflict-Free Smelter Program is a voluntary program in which smelters undergo an independent third-party audit, in accordance with the OECD Due Diligence Guidance, to verify the origin of minerals processed at their facilities.
The EICC and GeSI have also developed audit protocols for the program in consultation with a number of stakeholders—including NGOs, smelters, component manufacturers, original equipment manufacturers, and industry associations within and outside the electronics industry—to ensure widespread support for the program. In December 2010, the first tantalum smelter was certified conflict-free through the program after successfully undergoing an audit, and as of May 1, 2013, 18 of approximately 23 tantalum smelting companies had been certified as conflict-free. As of the same date, 5 tin smelting companies had been certified as conflict-free, 7 tungsten smelting companies had begun discussions with representatives of the program, and 12 gold refining companies had been certified as conflict-free through the program. The World Gold Council developed and issued the Conflict-Free Gold Standard, an industry-led approach to combat the potential misuse of mined gold to fund armed conflict, in October 2012. The standard was developed with council member companies, which comprise the world’s leading gold producers, and with extensive input from stakeholders to establish a common approach by which gold producers can assess and provide assurance that their gold has been extracted in a manner that does not cause, support, or benefit unlawful armed conflict or contribute to serious human rights abuses or breaches of international humanitarian law. According to a World Gold Council official, the participating companies’ conformance to the standard will be externally audited and assured and will operationalize the requirements of OECD guidance. The results of audits using the standard will be recognized across other stakeholder initiatives, such as the London Bullion Market Association’s Responsible Gold Guidance. The standard should also support refiners in meeting their due diligence requirements. The Conflict-Free Tin Initiative (CFTI) is a pilot that was launched in September 2012 and aims to create demand for conflict-free tin from eastern DRC. The traceability and due diligence mechanism, implemented through the ITRI Tin Supply Chain Initiative, is run by Pact, an independent NGO, and operates out of the Kalimbi mine in South Kivu. According to an NGO, the Netherlands Ministry of Foreign Affairs is a neutral broker that brought the partners along the supply chain together, from mine to smelter to end user. The DRC government and local civil society are closely involved in the initiative, which is structured within the framework of the International Conference of the Great Lakes Region (ICGLR) and will be consistent with the due diligence guidance of OECD. CFTI reported that between October 2012 and January 2013, 210 tons of materials were produced at the Kalimbi mine, and the first container of conflict-free tin was transported to the trader in the DRC in December 2012. In January 2013, the first two containers of conflict-free tin were shipped to the smelter in Malaysia. The CFTI reports that next steps will involve the conflict-free tin making its way from the smelter to soldering companies and eventually to end users as finished products. The ICGLR started working with an NGO in 2010 to develop a regional certification mechanism to ensure that conflict minerals are fully traceable.
ICGLR’s regional certification mechanism may enable member countries and their mining companies to demonstrate where and under what conditions minerals were produced; through the mechanism, individual member governments are to issue ICGLR regional certificates for mineral shipments that comply with its standards. According to an official at a partnering NGO, the first two certificates out of the region were scheduled to come from sites in Rwanda and the DRC in the late spring and from Uganda by December 2013. However, State indicated that the certificates from Rwanda and the DRC have been delayed and will likely not be issued until late summer 2013. Regional certificates from other ICGLR countries will take some time because of capacity issues. According to USAID, in addition to the regional certification mechanism, ICGLR’s other initiatives focused on eliminating the illegal exploitation of natural resources include harmonization of national legislation, formalization of the artisanal mining sector, formalization of the Extractive Industries Transparency Initiative, a whistleblowing mechanism, and a regional database on the flow of minerals. In addition to the individual named above, Godwin Agbara, Assistant Director; Andrea Riba Miller; Kyerion Printup; Justin Fisher; Debbie Chung; Ernie Jackson; Russ Burnett; Etana Finkler; Brian Hackney; and Leah DeWolf made key contributions to this report.
The eastern part of the DRC has experienced recurring conflicts involving armed groups that have resulted in severe human rights abuses. In addition, armed groups have profited from the exploitation of minerals. In 2010, Congress enacted Section 1502(b) of the Dodd-Frank Wall Street Reform and Consumer Protection Act to address the exploitation of conflict minerals, which include tin, tantalum, tungsten, and gold, and the extreme levels of violence in the DRC. As required by Section 1502(b), the SEC issued a rule in August 2012 that requires companies to disclose their use of conflict minerals and the origin of those minerals. The act requires GAO to report on the rule’s effectiveness, among other issues, beginning in 2012 and annually thereafter. Initial company disclosure reports to SEC that would enable GAO to assess the effectiveness of the rule will not be due until May 2014. This report describes, among other issues, (1) factors that may impact whether SEC’s rule denies armed groups in the DRC benefits from conflict minerals and (2) information about companies that use conflict minerals and are not required to report to SEC under the rule. GAO reviewed and analyzed documents and interviewed representatives from SEC, the Department of State, the U.S. Agency for International Development, industry associations, NGOs, consulting firms, and international organizations. GAO also analyzed smelter and refiner information. This report does not contain recommendations. Stakeholder-developed initiatives may facilitate companies’ compliance with the Securities and Exchange Commission’s (SEC) final conflict minerals rule, but other factors may affect the rule’s impact on reducing benefits to armed groups in the Democratic Republic of the Congo (DRC) and neighboring countries. Agency and industry officials as well as representatives from international organizations and nongovernmental organizations (NGO) stated that adoption of the rule as well as stakeholder-developed initiatives—which include the development of guidance documents, audit protocols, and in-region sourcing of conflict minerals—can support companies’ efforts to conduct due diligence and to identify and responsibly source conflict minerals. For example, officials GAO interviewed explained that the Conflict-Free Smelter Program enables suppliers to source conflict minerals from smelters (companies that refine the ore of the conflict minerals into metals) that have been certified by an independent third-party auditor as obtaining their minerals from sources that did not benefit armed groups. However, officials GAO interviewed cited constraining factors such as lack of security, lack of infrastructure, and lack of capacity in the DRC that could affect the ability to expand on efforts to achieve conflict-free sourcing of minerals from eastern DRC and thereby potentially contribute to armed groups’ benefiting from the conflict minerals trade. For example, officials GAO interviewed noted that there is a lack of infrastructure in place that would enable companies to set up or expand operations in the DRC. Limited transportation and poor roads in eastern DRC also make it difficult to get to mine sites. Moreover, according to officials, the remoteness of mines also makes it difficult for DRC officials to validate mines and ensure that the mines have not been compromised by illegal armed groups. Companies that are not required to file disclosures under SEC’s conflict minerals rule may be affected by the rule.
These companies may supply components or parts that contain conflict minerals to companies that report to SEC under the rule, many of which could be original equipment manufacturers and component parts manufacturers. Estimates provided by public commenters responding to the rule indicate that roughly 280,000 suppliers could provide products to roughly 6,000 companies that report to SEC under the rule; as part of the rule’s due diligence requirements, these suppliers may be asked to provide information on their use of conflict minerals and the origin of the minerals. GAO found little available aggregated information about companies that do not report to SEC under the rule. However, GAO found that for smelters and refiners there is some aggregated information, such as the types of conflict minerals they use and their location. For example, GAO found that over half of the 278 smelters and refiners of conflict minerals it identified were located in Asia, many processed tin, and most did not have a conflict minerals policy publicly available.
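As a companion to the flowchart summary of the final rule described earlier (figure 8), the three-step disclosure process can be read as a simple decision tree. The sketch below is an illustrative rendering only; the function and field names are hypothetical, and it omits the rule’s transition provisions and many qualifications.

    def required_filing(product):
        """Illustrative decision tree for the SEC conflict minerals rule.

        `product` is a hypothetical dict, e.g.:
        {"contains_3tg": True, "necessary_to_product": True,
         "origin": "covered"}   # "covered" = DRC or an adjoining country
        Returns a simplified filing outcome.
        """
        # Step 1: does a manufactured product contain tin, tantalum,
        # tungsten, or gold (3TG) at all?
        if not product["contains_3tg"]:
            return "no conflict minerals disclosure required"
        # Step 2: are the minerals necessary to the product, and did a
        # reasonable country-of-origin inquiry point to the DRC or an
        # adjoining country (or leave the origin undeterminable)?
        if not product["necessary_to_product"]:
            return "no conflict minerals disclosure required"
        if product["origin"] not in ("covered", "unknown"):
            return "file specialized disclosure describing the inquiry"
        # Step 3: origin is covered or undeterminable, so the company
        # must conduct due diligence and may need to file a Conflict
        # Minerals Report with its specialized disclosure.
        return "conduct due diligence; Conflict Minerals Report may be required"

    print(required_filing({"contains_3tg": True,
                           "necessary_to_product": True,
                           "origin": "covered"}))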
The U.S. government has invested more than $5 billion since 2009 in GPS and provides GPS service free of direct charge to users worldwide. As shown in figure 1, GPS consists of the space segment, the ground-control segment, and the user segment. The U.S. Air Force develops, maintains, and operates the space and ground-control segments. The space segment consists of a constellation of satellites transmitting radio signals to users. The Air Force manages the constellation to ensure the availability of at least 24 GPS satellites 95 percent of the time. The ground-control segment consists of a global network of ground facilities that track the GPS satellites, monitor their transmissions, perform analyses, and send commands and data to the constellation. The user segment consists of GPS receiver equipment, which receives the signals from the GPS satellites and uses the transmitted information to calculate the user’s three-dimensional position and time. GPS is used extensively and in various ways in many critical infrastructure sectors for PNT information. For example, among other uses, the communications sector uses the GPS timing function to synchronize call handoffs in wireless communications. The energy sector’s bulk power system uses GPS timing in a component that provides status measurements at frequent points in time. The financial services sector uses GPS timing to time stamp financial transactions, match trading orders, and synchronize financial computer systems. The transportation systems sector uses GPS for safe and efficient operations. For example, aircraft use GPS for en-route navigation and landings; the maritime industry uses GPS for navigation and as a safety and situational tool in high-traffic ports; commercial vehicles use GPS for positioning, navigation, and fleet management; and rail systems use GPS for asset management, tracking, and positive train control, which supports collision avoidance. A presidential directive assigns GPS governance roles, and other policies and directives that apply to critical infrastructure protection are also important for GPS governance. These policies and directives include: (1) National Security Presidential Directive 39, (2) Homeland Security Presidential Directive 7, (3) the National Infrastructure Protection Plan, and (4) Presidential Policy Directive 21. National Security Presidential Directive 39 (NSPD-39). NSPD-39 assigns governance roles to numerous federal agencies and other entities. In particular, within DOD, the Air Force is responsible for the overall development, acquisition, operation, security, and continued modernization of GPS. DOT serves as the lead civilian agency on GPS-related issues and has lead responsibility for developing requirements for civilian applications. DHS, through the U.S. Coast Guard’s Navigation Center, provides user support to the civilian, non-aviation GPS community. Additionally, NSPD-39 requires that DOT, in coordination with DHS, develop, acquire, operate, and maintain backup capabilities that can support critical civilian and commercial infrastructure during a GPS disruption. NSPD-39 also assigns DHS (in coordination with other agencies) the responsibility to identify, locate, and attribute any interference within the United States that adversely affects GPS use and to develop a central repository and database for reports of domestic and international interference to GPS civilian services.
NSPD-39 also directed the federal government to improve the performance of space-based PNT services, including by developing more robust resistance to interference for national security purposes, homeland security, and civilian, commercial, and scientific users worldwide. Furthermore, NSPD-39 assigns the Department of Commerce and the Federal Communications Commission (FCC) responsibility for mitigating electronic interference with U.S. space-based PNT services within the United States. NSPD-39 also established a National Executive Committee for Space-Based PNT (National Executive Committee), chaired jointly by DOD and DOT, to coordinate GPS-related matters across federal agencies. The National Coordination Office for Space-Based PNT (NCO) houses the permanent staff of the National Executive Committee and provides day-to-day support for the committee’s activities. Among other things, the National Executive Committee issued a 5-year plan for space-based PNT that recommends that DHS institute a risk management approach to assess threats, vulnerabilities, and potential consequences of interference to GPS signals and examine the best opportunities to mitigate those risks. See figure 2 for the national space-based PNT organization structure. Homeland Security Presidential Directive 7 (HSPD-7). Issued in 2003, HSPD-7 established a national policy for federal departments and agencies to identify, prioritize, and protect critical infrastructure and key resources. HSPD-7 designated DHS as the agency responsible for coordinating the nation’s efforts to protect critical infrastructure. DHS was directed to coordinate protection activities for each critical infrastructure sector through designated Sector-Specific Agencies (SSA). In accordance with applicable laws or regulations, DHS and the SSAs were directed to collaborate with appropriate private sector entities and continue to encourage the development of information sharing and analysis mechanisms to identify, prioritize, and coordinate the protection of critical infrastructure and key resources. National Infrastructure Protection Plan (NIPP). In 2006, DHS addressed the requirements of HSPD-7 by issuing the first NIPP, which DHS updated in 2009. The NIPP provides an overarching approach for integrating the nation’s many critical infrastructure protection initiatives. The cornerstone of the NIPP is its risk management framework, which defines roles and responsibilities for DHS, the SSAs, and other federal, state, regional, local, and private sector partners. Assessing risks is part of this framework, and the NIPP specifies core criteria for risk assessments. The NIPP specifically identifies GPS as a system that supports or enables critical functions in critical infrastructure sectors. Presidential Policy Directive 21 (PPD-21). Issued in February 2013, PPD-21 supersedes HSPD-7 and states that critical infrastructure must be secure and able to withstand and rapidly recover from all hazards. The directive refines and clarifies critical infrastructure-related functions, roles, and responsibilities across the federal government and aims to enhance overall coordination and collaboration. PPD-21 directs DHS to conduct comprehensive assessments of the vulnerabilities of the nation’s critical infrastructure in coordination with the SSAs and in collaboration with critical infrastructure owners and operators. Executive Order 13636 was also issued in February 2013 to improve critical infrastructure cybersecurity.
According to DHS, implementation of the executive order and PPD-21 includes updating the NIPP by October 2013. Disruption of the GPS signal can come from a variety of sources, including radio emissions in nearby bands, jamming, spoofing, and naturally occurring space weather. Spectrum encroachment from radio emissions in nearby bands can cause interference to the GPS signal when the stronger radio signals overpower the relatively weak GPS signals from space. Additionally, according to FCC, some GPS receivers are purposefully designed to receive as much energy as possible from GPS satellites, which makes the receivers vulnerable to interference from operations in nearby bands. With this type of interference, GPS devices pick up the stronger radio signals and become ineffective. Jamming devices are radio frequency transmitters that intentionally block, jam, or interfere with lawful communications, such as GPS signals. Spoofing involves the replacement of a true satellite signal with a manipulated signal; the user may not realize the GPS signal is incorrect and may continue to rely on it. Articles and lab experiments have illustrated the potential for harm in the bulk power system, maritime navigation, financial markets, and mobile communications, among other areas. Space weather can also cause interference to GPS signals. For example, during solar flare eruptions, the sun produces radio waves that can interfere with a broad frequency range, including the frequencies used by GPS. In September 2011, to fulfill the National Executive Committee’s request for a comprehensive assessment of civilian GPS risks, DHS issued the National Risk Estimate (NRE) to the NCO; DHS officials said the final NRE was published in November 2012 with minor revisions. According to DHS officials, the NRE is modeled after other risk estimates and efforts in the intelligence community. In developing the NRE, DHS conducted a scenario-based risk assessment for critical infrastructure using subject matter experts from inside and outside government. The NRE focuses on 4 of the 16 critical infrastructure sectors: communications, emergency services, energy, and transportation systems. According to DHS officials, they chose these 4 sectors because they use GPS to support or fulfill their core missions and because they provide an appropriate cross-section of risks and potential impacts that could apply broadly to other sectors. The NRE considers three types of GPS disruption scenarios: (1) naturally occurring disruptions, such as space weather events; (2) unintentional disruptions, such as radio frequency signals interfering with GPS signals; and (3) intentional disruptions, such as jamming or spoofing. DHS solicited information from federal and private sector stakeholders and held several workshops on various risk scenarios with the subject matter experts, including one on the overall likelihood of occurrence of the risk scenarios. DHS also held sector-specific workshops on the consequences of GPS disruptions and on alternative futures for each sector based on varying degrees of community attention to these security challenges. DHS used alternative futures to consider the risk outlook over the next 20 years. According to DHS, the process of developing the NRE helped clarify aspects of critical infrastructure dependence on GPS and vulnerability to interference or an outage that were previously uncertain.
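Of the disruption sources described above, spoofing is particularly insidious because the receiver continues to produce plausible-looking output. One generic class of countermeasure, sketched below purely as an illustration rather than as a technique drawn from the NRE or adopted by any sector, is a plausibility check: successive fixes are compared against physical limits on platform motion and against a trusted free-running local clock, and fixes implying impossible jumps are flagged. The speed limit, clock threshold, and flat-earth distance approximation are all assumed simplifications.

    import math

    MAX_SPEED_M_S = 30.0      # assumed platform limit (e.g., a ground vehicle)
    MAX_CLOCK_STEP_S = 1e-4   # assumed plausible GPS-vs-local clock divergence

    def distance_m(p1, p2):
        """Flat-earth approximation of the distance between two
        (latitude, longitude) pairs in degrees; adequate for short hops."""
        lat1, lon1 = p1
        lat2, lon2 = p2
        dlat = (lat2 - lat1) * 111_320.0
        dlon = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
        return math.hypot(dlat, dlon)

    def plausible(prev_fix, new_fix):
        """Flag a new fix whose implied motion or clock step is impossible.
        Each fix is (position, gps_time_s, local_time_s); values are assumed."""
        prev_pos, prev_gps, prev_local = prev_fix
        new_pos, new_gps, new_local = new_fix
        elapsed = new_local - prev_local          # trusted local elapsed time
        if elapsed <= 0:
            return False
        # Check 1: the implied speed must be physically possible.
        if distance_m(prev_pos, new_pos) / elapsed > MAX_SPEED_M_S:
            return False
        # Check 2: GPS time must track the free-running local oscillator.
        if abs((new_gps - prev_gps) - elapsed) > MAX_CLOCK_STEP_S:
            return False
        return True

    # A roughly 5 km position jump in one second is flagged as implausible.
    prev = ((38.90, -77.03), 1000.0, 1000.0)
    new = ((38.945, -77.03), 1001.0, 1001.0)
    print(plausible(prev, new))   # False

A check of this kind cannot authenticate the signal itself; it only catches spoofing crude enough to violate the assumed motion and clock bounds.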
DHS officials told us, for example, that the NRE helped them understand the significance of the widespread use of GPS timing in systems throughout the nation’s critical infrastructure. According to DHS, through the NRE workshops and the exchange of ideas, sector representatives also developed greater awareness of risks. Risk assessments, such as the NRE, involve complex analysis; conducting a risk assessment across multiple sectors of systems with many unknowns and little data is particularly challenging. The NIPP specifies core criteria for risk assessments and provides a framework for managing risk among the nation’s critical infrastructure sectors. Aspects of DHS’s NRE are consistent with the NIPP, such as the use of scenarios and subject matter experts and the consideration of both present and future levels of risk. However, the NRE lacks key characteristics of risk assessments as outlined in the NIPP, and the NRE has not been widely used to inform risk mitigation priorities. The lack of an overall DHS plan and time frame to collect relevant threat, vulnerability, and consequence data and to develop a risk assessment approach more consistent with the NIPP could continue to hinder the ability of federal and private leaders to manage the risks associated with GPS disruptions. The NIPP states that risk assessment is at the core of critical infrastructure protection and that it can be used to help address the associated challenges through its framework for assessing risk. The NIPP identifies the essential characteristics of a good risk assessment and calls for risk assessments to be (1) complete, (2) reproducible, (3) defensible, and (4) documented so that results can contribute to cross-sector risk comparisons for supporting investment, planning, and resource prioritization decisions. Our review of these NIPP characteristics with respect to the NRE follows. Complete. According to the NIPP, to be complete, the methodology should assess threat, vulnerability, and consequence for every defined risk scenario. We found the NRE examines these three key elements of a risk assessment but does not fully conform to the NIPP because, as described below, the NRE does not consider all relevant threats or assess the vulnerabilities of each sector reviewed, and its consequence assessment is incomplete because it fails to estimate potential losses. In addition, the NRE considers just four critical infrastructure sectors. DHS officials acknowledged that their assessment was in some respects limited because they chose not to include all sectors due to resource and time constraints. For example, DHS planning documents show that DHS had originally planned to include the banking and finance sector, but DHS officials told us that they dropped it when they could not identify the subject matter experts necessary to complete a risk analysis. The NIPP highlighted the importance of the banking and finance sector as a high-risk critical infrastructure sector, noted that nearly all sectors share relationships with banking and finance, and stated that banking and finance relies on GPS as its primary timing source. Reproducible. According to the NIPP, the methodology must produce comparable, repeatable results and minimize the use of subjective judgments, leaving policy and value judgments to be applied by decision makers. We found the NRE does not conform to the NIPP because it is based entirely on the subjective judgments of panelists and is not reproducible.
Three subject matter experts we interviewed told us they were skeptical about the quality of the panel deliberations and characterized the members’ judgments as “educated guesses.” Moreover, had different panelists been chosen, the results might have been different. Defensible. According to the NIPP, the methodology should be logical, make appropriate use of professional disciplines, and be free of significant errors and omissions. Uncertainty of estimates and level of confidence should be communicated. The NRE addresses some of these standards, including identifying various uncertainties related to its estimates. However, it is unclear whether DHS made appropriate use of professional disciplines. Given the lack of data, subject matter experts were called upon to inform DHS’s statistical modeling. DHS officials told us that they depended on the SSAs to suggest subject matter experts and used a consultant to identify subject matter experts beyond the SSAs’ suggestions. However, industry representatives we interviewed questioned whether the panels had sufficiently broad expertise to capture the full scope of GPS vulnerabilities within sectors. For example, energy sector industry representatives told us that the energy sector panel experts covered only certain aspects of the electricity industry, not the entire energy sector. DHS officials told us that at times the SSAs had difficulty suggesting subject matter experts. According to one official, it was difficult to find people within the various sectors who understood how GPS was embedded in their operations; he noted that it sometimes took 20 to 30 telephone calls in a given sector to locate an individual well versed in the subject. However, decisions on expert selection are not documented in the NRE, meaning others cannot reasonably review and reproduce the NRE’s efforts. In addition, we found the NRE’s calculations of risk are not sufficiently transparent to assess whether the risk estimates are defensible and free of significant error. For example, the NRE’s documentation is insufficiently transparent to support its determination that unintentional interference poses a high risk for all four selected sectors, given that likelihood is rated high but consequences are deemed fairly low for three of the four sectors. Further, in the energy sector, a sophisticated, coordinated, continuous pinpointed spoofing attack against multiple targets is rated as having greater consequences than the other scenarios, yet due to its low estimated likelihood, it is rated as having the lowest risk among the energy scenarios. Without adequate explanation or presentation of the underlying data, the NRE lacks the transparency needed to verify that the estimate is defensible and free of significant error. Similarly, scenarios with the greatest uncertainty are rated as having the highest risk without sufficient data for an independent reviewer to verify. We requested additional documentation of these estimates, but DHS did not provide it. Documented. According to the NIPP, the assumptions and methodology used to generate the risk assessment must be clearly documented. The NRE did include elements that were consistent with the NIPP, such as describing the NRE’s underlying analytic assumptions, its various workshops on likelihood and consequences, and its use of subject matter experts and a statistical simulation model to overcome limited data.
Nonetheless, we found that, overall, the NRE does not conform to this guideline because, as previously noted, it does not document how the subject matter experts, drawn from inside and outside government, were selected. Absent reliable data, the NRE depends on the reliability of the expert panels. This and other documentation issues, such as not fully reporting the underlying data supporting the risk calculations, also affect the NRE’s reproducibility and defensibility. Furthermore, the NIPP states that risk is a function of three components—threat, vulnerability, and consequence—and a risk assessment approach must assess each component for every defined risk scenario. We found factors in the NRE’s analysis that undermined the validity of each of these three components, as follows. Threat. According to the NIPP, risk assessments should estimate an intentional threat as the likelihood that an adversary would attempt a given attack method against a target; for other hazards, threat is generally estimated as the likelihood that a hazard will manifest itself. To complete the NRE, DHS issued data calls and held a workshop on the overall likelihood of GPS disruptions. Nonetheless, the NRE overall does not conform to the NIPP because it neither uses its threat assessment to inform its threat-likelihood rankings nor considers all relevant threats. In a separate classified annex, the NRE considers the threat likelihood of a range of GPS disruptions, which follows NIPP guidance to consider terrorist capability and intent. However, DHS officials told us that this threat information was not used for the NRE. DHS officials stated that the DHS Office of Intelligence and Analysis had not provided a draft of the threat annex in time for the May 6, 2011, scenario likelihood workshop, so the annex could not inform the ranking of the scenario likelihoods. The NIPP also requires an all-hazards approach for risk assessment. DHS officials told us that their selection of GPS disruption scenarios was based on discussions with subject matter experts. However, it is unclear how the threats for the risk scenarios were selected. For instance, while the NRE cites the threat of spectrum encroachment, which involves the potential for interference from new communication services near GPS frequencies, and considers alternative futures scenarios based in part on how potential spectrum encroachment is managed, it is not clear why the risk scenarios did not include the risk of interference to GPS receivers from operations in other frequency bands. DHS officials told us that while the spectrum encroachment issue was relevant and a topic of discussion with subject matter experts during the NRE’s development, it was outside the scope of what the NRE sought to assess because it stems from policy making rather than from potential adversaries. Vulnerability. The vulnerability assessment in the NRE does not meet the criteria in the NIPP because it does not identify vulnerabilities specific to each sector or the GPS dependencies of the sectors’ key systems. Instead, the NRE assessed general vulnerabilities that did not consider specific sectors or the key systems used by those sectors. Without such a sector-specific assessment, the NRE does not adequately identify critical infrastructure systems’ vulnerabilities and critical dependencies or develop estimates of the likelihood that an attack or hazard could cause harm.
The NRE states that DHS was constrained in conducting unique vulnerability assessments for each of the four sectors because of limited data and key uncertainties. The NRE acknowledges that this constraint is a limitation of the report and that a likelihood workshop was used to estimate a combined threat and vulnerability assessment. Consequence. The NIPP states that, at a minimum, consequences should focus on the two most fundamental components—human consequences and the most relevant direct economic consequences. For the NRE, DHS held sector-specific workshops on the consequences of GPS disruptions and projected a risk outlook over the next 20 years. However, the NRE assesses the potential impacts on sector functions without assessing how disruptions in those functions could affect the economy or safety of life. Without more specific analysis of the consequences, the overall risks from GPS disruptions cannot be calculated or compared across all sectors. DHS officials acknowledged that this was an area for improvement. The NRE also discusses sector interdependencies at a high level, but DHS did not survey the potential economic or safety-of-life consequences of these interdependencies. The NIPP and other DHS guidance state that risk assessments are to be used to inform planning and priorities; however, we found the NRE has not been widely used. In particular, in addition to the NIPP guidance, the DHS strategic plan and risk management framework state that risk assessments should be used to inform and prioritize risk mitigation. The NRE states that it is to be used to inform executive-level decisions. NCO officials told us that the NRE was intended to help inform senior government officials about the risks that reliance on the GPS signal poses to the nation’s critical infrastructure sectors. NCO officials stated that they and the National Executive Committee, which requested the study, were satisfied with the NRE. The NRE has also been distributed to other federal agencies. Officials from one DHS component, the Office of Cybersecurity and Communications, told us that the NRE had been helpful in understanding some of the threats, especially to timing, but officials from another component, the Transportation Security Administration (TSA), told us that they are not using the NRE. For example, TSA officials said they found the NRE to be very general and did not see its relevance to TSA. Officials from two other agencies, the Departments of Defense and Energy, told us that the NRE was not helpful. Subject matter experts we contacted, some of whom participated in the NRE, expressed concerns about the validity of the NRE, and one noted that industry does not have access to the final NRE because it is designated “For Official Use Only” (FOUO). DHS officials told us that in 2013, DHS began using the NRE to inform the planning and prioritization of initial steps to raise awareness of GPS disruptions. For example, among other things, they uploaded the NRE to a homeland security information-sharing portal to share it with sector partners. They also told us that they have recently begun using the NRE for outreach to raise sector awareness, but as to specific guidance, they could provide only an example of brief correspondence encouraging sectors to identify their specific sources of PNT data.
Two years after the NRE was issued, these preliminary steps do not rise to the level of a plan and time frame for closing the considerable data gaps across the 16 critical infrastructure sectors. In response to the National Executive Committee’s request for a risk and mitigation assessment, DHS commissioned a separate study that was performed concurrently with the NRE. According to the NIPP, mitigation approaches should use the risk assessment’s results to establish priorities and determine those actions that could provide the greatest risk mitigation benefits and inform planning and resource decisions. The mitigation report, however, does not use the NRE’s risk assessment results and instead focuses on generic mitigation issues and technologies. As a result, it is unclear whether the pursuit of the mitigation report’s recommendations would address the highest risks of GPS disruption to critical infrastructure. DHS officials acknowledged the data and methodological limitations of the NRE but stated that they have no plans to conduct another NRE on GPS because of resource constraints. The lack of an overall DHS plan and time frame to collect relevant data, periodically review the readiness of data to conduct a more robust risk assessment, and develop a risk assessment approach more consistent with the NIPP could continue to hinder the ability of federal and private leaders to manage the risks associated with GPS disruptions. Based on our review, opportunities exist for DHS to develop an enhanced risk assessment. For example, recent assessments performed by the private sector continue to report that the risks associated with GPS disruptions are a growing concern and carry potential economic consequences. By considering this additional threat, vulnerability, and consequence information, DHS would be better positioned to employ a GPS risk assessment approach consistent with the NIPP. Furthermore, as previously mentioned, the National Executive Committee’s 5-year plan for 2009-2013 also recommends that DHS institute a risk management approach to assessing threats, vulnerabilities, and potential consequences of interference to GPS signals and examine the best opportunities to mitigate those risks. Because of the shortcomings we found in the NRE, we do not believe that DHS has instituted an adequate risk management approach to address the risks associated with GPS interference. According to a presidential directive, DOT, in coordination with DHS, is required to develop, acquire, operate, and maintain backup capabilities that can support critical civilian and commercial infrastructure in the event of a GPS disruption. NSPD-39 also assigns DHS (in coordination with other agencies) the responsibility to identify, locate, and attribute any interference that adversely affects GPS use and to develop a central repository and database for reports of domestic and international interference. DOT and DHS have initiated a variety of ongoing mitigation efforts that contribute to fulfilling their presidential directive, such as (1) developing plans and strategies for the nation’s PNT architecture, (2) researching GPS alternatives for aviation, (3) developing plans and strategies for GPS interference detection, (4) researching possibilities for a nationwide timing backup, and (5) conducting other studies. Developing plans and strategies for the nation’s PNT architecture.
As a precursor to providing GPS backup capabilities per NSPD-39, DOT, in conjunction with DOD and with participation from 31 government agencies, including DHS, developed a national PNT architecture report and implementation plan to help guide the federal government’s PNT investment decisions. Issued in 2008, the National PNT Architecture report documented the nation’s current mix of “ad hoc” PNT sources and identified a number of capability gaps. To address these gaps, the report recommended that the nation transition to a “greater common denominator” strategy, where the PNT needs of many users are efficiently met through commonly available solutions, rather than numerous, individual systems. Additionally, the report acknowledged that GPS is the cornerstone of the nation’s PNT capabilities and made a number of recommendations that would ensure continued availability of PNT service during GPS disruptions through, for example, the ability to provide PNT from alternative sources when a primary source is not available. The National PNT Architecture implementation plan, released in 2010, identified the tasks federal agencies would need to take to implement the report’s recommendations. Researching GPS alternatives for aviation. Through the Federal Aviation Administration’s (FAA) Alternative PNT initiative, DOT is researching potential GPS backup solutions for the Next Generation Air Transportation System (NextGen). To meet NextGen’s navigation and performance requirements, GPS will be the primary navigation aid for aircraft. According to FAA officials, the legacy navigation systems currently used by aircraft during GPS disruptions are not capable of supporting new NextGen capabilities. As a result, FAA is conducting feasibility studies and analysis on three potential systems that can be used as a GPS backup for NextGen and, according to FAA officials, expects to make a decision by 2016. Developing plans and strategies for GPS interference detection. In 2007, DHS began efforts on GPS interference detection and mitigation (IDM) to improve the federal government’s ability to detect, locate, and mitigate sources of GPS interference. Among DHS’s planned activities were developing a central repository for GPS interference reports, and identifying GPS backup-system requirements and determining suitability of backup capabilities. Researching possibilities for a nationwide timing backup. According to DHS officials, in 2012 the Coast Guard entered into a research agreement with a technology company to test alternative, non-space-based sources of precise time. Additionally, according to DHS officials, in late 2012 the National Institute of Standards and Technology began researching the possibility of using the nation’s fiber networks as an alternative, non-space-based source of precise time. Both research efforts are ongoing. Conducting other studies. DHS has conducted or commissioned other studies related to GPS mitigation. For example, in 2009, DHS surveyed federal agencies to better understand their GPS capabilities, requirements, and backup systems. However, not all SSAs responded to DHS’s requests for information. As previously mentioned, DHS also commissioned a study of GPS risk mitigation techniques, which was conducted concurrently with the NRE and issued in 2011. 
Among other things, the study described actions that GPS users can take to improve the resiliency of their GPS receivers against jamming and spoofing and recommended that federal regulators of critical infrastructure ensure that the infrastructure they regulate possesses sufficient resiliency to operate without GPS timing. According to DHS officials, DHS continues to examine the study’s findings and recommendations, although specific actions remain unbudgeted. In commenting on a draft of this report, DHS noted that in May 2013 it also awarded funding to develop, among other things, technologies to detect and localize sources of GPS disruptions, and that in July 2013 it commissioned a study to assess, among other topics, potential sector-specific and cross-sector threat mitigation technologies for the communications sector and the electricity subsector of the energy sector. Although DOT and DHS have taken the above initiatives, they have made limited progress implementing their plans to develop, acquire, operate, and maintain backup capabilities and, overall, the requirements of NSPD-39 remain unfulfilled. For example, with respect to DOT efforts, little progress has been made on the tasks outlined in the National PNT Architecture implementation plan since its issuance 3 years ago. DOT officials cited a variety of reasons why additional progress has not been made, including resource constraints, uncertainty, and competing priorities. In particular, DOT assigned lead responsibility for PNT to the Research and Innovative Technology Administration (RITA), yet RITA’s Office of PNT and Spectrum Management has three full-time staff members, one of whom works on the National PNT Architecture implementation plan in addition to other responsibilities. One senior DOT official involved in GPS management also stated that, organizationally, another key issue was uncertainty surrounding which federal agencies would take responsibility for ensuring the plan was implemented and for funding the various tasks and programs. According to this official, the implementation plan did not get optimal support from federal agencies that were assigned tasks because these agencies did not have resources to devote to completing those tasks. In addition, DOT officials said little progress was made on the implementation plan because, immediately after its issuance in 2010, DOT staff with GPS expertise shifted their focus to proceedings surrounding a wireless broadband network proposal by a company called LightSquared—a proposal that government officials, industry representatives, and GPS experts demonstrated could cause significant GPS interference. However, DOT officials stated that information supporting the implementation plan has been incorporated into the most recent Federal Radionavigation Plan. Similarly, DHS has completed few IDM activities, though the agency has taken some steps. For example, DHS established an incident portal to serve as a central repository for all agencies reporting incidents of GPS interference and developed draft interagency procedures and a common format for reporting incidents. The incident portal is hosted by FAA, but due to FAA security policy, other agencies are not able to access the portal. Other activities remain incomplete, including those related to identifying GPS backup-system requirements and determining the suitability of backup capabilities. DHS officials cited a variety of reasons why they have not made additional progress, such as insufficient staffing and budget constraints.
With respect to insufficient staffing, DHS’s PNT Program Management Office, which leads the agency’s IDM efforts, has three full-time staff members, one of whom is currently working in another component of DHS. With respect to budget constraints, DHS officials in the PNT Program Management Office stated that it is difficult to obtain financial resources in the current constrained budget environment. While DHS is in the process of formally implementing and standardizing procedures for information sharing among agency PNT operations centers when GPS disruptions occur, it does not have plans intended to address some other IDM activities, such as those related to the development of GPS backup requirements and the analysis of alternatives for backup capabilities. Additionally, stakeholders expressed concern that DHS’s IDM efforts are separated from other critical infrastructure protection efforts within DHS, but DHS has indicated that a new interagency task force will increase coordination between these efforts. Specifically, DHS’s National Protection and Programs Directorate (NPPD) leads and manages efforts to protect the nation’s 16 critical infrastructure sectors, but the PNT Program Management Office, within the Office of the Chief Information Officer, leads DHS’s IDM efforts, as shown in figure 3. Members of the Advisory Board and the GPS experts from academic and other research institutions we spoke with expressed concern that this organizational structure means that GPS management does not receive the same level of attention and resources as the agency’s other efforts to protect key national assets. DHS previously acknowledged that the agency’s GPS efforts were event-driven, that resources were provided on an ad hoc basis, and that NPPD was uniquely structured to fulfill many of NSPD-39’s objectives, given its role of developing risk-mitigation strategies for critical infrastructure protection efforts. However, regarding this organization, DHS officials said that GPS expertise has been within the Office of the Chief Information Officer since DHS’s creation and that the positions were originally hired to fulfill other DHS missions. As PNT issues became more prevalent, these positions evolved into the PNT Program Management Office. The officials noted that through a new interagency task force formed in April 2013, NPPD will have increased involvement in the agency’s IDM efforts. Figure 4 provides a timeline of DOT’s and DHS’s efforts to provide GPS backup capabilities since the issuance of NSPD-39 in 2004. In addition to the challenges described above, DOT’s and DHS’s ability to provide for backup capabilities as specified in the presidential directive has been hampered by a lack of effective collaboration. In prior work, we have identified key elements of effective collaboration that can help enhance and sustain collaboration among federal agencies, thereby maximizing performance and results. Specifically, we have previously found that key elements of effective collaboration include (1) clearly defining roles, responsibilities, and authorities; (2) defining outcomes and monitoring progress toward those outcomes; and (3) documenting in written agreements how agencies will collaborate. DOT and DHS have not followed these practices; for example: Roles and responsibilities. DOT and DHS have not clearly defined each agency’s roles, responsibilities, and authorities for satisfying the presidential directive to provide GPS backup capabilities.
Defining roles and responsibilities ensures that agencies have clearly articulated and agreed on which entity will do what and what is expected of each party. Our discussions with DOT and DHS officials indicated considerable confusion between the agencies about their roles, responsibilities, and authorities, despite the guidance in NSPD-39. For example, DOT officials told us that they handle backup capabilities for aviation, but they depend on DHS and industry to provide backup capabilities for the other critical infrastructure sectors. DOT officials questioned why DOT would provide backup capabilities for non-transportation sectors and whether doing so would make sense. The DOT officials highlighted that sectors look to DHS for cross-sector capabilities to protect key national assets, such as GPS, and that DHS is better positioned to lead this effort given its mission and experience with managing and mitigating risks to critical infrastructure sectors. However, DHS officials we contacted told us that NSPD-39 places lead responsibility with DOT, not DHS. They stated that DHS has no legal basis or other authority to require that GPS users take measures to mitigate GPS disruptions by having backup capabilities in place. DHS officials also said that it may be industries’ and individual sectors’ responsibility to ensure their systems have GPS backup capabilities, in coordination with their SSA. A DOD official and the GPS experts from academic and other research institutions we contacted also noted that it is not clear what entity or agency oversees GPS risk management for the different sectors and whether DHS has authority to require sectors to demonstrate that they have backup capabilities. Further, stakeholders highlighted that it is unclear how the NSPD-39 backup-capabilities requirement fits within the NIPP risk management framework DHS uses for critical infrastructure protection. Specifically, DOT and DHS officials noted that NSPD-39 predates the issuance of the first NIPP in 2006, which, as previously described, established the critical infrastructure protection risk management framework. As such, DOT and DHS officials, a DOD official, members of the Advisory Board, and the GPS experts from academic and other research institutions we contacted said that the NSPD-39 backup-capabilities requirement may be outdated and could require updating to better reflect current risk management guidance; DOT officials added that such guidance would include operational mitigations in addition to backup systems. For example, DHS officials noted that the NIPP risk management framework indicates that SSAs are responsible for working with DHS to coordinate infrastructure protection for their sector, including backup capabilities. One DHS official said that his goal would be to have each critical infrastructure sector’s Sector-Specific Plan address GPS disruptions. Outcomes and monitoring progress. DOT and DHS have not established clear, agreed-upon outcomes that clarify what would satisfy the NSPD-39 backup-capabilities requirement, and neither agency has been consistently monitoring its progress. Establishing clear outcomes for efforts that require collaboration ensures that agencies have agreed on how they will satisfy mutual responsibilities and what specifically they are working toward. DOT’s and DHS’s confusion about roles, described above, indicates that the agencies have not done so.
Additional statements made by the agencies also indicate that there may still be uncertainty about the desired outcome. For example, while DHS officials said that it might be each individual sector’s responsibility to provide its own GPS backup solutions, DOT officials stated that individual solutions for every sector would be redundant and inefficient and that DOT does not desire a sector-based architecture for GPS backup capabilities. Additionally, DHS officials told us that a single, domestic backup to GPS is not needed, and DOT officials told us that a single backup solution fulfilling all users’ needs would not be practical. Nevertheless, DOT officials stated that the Coast Guard’s decommissioning of LORAN-C was a loss for the robustness of GPS backup capabilities, especially given that both DOT and DHS had supported the upgrading of LORAN-C to eLORAN as a national GPS backup. Written agreements regarding collaboration. DOT and DHS have not documented their agreements regarding how they will collaborate to satisfy their NSPD-39 backup-capabilities requirement. In prior work, we have found that two agencies’ articulating roles and responsibilities and a common outcome in a written document is a powerful collaboration tool. Accordingly, we have frequently recommended that agencies formalize collaboration in written agreements, such as a memorandum of understanding or agreement. While the agencies have individual mitigation efforts that contribute to fulfilling the NSPD-39 backup-capabilities requirement, as described above, they do not have a written agreement that considers all of these efforts and provides a unified, holistic strategy for how the agencies are addressing their shared responsibility. According to DOT and DHS officials, the agencies are in the process of finalizing a written agreement on interagency procedures for information sharing among agency PNT operations centers when GPS disruptions occur (to which DOD will also be a signatory) but are not developing any type of written agreement memorializing how they will collaborate to satisfy the NSPD-39 backup-capabilities requirement. Without clearly defining both roles and desired outcomes for efforts that require collaboration, DOT and DHS cannot ensure that they will satisfy mutual responsibilities. DOT stated that the rationale behind developing a national PNT architecture was the absence of coordinated interagency efforts on PNT, which could lead to uncoordinated research efforts, lack of clear developmental paths, potentially wasteful procurements, and inefficient deployment of resources. Additionally, DHS has reported that the well-established presence of effective backup capabilities could discourage threats to GPS in the first place. In light of the issuance of PPD-21 in February 2013, DOT, DHS, DOD, and the National Aeronautics and Space Administration formed a Critical Infrastructure Security and Resiliency scoping group to address the needed resiliency of critical infrastructure relying on GPS, and subsequently, the National Space-Based PNT Executive Steering Group established an Interagency IDM/Alternative PNT task force in April 2013. According to DHS officials, the task force plans to review and update planned IDM activities, and as previously noted, through the task force, NPPD will have increased involvement in the agency’s IDM efforts. Such activities could provide an opportunity for DOT and DHS to address their challenges and uncertainties and document their agreements.
However, as of July 2013, there was still confusion between the agencies on these future activities. For example, DOT officials stated that, according to their current understanding based on guidance from the NCO, the task force would mostly monitor activities, while DHS highlighted a broader scope of activity for the task force, including elevating awareness of critical sectors' dependencies on GPS. Agency officials and industry representatives from the four critical infrastructure sectors we contacted said their sectors would generally be able to withstand short-term GPS disruptions and provided examples of strategies to mitigate GPS disruptions for aspects of sector operations, as follows. Communications. The communications sector, which uses GPS to synchronize timing of communications and for location-based services, employs a range of strategies to mitigate GPS disruptions. For example, at large critical communication nodes (e.g., mobile wireless and wireline-switching centers, satellite control centers), atomic clocks are often deployed to back up GPS. However, some of the most precise timing mechanisms may not be deployed widely across communications networks, and the type and level of redundancies vary across the network and across industry providers. Communications sector industry representatives believe that GPS disruptions lasting over 24 hours would likely cause interruption of mobile communication services because call handoffs between cell sites would begin to fail; the sketch following this discussion illustrates the holdover arithmetic underlying such estimates. Energy. For one aspect of the energy sector—the bulk power system—DOE officials and energy sector industry representatives told us that the sector uses GPS to get frequent time measurements on the state of the system, but that the industry does not rely on GPS to operate the system at this time. The representatives noted that the bulk power system has built-in redundancies and, in the event of a GPS disruption, could rely on other systems that provide less frequent time measurements. Financial services. According to Department of the Treasury officials, the financial services sector primarily relies on atomic clocks to time-stamp financial transactions; GPS is used as a secondary timing source in the communications protocols of these transactions. In the event of a GPS disruption, Treasury officials noted that the financial services sector has a risk management process in place, which includes hardware, software, and operational procedures to detect and mitigate any disruptions in communications. Transportation systems. Within the transportation systems sector, for aviation, FAA officials said that multiple legacy navigation systems that are not reliant on GPS signals can enable aircraft to fly and land in the event of a GPS disruption. DHS officials noted that alternate means of navigation, such as radar and visual references to landmarks, are available for maritime users. An industry representative and a TSA official from the rail and commercial vehicle segments of the transportation systems sector, respectively, said that they do not currently need extensive GPS mitigation efforts since other means, such as maps and cell phone communication, can be used for navigation. According to critical infrastructure sector agency officials and industry representatives we contacted, three of the four sectors have initiated efforts to study GPS vulnerability and potential mitigations, but have not yet implemented sector-wide mitigation efforts for various reasons.
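The communications sector's 24-hour figure reflects a simple holdover calculation: when the GPS signal is lost, a local oscillator keeps network time, and its frequency error accumulates until the time offset exceeds the network's alignment budget. The following minimal sketch, in Python, works through that arithmetic; the oscillator stabilities and the 10-microsecond handoff tolerance are illustrative assumptions, not figures drawn from the sector representatives we contacted.

```python
# Illustrative GPS timing-holdover arithmetic. All numbers are
# assumptions chosen for illustration, not data from this report.

HANDOFF_TOLERANCE_S = 10e-6  # assumed 10-microsecond alignment budget

# Assumed free-running frequency accuracy (dimensionless) after the
# GPS discipline is lost, for three common oscillator classes.
oscillators = {
    "Quartz (OCXO)": 1e-9,
    "Rubidium":      1e-11,
    "Cesium":        1e-13,
}

for name, frequency_error in oscillators.items():
    # With a constant frequency error f, accumulated time error grows
    # linearly as t * f, so holdover lasts until t = tolerance / f.
    # (This ignores aging and temperature effects, which shorten it.)
    holdover_hours = HANDOFF_TOLERANCE_S / frequency_error / 3600
    print(f"{name:14s} exceeds the budget after ~{holdover_hours:,.1f} "
          f"hours without GPS")
```

Under these assumptions, a quartz oscillator at a small site exceeds the budget within a few hours, while the atomic clocks deployed at large nodes hold for days or longer, which is consistent with the representatives' expectation that only disruptions lasting beyond roughly a day would begin to interrupt service.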
Some stakeholders told us they focus mitigation efforts on higher-priority threats. For example, energy sector industry representatives and financial services sector agency officials said that they are less concerned about GPS disruptions than other threats, like cybersecurity. The 2012 and 2013 annual summit agendas of a financial industry group dedicated to industry collaboration on critical security threats addressed cybersecurity threats and excluded threats from GPS disruptions. Sectors may be reluctant to bear significant costs for mitigation efforts because GPS disruptions are often perceived as low risk since the number of reported incidents is relatively low. For example, in 2012, only 44 incidents were reported to the Coast Guard, which fields reports of GPS disruptions. However, the extent to which incidents have been properly reported is unclear. According to Coast Guard officials, GPS users are frequently unaware that the Coast Guard serves as the civilian focal point for reporting GPS disruptions, and oftentimes users do not report incidents because they assume a software glitch is the source of the problem. Furthermore, incidents caused by jammers (i.e., personal privacy devices) are often perceived as low-impact events, generally due to their localized impact and popular use to avoid tracking of individuals. High-impact events, such as extreme solar storms, spoofing, and high-power jammers—which can impact a larger geographic area, or can have larger consequences in terms of safety, loss of life, and economic loss—are perceived as low probability. Although the sectors have taken steps to prepare for GPS disruptions, DHS has not measured the effectiveness of sectors' mitigation efforts to ensure sector resiliency against GPS disruptions. DHS officials told us that during 2013, DHS has been focused on increasing awareness of GPS embeddedness and potential disruptions within three sectors—the communications, information technology, and transportation systems sectors. According to DHS and NCO officials, no plan or timeline has been developed or approved for identifying and assessing measures of effectiveness. DHS officials indicated that it is not necessary to measure the effectiveness of individual programs and that the absence of resilience measures for an individual program does not mean that DHS is not measuring overall resilience at the sector level. Furthermore, DHS officials stated that the absence of a single measure at the program level may be for several reasons, including that the cost of data collection and analysis would be too great. However, the NIPP cites the importance of measuring program effectiveness and the use of performance metrics to track the effectiveness of protection programs. Specifically, the NIPP requires DHS to work with SSAs and sector partners to measure the effectiveness of critical infrastructure protection programs by establishing performance metrics that enable DHS to objectively assess improvements, track progress, establish accountability, document actual performance, provide feedback mechanisms, and inform decision-making. More recently, PPD-21 emphasizes efforts to strengthen and maintain resilient critical infrastructure and requires DHS to use a metrics and analysis process to measure the nation's ability to manage and reduce risks to critical infrastructure. Additionally, PPD-21 emphasizes addressing resiliency in an environment of critical infrastructure's interconnectedness and interdependencies.
As previously discussed, GPS supports interconnected systems both within and across sectors, and GPS disruptions represent potential risks to critical infrastructure. With regard to measuring effectiveness, we have previously recommended that DHS develop performance measures to assess the extent to which sector partners are taking actions to resolve resiliency gaps identified during various vulnerability assessments. We have also previously recommended that outcome-based measures would assist DHS in assessing the effectiveness of sector protection efforts. GPS experts we contacted from academic and other research institutions noted that focusing on measuring outcomes—and not just on testing the GPS devices—in critical systems and sectors is important because several factors can affect mitigation effectiveness in the event of a GPS disruption: the GPS devices, the systems and equipment dependent on those devices, and the personnel and operational procedures that rely on GPS. While DHS requested SSA input for the NRE and stated that it held tabletop exercises with other government agencies to test agency coordination processes in the event of a GPS disruption incident, DHS has not measured the effectiveness of mitigation efforts in terms of sector resiliency to GPS disruptions in the sectors we reviewed. Furthermore, the four Sector-Specific Plans submitted to DHS that we reviewed did not include any reference to GPS mitigation efforts. Without such measurements, or a plan to assess the impact of GPS disruptions on critical infrastructure sectors, DHS cannot provide assurance that the sectors would be able to maintain operations in the event of a GPS disruption without significant economic loss or loss of life. Measuring the effectiveness of mitigation efforts on potential GPS disruptions as part of measuring sector resiliency is important because agency officials, industry representatives, and GPS experts have raised a number of concerns about the sectors' ability to sustain operations during GPS disruptions. For example, they raised the following concerns: Low awareness. Sectors are frequently unaware of the extent to which GPS is embedded in their systems and tend to understate it, which affects their ability to plan appropriate mitigations. For example, DHS officials and the GPS experts from academic and other research institutions we contacted cited a GPS incident in San Diego that impaired normal operations in the communications, maritime, and aviation sectors, even though it was a short-term disruption, which, according to communications sector industry representatives, should not have impaired operations because of the sector's backup and mitigation measures. Separately, in the maritime industry, Coast Guard officials told us that multiple shipboard systems are dependent on GPS and mariners may not be aware of the dependencies. In a United Kingdom maritime GPS disruption test, numerous alarms sounded on the ship's bridge due to the failure of different systems, and the test raised concerns that GPS signal loss could lead to hazardous conditions for mariners. Sustainability. The degree to which backup systems can sustain current levels of operations and users are able to operate legacy backup systems is unknown. Coast Guard officials indicated that mariners who are accustomed to relying on GPS may no longer have the skills or staff to adequately use legacy backup systems, and that the legacy systems may be less efficient, causing economic losses.
For example, according to Coast Guard officials, if GPS were disrupted for a day or more in a major port, it could result in millions of dollars of losses due to inefficiencies in managing ship and cargo traffic. Increasing dependency. Use of GPS is growing, and it is unclear what mitigations would be effective with increased GPS use. For example, in the energy sector, as GPS is increasingly used to monitor the bulk power system, reliance on GPS in the long term may become more critical in grid operations. According to a DOE official, DOE validated the lab tests of an academic expert who demonstrated the vulnerability of GPS-based bulk power system monitoring equipment to a spoofing attack and has efforts under way to determine the long-term implications of increasing GPS dependency. The aviation segment of the transportation systems sector will also be more dependent on GPS. As previously described, GPS will be the primary navigation aid under NextGen, and FAA plans to eventually decommission many of the current legacy navigation systems and replace them with potentially new, alternative PNT systems currently being researched. In the rail segment of the transportation systems sector, the use of GPS to provide safety benefits through positive train control is increasing, and DOT has indicated that degradation or loss of GPS could, in the future, result in rail network congestion or gridlock. Sector interdependencies. Interdependencies among sectors may not be well understood. For example, FAA reported that while its air traffic control systems have backup systems for GPS, its communication systems rely on the communications sector, which might experience some problems in the event of GPS disruptions. Therefore, one sector's lack of appropriate mitigation may affect other sectors. Likelihood of disruptions. According to the stakeholders, the likelihood of GPS disruptions could be growing and may be underestimated by sectors and DHS. DHS officials and the GPS experts from academic and other research institutions we contacted noted that an Internet search for "GPS jammer" yielded approximately 500,000 results. They noted that over time, as the technology advances, these jammers are likely to become smaller, more powerful, and less expensive, increasing the likelihood of disruptions. Additionally, in the last few years, a growing number of papers and industry presentations that discuss or demonstrate the ability to spoof GPS receivers in multiple sectors have become available on the Internet, which agency officials said could increase the likelihood of spoofing. Furthermore, GPS experts indicated that the unintended interference produced by the introduction of new communication services near the GPS frequencies has the potential to greatly disrupt reception of the relatively weak GPS signal, and they noted the difficulty of estimating these disruptions in advance and isolating them. GPS is essential to U.S. national security and is a key component in economic growth, safety, and national critical infrastructure sectors. As GPS becomes increasingly integrated into sectors' operations, it has become an invisible utility that users do not realize underpins their applications, leaving sectors potentially vulnerable to GPS disruptions. We recognize that risk assessments, such as the NRE, involve complex analysis and that conducting a risk assessment across multiple sectors of systems with many unknowns and little data is particularly challenging.
Although DHS attempted to overcome these challenges, the NRE also lacks some of the key characteristics of risk assessments outlined in the NIPP and, as a result, is incomplete. As such, the NRE is limited in its usefulness to inform mitigation planning, priorities, and resource allocation. Furthermore, the lack of an overall DHS plan designed to address the NRE's shortcomings, such as lack of data, and enhance its risk assessment approach, such as by using available threat assessments, could hinder future public and private risk management of GPS. A plan and a time frame for developing a more complete data-driven risk assessment that also addresses the deficiencies in the NRE's assessment methodology would help DHS capitalize on progress it has made in conducting risk assessments and contribute to the more effective management of the increasing risks to the nation's critical infrastructure. Such steps also would provide DHS planners and other decision makers with insights into DHS's overall progress and a basis for determining what, if any, additional actions need to be taken. Federal agencies and experts have reported that the inability to mitigate GPS disruptions could result in billions of dollars of economic loss. Critical infrastructure sectors have employed various strategies to mitigate GPS disruptions, but both the NRE and stakeholders we interviewed raised concerns that since sector risks are underestimated, growing, and interdependent, it is unclear whether such efforts are sufficient. Federal risk management policy requires DHS to work with SSAs and sector partners to measure the nation's ability to manage and reduce risks to critical infrastructure by using a metrics and analysis process. However, we found that DHS has not measured the effectiveness of sectors' efforts to mitigate GPS disruptions. As a result, DHS cannot ensure that critical infrastructure sectors could sustain essential operations during GPS disruptions. The lack of agreed-upon metrics to measure the actual effectiveness of sector mitigation efforts hinders DHS's ability to objectively assess improvements, track progress, establish accountability, provide feedback mechanisms, or inform decision makers about the appropriateness of—or need for additional—mitigation activities. We previously recommended that DHS develop performance measures to assess the extent to which sector partners are taking actions to resolve resiliency gaps identified during the various vulnerability assessments. Measuring the effectiveness of mitigation efforts on potential GPS disruptions as part of measuring sector resiliency is important because agency officials, industry representatives, and GPS experts have raised a number of concerns about the sectors' ability to sustain operations during GPS disruptions. Although the President directed DOT, in coordination with DHS, to develop backup capabilities to mitigate GPS disruptions, the agencies have made limited progress amid continued uncertainty. Both agencies cited resource constraints—such as budget and staffing—as a reason why they have not made additional progress. Nevertheless, DOT and DHS have not defined their respective roles, responsibilities, and authorities or what agreed-upon outcome would satisfy the presidential directive. As a result, DOT and DHS cannot ensure that they will satisfy mutual responsibilities.
Clearly delineating roles, responsibilities, and agreed-upon outcomes, and documenting these agreements, would allow the agencies to address many of the uncertainties regarding fulfillment of their NSPD-39 backup-capabilities requirement, such as which agency is responsible for various key tasks, what role SSAs and industry should have, how NSPD-39 fits into the NIPP risk management framework, and whether NSPD-39 is outdated, among other questions. To ensure that the increasing risks of GPS disruptions to the nation's critical infrastructure are effectively managed, we recommend that the Secretary of Homeland Security take the following two actions: Increase the reliability and usefulness of the GPS risk assessment by developing a plan and time frame to collect relevant threat, vulnerability, and consequence data for the various critical infrastructure sectors, and periodically review the readiness of data to conduct a more data-driven risk assessment while ensuring that DHS's assessment approach is more consistent with the NIPP. As part of current critical infrastructure protection planning with SSAs and sector partners, develop and issue a plan and metrics to measure the effectiveness of GPS risk mitigation efforts on critical infrastructure resiliency. To improve collaboration and address uncertainties in fulfilling the NSPD-39 backup-capabilities requirement, we recommend that the Secretaries of Transportation and Homeland Security take the following action: Establish a formal, written agreement that details how the agencies plan to address their shared responsibility. This agreement should address uncertainties, including clarifying and defining DOT's and DHS's respective roles, responsibilities, and authorities; establishing clear, agreed-upon outcomes; establishing how the agencies will monitor and report on progress toward those outcomes; and setting forth the agencies' plans for examining relevant issues, such as the roles of SSAs and industry, how NSPD-39 fits into the NIPP risk management framework, whether an update to NSPD-39 is needed, or other issues as deemed necessary by the agencies. We provided a draft of this report to the Departments of Homeland Security, Transportation, and Commerce for their review and comment. DHS provided written comments (reprinted in app. II) and technical comments, which we incorporated as appropriate. DOT provided informal comments summarized below, and technical comments, which we incorporated as appropriate. Commerce had no comments. In written comments, DHS concurred with two of our recommendations and noted activities that it will undertake to address those recommendations. In particular, DHS concurred with our recommendation to develop and issue a plan and metrics to measure the effectiveness of GPS risk mitigation efforts, and our recommendation that DHS and DOT establish a formal written agreement that details how the agencies plan to address their shared responsibility. However, DHS did not concur with our recommendation related to increasing the reliability and usefulness of the GPS risk assessment and expressed concern about our evaluation of the NRE. DHS stated that it did not agree with this recommendation because DHS officials and subject matter experts believe the existing NRE analysis has sufficiently characterized the risk environment, and that our characterization of the NRE's incorporation of best practices is inaccurate.
Specifically, DHS disagreed with our analysis about the extent to which the NRE met NIPP criteria that risk assessments be complete, reproducible, defensible, and documented and provided reasons for its disagreement. For example, regarding our analysis of the NRE's incompleteness, DHS stated that the NIPP does not require that a risk assessment consider all, or even a minimum number of, critical infrastructure sectors to be complete. Rather, DHS noted, the NIPP states that the risk assessment methodology must assess consequence, vulnerability, and threat for every defined risk scenario. Regarding our analysis that the NRE was not being widely used, DHS noted that we do not reference a second, concurrent report directed at mitigation of GPS risks. DHS stated that the NRE was, by design, meant to primarily support the National Executive Committee for Space-Based PNT's high-level, interagency policy role, and that the committee and its staff had provided positive feedback. Based on its reasons for non-concurrence, DHS requested that we consider this recommendation resolved and closed. We disagree with DHS's assertion that our characterization of the NRE is inaccurate. We have added text to clarify that, based on the NIPP criteria, we determined that the NRE was incomplete overall because each aspect of the NRE's risk assessment—threat, vulnerability, and consequence—was incomplete. Regarding our analysis that the NRE was not reproducible, we found that the NRE does not conform to the NIPP because it is based entirely on subjective judgments of panelists. If different panelists had been chosen, the results might have been different. Subject matter experts we interviewed told us they were skeptical about the quality of the panel deliberations, and characterized the members' judgments as "educated guesses." Regarding whether the results were defensible, we continue to believe that potentially useful statistical techniques are only as valid as the underlying data, and that a core problem of the NRE methodology was that it did not document how the panel experts were chosen; the opinions of those experts were the basis for virtually all the data in the NRE. For example, at a minimum, the quality of DHS's panel selection would have been more transparent to the independent reviewer (as well as the participants) if DHS had detailed exactly what sector and GPS expertise were required for each panel and how well the participating panelists covered these areas of expertise. After DHS officials told us that they had little documentary support for the NRE, we narrowed our request and asked DHS officials to defend and provide support for some of their key conclusions, but they did not provide it. Several industry and federal representatives we interviewed questioned whether the panels had sufficiently broad expertise to capture the full scope of GPS vulnerabilities within sectors. Regarding documentation, as we reported, the NRE did include some elements of documentation that were consistent with the NIPP. However, DHS stated that with limited data, its methodology depended on the expert judgment of the NRE panels. Thus, as previously noted, documenting the rigor of the panel selection process was crucial to the validity of the NRE. Nevertheless, DHS did not provide documentation, either in the NRE or in subsequent information requests, on how the subject matter experts were selected.
This and other documentation issues, such as not fully reporting the underlying data supporting the risk calculations, also affect the NRE's reproducibility and defensibility. Regarding our point that the NRE has not been widely used to inform risk mitigation priorities, DHS commented that we fail to mention that the National Executive Committee also requested a mitigation assessment. The mitigation study was discussed in the risk mitigation section of our report, and we have included additional information on the study. However, since the studies were done concurrently, the mitigation study was not informed by the NRE. Among other things, that report identifies best practices to mitigate risk to GPS receivers rather than using the NRE to develop a mitigation plan to reduce the risks the NRE identified and guide resource allocation, as required by the NIPP. Regarding the intended use of the NRE, the NCO told us that the study was intended to help inform senior government officials about the risks associated with GPS use, not just the National Executive Committee or NCO. We have added language to clarify that NCO officials stated that they and the National Executive Committee were generally satisfied with the NRE. However, as we noted in the report, the NRE was distributed to other agencies, and TSA officials told us that they are not using the NRE and did not see the relevance to TSA, and officials from the Departments of Defense and Energy told us that the NRE was not helpful in managing GPS risks. DHS commented that data on GPS risk factors have not improved, and in its technical comments DHS noted that it has commissioned a study to obtain better data. However, while we recognize that obtaining better data is a challenge, we continue to believe that DHS should increase the reliability and usefulness of the GPS risk assessment by developing a plan and time frame to collect relevant threat, vulnerability, and consequence data for the various critical infrastructure sectors, and periodically review the readiness of data to conduct a more data-driven risk assessment while ensuring that DHS's assessment approach is more consistent with the NIPP. For example, DHS could use the classified threat assessment that was completed too late to be included in the NRE, and it could proactively acquire and use private sector threat assessments. We believe such actions will help DHS develop a more rigorous, reliable assessment to inform risk mitigation planning and resource allocation. Consistent with our recommendation, DHS has initiated an effort to survey and better understand the vulnerabilities of critical infrastructure sectors. In May 2013, DHS awarded funding to four companies to conduct a detailed survey report of existing civilian GPS receiver use within two critical infrastructure sectors, among other things. A later phase of this effort, according to DHS documentation, is to explore other sectors. This is a good first step toward gathering the kind of information DHS needs to conduct more data-driven risk assessments in the future. The National Executive Committee's 5-year plan recommends that DHS institute a risk management approach to assess threats, vulnerabilities, and potential consequences of interference to GPS signals and examine the best opportunities to mitigate those risks. Because of the shortcomings we found in the NRE, we do not believe that DHS has instituted an adequate risk management approach to address the risks associated with GPS interference.
Although DHS requested that we consider this recommendation resolved and closed, we disagree and believe that our recommendation is still needed to ensure that DHS develops a plan to gather the data required for risk assessment and risk management. In providing comments on the draft report, DOT declined to take a position on the recommendations but agreed to consider our recommendation to improve collaboration and address uncertainties in fulfilling the NSPD-39 backup-capabilities requirement. DOT stated that the agency has worked closely with DHS on PNT-related activities but it welcomed the opportunity to have agency roles clarified in a formal, written agreement. DOT also reiterated that the agency’s views are consistent with the National PNT Architecture report’s “greater common denominator” strategy described in this report. DOT noted that GPS dependency and the ability to handle a GPS disruption are not well understood and will not be well understood until there is a “real-world” incident or test scenario to evaluate. DOT also noted that the recently formed Interagency IDM/Alternative PNT task force needs to expand its scope beyond monitoring activities. We are sending copies of this report to the Secretary of Homeland Security, the Secretary of Transportation, the Secretary of Commerce and interested congressional committees. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Mark Goldstein at (202) 512-2834 or goldsteinm@gao.gov, or Joseph Kirschbaum at (202) 512-9971 or kirschbaumj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. We reviewed (1) the extent to which the Department of Homeland Security (DHS) has assessed the risks of Global Positioning System (GPS) disruptions and their potential effects on the nation’s critical infrastructure, (2) the extent to which the Department of Transportation (DOT) and DHS have planned or developed backup capabilities or other strategies to mitigate the effects of GPS disruptions, and (3) what strategies, if any, selected critical infrastructure sectors employ to mitigate the effects of GPS disruptions, and any remaining challenges they face. We focused on civilian uses of GPS and on the following four critical infrastructure sectors: communications, energy, financial services, and transportation systems. We focused on civilian, as opposed to military, uses of GPS because the majority of GPS applications and users are civilian. We selected these sectors because of their dependence on GPS, interdependence with other sectors, inclusion in DHS’s GPS National Risk Estimate (NRE), and designation as critical sectors. To address these issues, we interviewed or obtained written comments from federal and state government officials, industry representatives, and GPS subject matter experts. Specifically, we contacted government officials from agencies involved in GPS governance, such as the Department of Defense (DOD), DOT, and DHS. To obtain views from state government officials, we contacted members of the U.S. States & Local Government Subcommittee of the Civil GPS Service Interface Committee, which is a forum established by DOT to exchange information about GPS with the civilian user community. 
In selecting these members, we asked the chair of the Subcommittee to identify a list of potential state government officials, and we ensured the officials represented a variety of states, geographical locations, and GPS uses. We also contacted the Sector-Specific Agency (SSA) for each of the sectors we studied, as follows: DHS's Office of Cybersecurity and Communications (CS&C) for the communications sector, the Department of Energy (DOE) for the energy sector, the Department of the Treasury for the financial services sector, and DOT, the Transportation Security Administration (TSA), and the U.S. Coast Guard for the transportation systems sector. To obtain views from industry representatives, we contacted the Sector Coordinating Council (SCC) for each of the sectors we studied and selected industry participants to interview based on input from a designated spokesperson for each SCC. For the energy and transportation systems sectors, we contacted each sub-sector, although not all sub-sectors participated or provided us with written responses, as shown in table 2. Industry representatives from the financial services sector declined to respond to our requests for information. Additionally, we contacted various GPS subject matter experts, including members of the National Space-Based Positioning, Navigation, and Timing (PNT) Advisory Board (Advisory Board), which is a federal advisory committee that provides independent advice to the U.S. government on GPS matters. We requested that our Advisory Board liaison invite all members to participate, and members participated based on their availability. Views expressed by members of the Advisory Board do not necessarily represent the official position of the Board as a whole. We also attended a formal meeting of the Advisory Board in May 2013. In selecting experts to contact, we considered relevant published literature; their experience as reflected in publications, testimonies, positions held, and their biographies; recommendations from the Institute of Navigation (a nonprofit professional society dedicated to PNT); and other stakeholders' recommendations. See table 2 for a list of the stakeholders we contacted. To review the extent to which DHS has assessed the risks of GPS disruptions and their potential effects on the nation's critical infrastructure, we compared DHS's efforts to established risk assessment criteria and contacted GPS stakeholders. Specifically, as the centerpiece of DHS's GPS risk assessment efforts, we reviewed DHS's 2012 GPS NRE and compared it to the risk assessment criteria established in the National Infrastructure Protection Plan (NIPP), originally issued by DHS in 2006 and updated in 2009. To learn more about the NRE's scope, methodology, and conduct, we interviewed the DHS officials responsible for authoring the NRE and reviewed related documentation. We also reviewed the DHS-commissioned study that was requested in conjunction with the NRE. Additionally, we reviewed other assessments that consider GPS risks—including threat, vulnerability, and consequence—from DHS and others. For example, documentation we reviewed included DOT's 2001 Vulnerability Assessment of the Transportation Infrastructure Relying on the GPS, the Homeland Security Institute's 2005 GPS Vulnerability Assessment, MITRE's 2010 Coast Guard C4IT GPS Vulnerabilities Assessment, and the North American Electric Reliability Corporation's 2012 Extended Loss of GPS Impact on Reliability whitepaper, among others.
Additionally, we interviewed or obtained written responses from the government officials, industry representatives, and GPS subject matter experts identified in table 2 to obtain their views on the NRE and to assess whether the NRE is being used to inform sector risk management efforts. To review the extent to which DOT and DHS have planned or developed backup capabilities or other strategies to mitigate the effects of GPS disruptions, we contacted GPS stakeholders, examined agency documentation, and reviewed relevant federal policies and directives. Specifically, we interviewed DOT and DHS officials as identified in table 2. We also reviewed documentation from these agencies on the efforts they have undertaken. For example, DHS documentation we reviewed included materials related to IDM efforts and the draft 2013 Interagency Memorandum of Agreement with Respect to Support to Users of the Navstar GPS, among others. DOT documentation we reviewed included the 2006 National PNT Architecture Terms of Reference, the 2010 National PNT Architecture Implementation Plan, the 2008 DOD National PNT Architecture Study Final Report, the 2008 Memorandum of Agreement between DOD and DOT on Civil Use of the GPS, and documentation related to FAA's Alternative PNT initiative, among others. We also reviewed other key documentation related to GPS, such as the 2012 Federal Radionavigation Plan. We compared this information to NSPD-39 and also reviewed other relevant policies, such as the President's 2010 National Space Policy of the United States of America. We also interviewed or obtained written responses from the government officials, industry representatives, and GPS subject matter experts identified in table 2 to obtain their views on DOT and DHS's efforts or to obtain additional context. For example, we interviewed the NCO, reviewed meeting minutes from the National Executive Committee for Space-Based PNT and its Executive Steering Group, and reviewed the National Five-Year Plan for Space-Based PNT for Fiscal Years 2009-2013. Additionally, we compared DOT and DHS's efforts against our criteria on key elements of effective collaboration. To review what strategies, if any, selected critical infrastructure sectors employ to mitigate the effects of GPS disruption, and any remaining challenges they face, we contacted GPS stakeholders identified in table 2 and reviewed relevant reports and whitepapers from these entities. We also interviewed the SSAs for each sector, as described above and identified in table 2, and reviewed the Sector-Specific Plans for each sector to assess if GPS is addressed. We reviewed the NIPP risk management framework for guidance on measuring the effectiveness of sector risk mitigation efforts. Additionally, we reviewed literature and presentations from academia, the Space Weather Prediction Center within NOAA's National Weather Service, other government agencies, GPS subject matter experts, and research institutions. We received Coast Guard data on the number of GPS incidents reported to NAVCEN. We did not assess the reliability of these data because they did not materially affect our findings, conclusions, or recommendations. We also interviewed or obtained written responses from the government officials, industry representatives, and GPS subject matter experts identified in table 2 to obtain their views on sector mitigation efforts and factors that affect sector mitigation efforts.
We conducted this review from November 2012 to November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individuals named above, Sally Moino and Glenn Davis, Assistant Directors; Eli Albagli; Melissa Bodeau; Katherine Davis; Richard Hung; Bert Japikse; SaraAnn Moessbauer; Josh Ormond; Nalylee Padilla; and Daniel Rodriguez made key contributions to this report.
GPS provides positioning, navigation, and timing data to users worldwide and is used extensively in many of the nation's 16 critical infrastructure sectors, such as communications and transportation. GPS is also a key component in many of the modern conveniences that people rely on or interact with daily. However, sectors' increasing dependency on GPS leaves them potentially vulnerable to disruptions. GAO was asked to review the effects of GPS disruptions on the nation's critical infrastructure. GAO examined (1) the extent to which DHS has assessed the risks and potential effects of GPS disruptions on critical infrastructure, (2) the extent to which DOT and DHS have developed backup strategies to mitigate GPS disruptions, and (3) what strategies, if any, selected critical infrastructure sectors employ to mitigate GPS disruptions and any remaining challenges. GAO reviewed documents, compared them to relevant federal guidance, and interviewed representatives and experts from federal and state governments, industry, and academia. The focus of this review was on civilian GPS uses within four critical infrastructure sectors. To assess the risks and potential effects from disruptions in the Global Positioning System (GPS) on critical infrastructure, the Department of Homeland Security (DHS) published the GPS National Risk Estimate (NRE) in 2012. In doing so, DHS conducted a scenario-based risk assessment for four critical infrastructure sectors using subject matter experts from inside and outside of government. Risk assessments involve complex analysis, and conducting a risk assessment across multiple sectors with many unknowns and little data is challenging. DHS's risk management guidance can be used to help address such challenges. However, GAO found that the NRE lacks key characteristics of risk assessments outlined in DHS's risk management guidance and, as a result, is incomplete and has limited usefulness to inform mitigation planning, priorities, and resource allocation. A plan to collect and assess additional data and subsequent efforts to ensure that the risk assessment is consistent with DHS guidance would contribute to more effective GPS risk management. A 2004 presidential directive requires the Department of Transportation (DOT), in coordination with DHS, to develop backup capabilities to mitigate GPS disruptions, and the agencies have initiated a variety of efforts that contribute to fulfilling the directive. For example, DOT is researching GPS alternatives for aviation, and DHS began efforts on GPS interference detection and mitigation and is researching possibilities for a nationwide backup to GPS timing, which is used widely in critical infrastructure. However, due to resource constraints and other reasons, the agencies have made limited progress in meeting the directive, and many tasks remain incomplete, including identifying GPS backup requirements and determining suitability of backup capabilities. Furthermore, the agencies' efforts have been hampered by a lack of effective collaboration. In particular, DOT and DHS have not clearly defined their respective roles, responsibilities, and authorities or what outcomes would satisfy the presidential directive. Without clearly defining both roles and desired outcomes, DOT and DHS cannot ensure that they will satisfy mutual responsibilities. Implementing key elements of effective collaboration would allow the agencies to address many uncertainties regarding fulfillment of their presidential policy directive.
Selected critical infrastructure sectors employ various strategies to mitigate GPS disruptions. For example, some sectors can rely on timing capabilities from other sources of precise time in the event of GPS signal loss. However, both the NRE and stakeholders GAO interviewed raised concerns about the sufficiency of the sectors' mitigation strategies. Federal risk management guidance requires DHS to work with federal agencies and critical infrastructure sector partners to measure the nation's ability to reduce risks to critical infrastructure by using a process that includes metrics. GAO found that DHS has not measured the effectiveness of sectors' efforts to mitigate GPS disruptions and that, as a result, DHS cannot ensure that the sectors could sustain essential operations during GPS disruptions. The lack of agreed-upon metrics to measure the effectiveness of sector mitigation efforts hinders DHS's ability to objectively assess improvements, track progress, establish accountability, provide feedback mechanisms, or inform decision makers about the appropriateness of the mitigation activities. GAO recommends that DHS ensure its GPS risk assessment approach is consistent with DHS guidance, that DHS develop a plan to measure the effectiveness of mitigation efforts, and that DOT and DHS improve collaboration. DHS concurred with the latter two recommendations but did not concur with the first. GAO continues to believe that improving the risk assessment approach will capitalize on progress DHS has made and will improve future efforts.
We conducted our work for GAO-09-879 from September 2008 to September 2009 and updated the analysis of the number and duration of continuing resolutions (CRs) from February to March 2013. Because CRs only provide funding until agreement is reached on final appropriations, they create uncertainty for agencies about both when they will receive their final appropriation and what level of funding ultimately will be available. The effects of CRs on federal agencies differ based in part on the duration and number of CRs. As the examples in my statement will illustrate, shorter and more numerous CRs can lead to repetitive work. Longer-term CRs allowed for better planning in the near term; however, operating under the level of funding and other restrictions in the CR for a prolonged period also limited agencies' decision-making options and made tradeoffs more difficult. As shown in figure 1, the duration and number of CRs have varied greatly during fiscal years 1999-2013, ranging from 1 to 197 days. The number of CRs enacted in each year also varied considerably, ranging from 2 to 21, excluding the current fiscal year. The effects of CRs also vary by agency and program. Not all federal agencies, for example, are under CRs for the same amount of time. In our 2009 report we found that agencies covered by the Defense, Military Construction, and Homeland Security Appropriations Subcommittees operated under CRs for about 1 month on average during fiscal years 1999-2009, whereas other agencies operated under CRs for at least 2 months on average. More recently, for fiscal year 2013, all federal agencies are operating under a CR scheduled to expire on March 27, 2013. Congress includes provisions applicable to the funding of most agencies and programs under a CR. These provisions provide direction regarding the availability of funding within a CR and demonstrate the temporary nature of the legislation. For example, one standard provision provides for an amount to be available to continue operations at a designated rate for operations. Since fiscal year 1999, different formulas have been enacted for determining the rate for operations during the CR period. The amount often is based on the prior fiscal year's funding level or the "current rate" but may also be based on a bill that has passed either the House or Senate. Depending on the language of the CR, different agencies may operate under different rates; the sketch following this discussion illustrates the basic proration arithmetic. The amount is available until a specified date or until the agency's regular appropriations act is enacted, whichever is sooner. In general, CRs prohibit new activities and projects for which appropriations, funds, or other authority were not available in the prior year. Also, so that agency action does not impinge upon final funding prerogatives, agencies are directed to take only the most limited funding actions, and CRs limit the ability of an agency to obligate all, or a large share, of its available appropriation during the CR. In 2007, Congress enacted the furlough provision in the CR for the first time. This provision permits OMB and other authorized government officials to apportion, or distribute, amounts available for obligation up to the full amount of the rate for operations to avoid a furlough of civilian employees. This authority may not be used until after an agency has taken all necessary action to defer or reduce nonpersonnel-related administrative expenses.
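As a rough illustration of how the standard rate-for-operations provision translates into dollars during a CR period, the sketch below, in Python, prorates a prior-year appropriation over the length of the CR. The $1.2 billion appropriation and the straight daily proration are illustrative assumptions; actual apportionments under OMB guidance reflect additional considerations, such as seasonal spending patterns, anomalies, and the furlough-avoidance provision noted above.

```python
# Simplified proration of funds available under a continuing resolution.
# The dollar figure and the straight daily proration are illustrative
# assumptions; actual OMB apportionments involve further considerations.

def cr_amount_available(prior_year_appropriation: float, cr_days: int,
                        days_in_year: int = 365) -> float:
    """Prorate a full-year rate for operations over the CR period."""
    return prior_year_appropriation * cr_days / days_in_year

prior_year = 1_200_000_000  # assumed prior-year appropriation: $1.2 billion
for days in (30, 90, 197):  # 197 days was the longest CR in FY 1999-2013
    available = cr_amount_available(prior_year, days)
    print(f"A {days:3d}-day CR at the prior-year rate makes "
          f"~${available / 1e6:,.1f} million available")
```

Because agencies generally may not obligate beyond this prorated share, purchasing shrinks to match the CR window, which is why, as discussed below, BOP facilities that ordinarily bought 60- to 90-day food supplies limited purchases to the length of each CR.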
Recognizing the constraints inherent in a CR, Congress has at times provided flexibility for certain programs and initiatives through the use of legislative anomalies, which provide funding and authorities different from the standard CR provisions. Although anomalies were uncommon, the majority of them provided either (1) a different amount than that provided by the standard rate for operations or (2) an extension of expiring program authority. In some cases, CRs provide full-year appropriations for a program or activity, to help agencies manage funds. For example, in fiscal year 2009, the CR appropriated an amount to cover the entire year for Low Income Home Energy Assistance Program (LIHEAP) payments. LIHEAP provides assistance for low-income families in meeting their home energy needs, and typically 90 percent of LIHEAP funding is obligated in the first quarter to cover winter heating costs. In addition to the anomalies, multiyear appropriations and advance appropriations can help agencies manage the effects of CRs. For example, agency officials stated that multiyear appropriations, which provide the authority to carry over funds into the next fiscal year, can be helpful in years with lengthy CRs because there is less pressure to obligate all of their funds before the end of the fiscal year, thus reducing the incentive to spend funds on lower priority items that can be procured more quickly. Case study agency officials contacted for our 2009 report said that, absent a CR, they would have hired additional staff sooner for government services such as grant processing and oversight, food and drug inspections, intelligence analysis, prison security, claims processing for veterans' benefits, or general administrative tasks, such as financial management and budget execution. While agency officials said that it was difficult to quantify the effect that hiring delays related to CRs had on specific agency activities given the number of variables involved, agencies provided examples that illustrated the potential adverse effects, including the following: An FDA official said that deferring the hiring and training of staff during a CR affected the agency's ability to conduct the targeted number of inspections negotiated with FDA's product centers in areas such as food and medical devices and that routine surveillance activities (e.g., inspections, sample collections, field examinations, etc.) were some of the first to be affected. BOP officials said that deferring hiring during CRs had made it difficult for BOP to maintain the ratio of corrections officers to inmates as the prison population increased. VBA officials cited missed opportunities in processing additional benefits claims and completing other tasks. Because newly hired claims processors require as much as 24 months of training to reach full performance, a VBA official said that the effects of hiring delays related to CRs were not immediate, but reduced service delivery in subsequent years. Several case study agencies also reported delaying contracts during the CR period, which could reduce the level of services agencies provided and increase costs. For example, BOP reported delaying the activation of its Butner and Tucson Prison facilities and two other federal prisons in 2007 during the CR period to make $65.6 million available for more immediate needs. According to BOP, these delays in the availability of additional prison capacity occurred at a time when prison facilities were already overcrowded.
BOP officials also said that delaying contract awards for new BOP prisons and renovations to existing facilities prevented the agency from locking in prices and resulted in higher construction costs and increases in the cost of supplies. Based on numbers provided by BOP, a delay in awarding a contract for the McDowell Prison Facility resulted in about $5.4 million in additional costs. In some instances, delaying contracts resulted in additional costs in terms of time and resources. For example, officials from BOP, VHA, and VBA said that they sometimes had to solicit bids a second time or have environmental, architectural, or engineering analyses redone. Some agency officials said that contracting delays resulting from longer CRs also affected their ability to fully compete and award contracts in the limited time remaining in the fiscal year after the agency had received its regular appropriation. VHA and ACF reported that the application time available for discretionary grants may also be compressed by a longer CR. Further, VA stated that this compressed application time adversely affected the quality of submitted applications. Similarly, BOP's Field Acquisition Office, which is responsible for acquisitions over $100,000, said that trying to complete all of its contracts by the end of the fiscal year when a CR lasts longer than 3 to 4 months negatively affects the quality of competition. According to some representatives of nonprofit organizations and state and local governments, federal grant recipients could temporarily support programs with funds from other sources until agencies' regular appropriations are passed; however, it was more difficult to do so during periods of economic downturn such as the one they recently experienced. An ACF official told us that nonprofit organizations providing shelter to unaccompanied alien children have used lines of credit to bridge gaps in federal funding during a CR. However, in March 2009, a shelter in Texas informed ACF's Office of Refugee Resettlement that its credit was at its limit and it was in immediate need of additional funds to sustain operations for the next 45 to 60 days. The Office of Refugee Resettlement made an emergency grant to this organization to maintain operations using the remaining CR funding. Case study agencies reported that they continued to feel the effects of the delays caused by CRs even after the agencies had received their full-year appropriations. In general, longer CRs can make it more difficult to implement unexpected changes in agencies' regular appropriations, because agencies have a limited time to do so. In addition, longer CRs can contribute to distortions in agencies' spending as agencies rush to obligate funds late in the fiscal year. For example, agency officials said that if hiring was delayed during the CR period, it was particularly difficult to fill positions by the end of the fiscal year after a longer CR period. Agency officials said that if the agency does not have enough time to spend its funding on high-priority needs (such as hiring new staff) because of a lengthy CR, the agency ultimately may spend funds on a lower priority item that can be procured quickly. In addition to delays, all case study agencies reported having to perform additional work to manage within the constraints of the CR—potentially resulting in hundreds of hours of lost productivity.
The most common type of additional work that agencies reported was having to enter into shorter-term contracts or grants multiple times to reflect the duration of the CR. Agencies often made contract or grant awards monthly or in direct proportion to the amount and timing of funds provided by the CR. In other words, if a CR lasted 30 days, an agency would award a 30-day contract for goods or services. Then, each time legislation extended the CR, the agency would enter into another short-term contract to make use of the newly available funding. In 2009, agencies reported that the time needed for these tasks may be minimal and vary depending on the complexity of a contract or grant, but the time spent is meaningful when multiplied across VHA's 153 medical facilities and roughly 800 clinics, FBI's 56 field offices, BOP's 115 institutions, and the thousands of grants and contracts awarded by our case study agencies. For example, at the time of our study, VHA estimated that it awarded 20,000 to 30,000 contracts a year; ACF's Head Start program awarded grants to over 1,600 different recipients each year; and FBI placed over 7,500 different purchase orders a year. While none of the agencies reported tracking these costs, VHA estimated that a 1-month CR resulted in over $1 million in lost productivity at VA medical facilities and over $140,000 in additional work for the agency's central contracting office. These estimates were based on agency officials' rough approximations of the hours spent on specific activities related to CRs multiplied by the average salary cost of the federal employee performing the task; a simplified version of this arithmetic appears in the sketch following this discussion. The time estimate does not include the additional work required to issue multiple grants or activities related to managing during the CR, such as weekly planning meetings and monitoring agency resources and requisitions. In general, numerous shorter CRs led to more repetitive work for agencies managing contracts than longer CRs. Numerous shorter CRs were particularly challenging for agencies, such as VHA and BOP, that have to maintain an inventory of food, medicine, and other essential supplies, and could result in increased costs. For example, absent a CR, BOP officials said that prison facilities routinely contracted for a 60- to 90-day supply of food. In addition to reducing work, this allowed the prison facilities to negotiate better terms in delivery order contracts by taking advantage of economies of scale. However, under shorter CRs, these facilities generally limited their purchases to correspond with the length and funding provided by the CR. Thus, the prisons made smaller, more frequent purchases, which BOP officials said could result in increased costs. Agency officials told us they took various actions to manage inefficiencies resulting from CRs, including delays and increased workload. For example, to avoid the types of hiring delays often associated with a CR, during the CR period in 2009 FBI proceeded with its hiring activities based on a staffing plan supported by the President's Budget. This helped FBI avoid a backlog in hiring later in the year and cumulatively over time, but the agency assumed some risk because it could have received a regular appropriation that did not support the hiring plan it had implemented. Had this happened, FBI officials stated that FBI likely would have had to suspend hiring for the remainder of the fiscal year and make difficult cuts to other nonpersonnel expenses.
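The VHA estimate above is essentially a workload-times-wage calculation repeated across facilities and CR extensions. A minimal Python sketch of that arithmetic follows; the number of actions per facility, hours per action, and loaded hourly rate are illustrative assumptions, not VHA's actual figures.

```python
# Rough reconstruction of a lost-productivity estimate under CRs:
# (repetitive actions) x (hours per action) x (loaded hourly cost).
# Actions, hours, and hourly rate below are illustrative assumptions.

facilities = 153           # VA medical facilities (from the report)
actions_per_facility = 4   # assumed short-term contract renewals per CR month
hours_per_action = 3.0     # assumed staff hours to renew/modify one contract
loaded_hourly_cost = 75.0  # assumed salary plus benefits, dollars per hour

actions = facilities * actions_per_facility
lost_hours = actions * hours_per_action
lost_dollars = lost_hours * loaded_hourly_cost
print(f"{actions} contract actions -> {lost_hours:,.0f} staff hours "
      f"-> ~${lost_dollars:,.0f} in lost productivity per CR month")
```

Scaled across VHA's roughly 800 clinics and the 20,000 to 30,000 contracts the agency awards annually, the same arithmetic readily reaches the seven-figure total the agency reported.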
In general, numerous shorter CRs led to more repetitive work for agencies managing contracts than longer CRs did. Numerous shorter CRs were particularly challenging for agencies, such as VHA and BOP, that have to maintain an inventory of food, medicine, and other essential supplies, and could result in increased costs. For example, BOP officials said that, absent a CR, prison facilities routinely contracted for a 60- to 90-day supply of food. In addition to reducing work, this allowed the prison facilities to negotiate better terms in delivery order contracts by taking advantage of economies of scale. Under shorter CRs, however, these facilities generally limited their purchases to correspond with the length and funding provided by the CR. Thus, the prisons made smaller, more frequent purchases, which BOP officials said could result in increased costs.

Agency officials told us they took various actions to manage inefficiencies resulting from CRs, including delays and increased workload. For example, to avoid the types of hiring delays often associated with a CR, during the CR period in 2009 FBI proceeded with its hiring activities based on a staffing plan supported by the President's Budget. This helped FBI avoid a backlog in hiring later in the year and cumulatively over time, but the agency assumed some risk because it could have received a regular appropriation that did not support the hiring plan it had implemented. Had this happened, FBI officials stated, FBI likely would have had to suspend hiring for the remainder of the fiscal year and make difficult cuts to other nonpersonnel expenses.

To reduce the amount of additional work required to modify contracts and award grants in multiple installments, ACF and FDA reported shifting contract and grant cycles to later in the fiscal year. An agency's ability to shift its contract cycle depends on a number of factors, including the type of services being acquired. For example, an agency can shift its contract cycle so that annual contracts for severable services, such as recurring janitorial services, are executed in the third and fourth quarters of the fiscal year, when agencies are less likely to be operating under a CR.

Further, FBI reported it generally entered into contracts based on the rate for operations for the period covered by the CR. Previously, each time Congress extended a CR, FBI renewed its contracts to make use of the additional funds that became available, and FBI's Finance Division provided a requisition for each renewal. Under FBI's new streamlined process, the Finance Division committed enough funds at the beginning of the fiscal year to cover a full-year contract.

To reduce the administrative work required to subdivide funds from each CR to different offices, programs, or both, VBA and VHA reported that they did not allot specific dollar amounts during a CR but rather provided guidance that all offices operate at a certain percentage of the previous year's appropriations. According to agency officials, this provides the agency with more flexibility during the CR period and reduces the workload associated with changes in funding levels. VHA officials said that this also allows each facility to manage its funds to meet priorities identified at the local level.

We have not reviewed agency operations under CRs since we issued our 2009 report. However, studies issued after our report was released have highlighted similar themes. This concludes my statement for the record.
Congress annually faces difficult decisions on what to fund among competing priorities and interests with available resources. Continuing resolutions (CRs) can create budget uncertainty, complicating agency operations and causing inefficiencies. In all but 3 of the last 30 years, Congress has passed CRs to provide funding for agencies to continue operating until agreement is reached on final appropriations. GAO was asked to provide a statement based on findings from its 2009 report on managing under CRs (GAO-09-879). This statement focuses on (1) a history of CRs and the provisions that Congress includes within them and (2) the effects of CRs on agency operations and actions that federal agencies have taken to manage these effects. GAO's 2009 report reviewed six federal agencies within three cabinet-level departments, selected based on factors such as the length of time spent managing under CRs and the types of services they provided. These six case study agencies were the Administration for Children and Families and the Food and Drug Administration within the Department of Health and Human Services; the Veterans Health Administration and Veterans Benefits Administration within the Department of Veterans Affairs; and the Bureau of Prisons and Federal Bureau of Investigation within the Department of Justice. Under CRs that provide funding for the remainder of a fiscal year, agencies obtain certainty about funding; therefore, CRs that spanned the months remaining in a fiscal year were not the focus of GAO's report. GAO did not make recommendations in the 2009 report.

Because CRs only provide funding until agreement is reached on final appropriations, they create uncertainty for agencies about both when they will receive their final appropriation and what level of funding ultimately will be available. The effects of CRs on federal agencies differ based in part on the duration and number of CRs and may vary by agency and program. CRs include provisions that prohibit agencies from beginning new activities and projects and direct agencies to take only the most limited funding actions. Congress can provide flexibility for certain programs and initiatives through the use of legislative anomalies, which provide funding and authorities different from the standard CR provisions. Officials from all six case study agencies reported that they delayed hiring or contracts during the CR period, potentially reducing the level of services agencies provided and increasing costs. After operating under CRs for a prolonged time, agencies faced additional challenges executing their final budget as they rushed to spend funds in a compressed timeframe. All case study agencies reported performing additional work to manage within CR constraints, such as issuing shorter term grants and contracts multiple times. Agency officials reported taking varied actions to manage inefficiencies resulting from CRs, including shifting contract and grant cycles to later in the fiscal year to avoid repetitive work, and providing guidance on spending rather than allotting specific dollar amounts during CRs to provide more flexibility and reduce the workload associated with changes in funding levels.
Our prior work highlights some of the challenges VA faces in formulating its budget. As we reported in 2006, these challenges include making realistic assumptions about the budgetary impact of some of its policies, making accurate calculations, and obtaining sufficient data for useful budget projections. In 2009, we again reported on VA's budget formulation challenges—specifically, VA's challenges projecting the amount of long-term care it will provide and estimating the costs of this care.

Our 2006 report on VA's overall health care budget illustrated that in formulating its budget, VA faces challenges making realistic assumptions about the budgetary impact of its proposed policies. We reported that the President's requests for additional funding for VA's medical programs for fiscal years 2005 and 2006 were in part due to unrealistic assumptions VA made about how quickly the department would realize savings from proposed changes in its nursing home policy. Specifically, we found that:

- VA's fiscal year 2005 budget justification included a proposal to reduce the amount of care VA provides—known as workload—in VA-operated nursing homes, one of three settings in which VA provides nursing home services. VA assumed that savings from this reduction in workload would be realized on the first day of fiscal year 2005. VA officials later told us that this assumption had been unrealistic because of the accelerated time frame of the planned policy change. The change would have required transferring or discharging veterans from the nursing homes in an extremely compressed time frame; moreover, achieving substantial savings from this policy would likely have also required reducing the number of VA employees.

- VA's fiscal year 2006 budget justification included a policy proposal to reduce patient workload and costs by prioritizing the veterans who would receive a certain type of VA nursing home care. VA assumed that savings resulting from the policy change could be realized before the start of the 2006 fiscal year; however, VA officials said they later realized that time frame was unrealistic.

In our 2006 report, we recommended that VA improve its budget formulation processes by explaining in its budget justifications the relationship between the implementation of proposed policy changes and the expected timing of cost savings to be achieved. VA agreed with this recommendation and acted on it in its fiscal year 2009 budget justification.

Our 2006 report also illustrated that VA faces challenges making accurate calculations during budget formulation. As we reported, VA made computation errors when estimating the effect of its proposed fiscal year 2006 nursing home policy, and this contributed to requests for supplemental funding that year. We found that VA underestimated workload and the costs of providing care in all three of its nursing home settings. VA officials said that the errors resulted from calculations being made in haste during the OMB appeal process, and that a more standardized approach to long-term care calculations could provide stronger quality assurance to help prevent future mistakes. In 2006, we recommended that VA strengthen its internal controls to better ensure the accuracy of calculations it uses in preparing budget requests. VA agreed with and implemented this recommendation and had the savings estimates from proposed policy changes in its fiscal year 2009 budget justification validated by an outside actuarial firm.
In formulating its budget, VA also faces the challenge of obtaining sufficient data for useful workload projections, as illustrated in our 2006 report. We reported that the President's requests for additional funding for VA health care programs in fiscal years 2005 and 2006 were, in part, due to the lack of sufficient data on how many OEF/OIF veterans VA would care for in those fiscal years. In its fiscal year 2005 budget justification, VA projected that it would need to provide care to about 23,500 returning OEF/OIF veterans. VA subsequently revised its projections to indicate that VA would serve nearly 100,000 OEF/OIF veterans. According to VA officials, the original projections for providing care to OEF/OIF veterans had been understated for fiscal year 2005 in part because the projections were based on insufficient data on veterans returning from Iraq and Afghanistan. Insufficient data on returning OEF/OIF veterans continued to be a challenge in formulating VA's fiscal year 2006 budget justification. VA officials told us they did not have sufficient data for that fiscal year due to challenges obtaining the data needed to identify these veterans from the Department of Defense (DOD). After the President submitted the fiscal year 2006 budget request, VA determined that it expected to provide care to approximately 87,000 more veterans than initially projected for fiscal year 2006. According to VA officials, the agency subsequently began receiving the DOD data it requires to identify OEF/OIF veterans on a monthly basis rather than in the quarterly reports it used to receive.

Our recent work on VA's budget showed how VA continued to face challenges formulating its budget for long-term care services. In January 2009, we reported on VA's challenges developing realistic assumptions to project the amount of noninstitutional long-term care services it would provide to veterans. We found that, in its fiscal year 2009 budget justification, VA included a spending estimate for noninstitutional long-term care services that appeared unreliable, in part because this spending estimate was based on a workload projection that appeared to be unrealistically high, given recent VA experience providing these services. Specifically, in an effort to help meet veterans' demand for noninstitutional services, VA projected that it would increase its noninstitutional workload 38 percent from fiscal year 2008 to fiscal year 2009. VA included this projection in the budget despite the fact that from fiscal year 2006 to fiscal year 2007—the most recent year for which workload data are available—VA's actual workload for these services decreased about 5 percent, rather than increasing as projected. (See fig. 1.) In its fiscal year 2009 budget justification, VA did not provide information regarding its plans for how it will increase noninstitutional workload 38 percent from fiscal year 2008 to fiscal year 2009.

To strengthen the credibility of the estimates of long-term care spending in VA's budgeting proposals and increase transparency for Congress and stakeholders, we recommended that in future budget justifications VA use workload projections for estimating noninstitutional long-term care spending that are consistent with VA's recent experience or report the rationale for using projections that are not. In commenting on a draft of our report, VA did not indicate whether it agreed with this recommendation, but stated it would complete an action plan that responds to the recommendation by the end of March 2009.
In addition to having difficulty developing reliable projections of the amount of long-term care services it will provide, VA also faces challenges developing realistic assumptions about the cost of providing these services when formulating its budget. In January 2009, we reported that VA may have underestimated its nursing home spending for fiscal year 2009 because it used a cost assumption that appeared unrealistically low, given both recent VA experience and economic forecasts of increases in health care costs. To formulate its nursing home spending estimate, VA assumed that the cost of providing a day of nursing home care would increase 2.5 percent from fiscal year 2008 to fiscal year 2009. However, from fiscal year 2006 to fiscal year 2007—the most recent year for which actual cost data are available—the cost to provide this care increased approximately 5.5 percent. Similarly, from fiscal year 2007 to fiscal year 2008, VA itself estimated that its nursing home costs would increase approximately 11 percent. In addition to being inconsistent with this recent experience, VA's assumed 2.5 percent cost increase is less than the rate provided in OMB guidance to VA to help with its budget estimates, which forecast a rate of inflation for medical services of 3.8 percent for the same time period.

In our January 2009 report, we also found that VA's estimate of the amount it would spend for noninstitutional long-term care services in fiscal year 2009 appeared to be unreliable—in part because VA based this estimate on a cost assumption that appeared unrealistically low when compared to VA's recent experience and to economic forecasts of increases in health care costs. Specifically, VA assumed that the cost of providing a day of noninstitutional long-term care would not increase from its fiscal year 2008 level. VA used this assumption to formulate its noninstitutional long-term care spending estimate despite the fact that from fiscal year 2006 to fiscal year 2007—the most recent year for which actual cost data are available—the cost of providing these services increased 19 percent. VA's cost assumption is also inconsistent with the OMB guidance provided to VA. In its fiscal year 2009 budget justification, VA did not provide information regarding its nursing home or noninstitutional cost assumptions. However, VA officials told us that they made these assumptions in order to be conservative in VA's fiscal year 2009 budget estimates.

To strengthen the credibility of the estimates of long-term care spending in VA's budgeting proposals and increase transparency for Congress and stakeholders, we recommended that VA, in future budget justifications, use cost assumptions for estimating both nursing home and noninstitutional long-term care spending that are consistent with VA's recent experience or report the rationale for using cost assumptions that are not. In commenting on a draft of our report, VA did not indicate whether it agreed with these recommendations, but stated it would complete an action plan that responds to the recommendations, again by the end of March 2009.
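The first-order effect of an understated cost-growth assumption is easy to quantify; the identity below is our illustrative arithmetic, not a formula from the report. If the per-day cost of care \(c_0\) actually grows at rate \(a\) while the budget assumes rate \(b\), the one-year understatement as a share of baseline cost is

\[
\frac{c_0(1+a) - c_0(1+b)}{c_0} = a - b .
\]

With \(a = 5.5\) percent (recent nursing home experience) and \(b = 2.5\) percent (VA's assumption), spending is understated by about 3 percent of baseline cost in a single year; with \(a = 19\) percent and \(b = 0\), as in the noninstitutional case, the gap is roughly 19 percent, before any workload growth or compounding over multiple years.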
Our prior work also highlights some of the challenges VA faces in executing its health care budget. These challenges include spending and tracking funds designated by VA for specific health care initiatives, as well as providing timely and useful information to Congress regarding budget execution progress and problems. After formulating its estimates of likely spending on its health care services, VA is responsible for executing its budget efficiently and effectively.

However, our 2006 report on VA's funding for specific mental health initiatives showed that in executing its budget, VA faces challenges spending and tracking the use of funds designated for specific VA health care initiatives, in particular funds that VA intends to use to expand services to improve access to care for its veteran population. For example, in 2006, we reported that in fiscal years 2005 and 2006, VA had difficulty spending and tracking funds it had designated for new initiatives included in VA's mental health strategic plan, which were to expand mental health services in order to address gaps previously identified by VA. These initiatives—which were to be funded by $100 million in fiscal year 2005 and $200 million in fiscal year 2006—were intended to enhance VA's larger mental health program. In both fiscal years, VA allocated funds to VA medical centers and offices that were to be used for mental health strategic plan initiatives during those fiscal years, as part of VA's efforts to expand these services. As we reported in 2006, VA faced challenges in both spending the funds and tracking their use in fiscal years 2005 and 2006:

- Challenges in spending funds—We found that, by the end of fiscal years 2005 and 2006, some VA medical centers had not spent all of the funds they had received for mental health strategic plan initiatives for those fiscal years, according to VA medical center officials and other available information. In fiscal year 2005, this was due to factors such as the length of time it took the medical centers to hire new staff and locate or renovate space for new mental health programs.

- Challenges in tracking the use of funds—In both fiscal years, VA did not have an adequate method in place for tracking spending for its new mental health strategic plan initiatives. VA did not track how funds allocated for plan initiatives were spent, and as a result, VA could not determine to what extent the funds for plan initiatives were spent on those initiatives.

To provide information for improved management and oversight, we recommended that VA track the extent to which the funds allocated for mental health strategic plan initiatives are spent for those initiatives. Since we reported on this issue in November 2006, VA has implemented a tracking system to monitor spending on mental health strategic plan initiatives and help determine the extent to which funds allocated for those initiatives are spent on them.

Although VA took steps to address its challenges tracking its spending on mental health initiatives, our more recent work in 2009 shows how VA continues to face spending challenges when the department undertakes efforts to expand services for veterans. In January 2009 we reported that VA's fiscal year 2009 budget justification included plans to increase VA's spending on noninstitutional long-term care services, in order to partially close previously identified gaps in the provision of these services. VA assumed it would be able to increase its noninstitutional workload by 38 percent from fiscal year 2008 to fiscal year 2009. However, in our report we raised questions about VA's ability to achieve this increase in workload. As we noted in our report, VA officials stated that increasing noninstitutional workload is challenging.
Similar to VA's prior mental health initiatives, many of VA's noninstitutional services are provided by VA personnel, and as a result, VA must take the time to hire and train more personnel before it has the capacity to serve an increased workload. These factors suggest that VA may have difficulty spending its resources as planned. In its budget justification, VA did not explain how it plans to achieve this increase in noninstitutional workload.

As VA executes its budget, VA also faces the challenge of providing timely information to Congress about the agency's progress and any problems the agency encounters during this process. For example, in our 2006 report on VA's overall health care budget, we reported that although VA staff had closely monitored its budget execution and identified problems for fiscal years 2005 and 2006, VA did not report this information to Congress in a timely manner. Anticipating challenges in managing within its resources, VA had closely monitored the fiscal year 2005 budget as early as October 2004; however, Congress did not learn of the budget challenges facing VA until April 2005.

In addition, VA faces a challenge in providing information to Congress that would be useful for congressional oversight. For example, in 2006, we also found that VA's reporting of its budget execution progress and problems to Congress could have been more informative. In the appropriations act for fiscal year 2006, Congress included a requirement for VA to submit quarterly reports regarding the status of the medical programs budget during that fiscal year. In addition, the conference report accompanying the appropriations act directed VA to include waiting list performance measures, among other things. We found that VA did not include in its quarterly reports certain types of information that would have been useful for congressional oversight. For example, in its reports to Congress, VA used a patient workload measure that counted patients only once no matter how many times they used VA services within the fiscal year. This measure did not capture the difference between patients predominantly using low-cost services such as primary care outpatient visits and those using high-cost services such as acute inpatient hospital care. In contrast, VA provided in its reports to OMB other workload measures that provided a more complete picture of whether new patients were receiving low- or high-cost services. Some of those measures provided to OMB included a measure of one type of inpatient care—nursing home workload—and the number of outpatient visits.

In addition, in one of its quarterly reports to Congress, VA reported access measures for existing VA patients—the percentage of primary care and percentage of specialty care appointments scheduled within 30 days of the desired date—where VA was exceeding its performance goals. However, VA did not provide one access measure identified in the conference report: the time required for new patients to get their first appointment. Although not the same measure, a similar measure VA produced for other purposes showed the number of new patients waiting for their first appointment to be scheduled. This measure showed that the number of new patients waiting for their first appointment to be scheduled almost doubled from April 2005 to March 2006, indicating a potential problem in the first quarter of fiscal year 2006.
We recommended that VA improve its reporting of budget execution progress to Congress by incorporating measures of patient workload that capture the costliness of care and a measure of waiting times. These measures might help alert Congress to potential problems VA may face in managing within its budget in future years. VA implemented part of this recommendation in the quarterly report it submitted to Congress in May 2008, in which VA reported two measures related to waiting times. Although the inclusion of these measures in VA's quarterly reports can help facilitate congressional oversight, VA could provide additional information to inform Congress about the costliness of VA care.

Sound budget formulation, monitoring of budget execution, and the reporting of informative and timely information to Congress for oversight continue to be essential as VA addresses budget challenges we have identified in recent years. While the budget process inevitably involves imperfect information and uncertainty about future events, VA has the opportunity to improve the credibility of its budgeting process by continuing to address problems that we have identified in recent years. Doing so can increase the credibility and usefulness of information that VA provides to Congress and affected stakeholders on its annual budget plans and the progress it makes in spending appropriated funds as planned. This is particularly the case for long-term care services, where budget workload assumptions and cost projections, as highlighted by our work for several years, raise questions regarding the credibility and usefulness of projected spending estimates. In addition, our prior report on new VA mental health initiatives to address identified gaps in services may provide a cautionary lesson regarding the expansion of new VA health care programs more generally: the availability of funding for new health care initiatives does not in itself mean that these initiatives will be fully implemented within a given fiscal year—in part because new initiatives can bring challenges in hiring and training new staff—or that monitoring and tracking of such funding will be adequate to report the extent to which new initiatives are being implemented as planned.

Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions you or other members of the Subcommittee may have.

For more information regarding this testimony, please contact Randall B. Williamson at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition, James C. Musselwhite, Assistant Director; Deirdre Brown; Robin Burke; and Krister Friday made key contributions to this testimony.
The Department of Veterans Affairs (VA) estimates it will provide health care to 5.8 million patients with appropriations of $41.2 billion in fiscal year 2009. The President has proposed an increase in VA's health care budget for fiscal year 2010 to expand services for veterans. VA's patient population includes aging veterans who need services such as long-term care--including nursing home and noninstitutional care provided in veterans' homes or the community--and veterans returning from Afghanistan and Iraq. Each year, VA formulates its medical care budget, which involves developing estimates of spending for VA's health care services. VA is also responsible for budget execution--spending appropriations and monitoring their use. GAO was asked to discuss challenges related to VA's health care services budget formulation and execution. This statement focuses on (1) challenges VA faces in formulating its health care budget and (2) challenges VA faces in executing its health care budget. This testimony is based on three GAO reports: VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement (GAO-06-958, Sept. 2006); VA Health Care: Spending for Mental Health Strategic Plan Initiatives Was Substantially Less Than Planned (GAO-07-66, Nov. 2006); and VA Health Care: Long-Term Care Strategic Planning and Budgeting Need Improvement (GAO-09-145, Jan. 2009).

VA faces challenges formulating its health care budget each fiscal year. As noted in GAO's 2006 report on VA's overall health care budget, these include making realistic assumptions about the budgetary impact of policy changes, making accurate calculations, and obtaining sufficient data for useful budget projections. For example, GAO found that VA made unrealistic assumptions about how quickly it would realize savings from proposed changes in nursing home policy. While VA took steps to respond to GAO's 2006 recommendations about VA budgeting, recent GAO work found similar issues. In 2009, GAO reported on VA's long-term care budget--namely, on challenges in projecting the amount and cost of VA long-term care. GAO found that in its fiscal year 2009 budget justification, VA used assumptions about the cost of nursing home and noninstitutional care that appeared unrealistically low given recent VA experience and other indicators. VA said it would complete an action plan responding to GAO's 2009 recommendations by the end of March 2009.

VA also faces challenges executing its health care budget. These include spending and tracking funds for specific initiatives and providing timely and useful information to Congress on budget execution progress and problems. GAO's 2006 report on VA funding for new mental health initiatives found VA had difficulty spending and tracking funds for initiatives in VA's mental health strategic plan to expand services to address service gaps. The initiatives were to enhance VA's larger mental health program and were to be funded by $100 million in fiscal year 2005. Some VA medical centers did not spend all the funds they had received for the initiatives by the end of the fiscal year, partly due to the time it took to hire staff and renovate space for mental health programs. Also, VA did not track how funding allocated for the initiatives was spent. GAO's 2006 report on VA's overall health care budget found that VA monitored its health care budget execution and identified execution problems for fiscal years 2005 and 2006, but did not report the problems to Congress in a timely way.
GAO also found that VA's reporting on budget execution to Congress could have been more informative. VA has not fully implemented one of GAO's two recommendations for improving VA budget execution. Sound budget formulation, monitoring of budget execution, and the reporting of informative and timely information to Congress for oversight continue to be essential as VA addresses budget challenges GAO has identified. Budgeting involves imperfect information and uncertainty, but VA has the opportunity to improve the credibility of its budgeting by continuing to address identified problems. This is particularly true for long-term care, where for several years GAO work has highlighted concerns about workload assumptions and cost projections. By improving its budget process, VA can increase the credibility and usefulness of information it provides to Congress on its budget plans and progress in spending funds. GAO's prior work on new mental health initiatives may provide a cautionary lesson about expanding VA programs--namely, that funding availability does not always mean that new initiatives will be fully implemented in a given fiscal year or that funds will be adequately tracked.
As of September 30, 1993, the District of Columbia's three defined benefit pension plans for police officers and firefighters, teachers, and judges had a total of about 24,000 participants. During 1993, the District contributed a total of about $292.3 million to the plans and the federal government paid about $52.1 million.

The Congress created the three plans over a number of years beginning early in this century. Under the plans' enabling legislation, only the federal government paid into the plans and did so just for current annual retirement benefits (known as pay-as-you-go funding). The Congress did not authorize accumulating funds to meet the plans' normal costs—the amount of funds needed each year that, accumulated over time, would be sufficient to pay all retirement benefits of active plan participants when due. Effective with Home Rule in January 1975, the responsibility for making the pay-as-you-go payments was transferred to the District government. Because the plans' normal costs were not funded, the shortfall in funds needed to pay future retirement benefits—the plans' unfunded liability—increased each year.

The Congress partly addressed the plans' unfunded liability with the District of Columbia Retirement Reform Act of 1979, which changed the District's payments to the plans to a modified pay-as-you-go basis and authorized annual federal payments to the plans of about $52.1 million. The contribution requirements in the reform act, however, did not provide for amortizing (paying off over a number of years) the plans' unfunded liability.

In November 1992, we reported that the plans' unfunded liability had grown to about $5 billion and that they were not as well funded as other public plans. Our report also noted that the District faced an increasing demand on revenues from the three plans. We reported that by the year 2005 its contributions could grow to about 15 percent of revenues ($640.2 million), compared with about 8 percent ($234.9 million) in 1991. Similarly, as shown in figure 1.1, without changes to the current law the District's contributions to the three plans as a percentage of payroll will increase from 54 percent to a high of 71 percent in 2005, when federal contributions cease.

Since our report, there has been much discussion about how to address these plans' continued underfunding. H.R. 3728, in conjunction with D.C. Act 10-239, has been proposed as one means to do so. The House bill and the District's act would eliminate the unfunded liability in the year 2035, mainly by increasing the obligations of the federal government, active plan participants, and retirees, and by placing the District's contributions on an actuarial basis. (See chapter 3 for a full discussion of these provisions.)

Concern for the plans' underfunding was heightened by the District's recent cash flow difficulties. These difficulties caused the District to defer its contributions to the funds for the second and third quarters of fiscal year 1994 until fiscal year 1995. This action led to a lawsuit by the District of Columbia Retirement Board (DCRB) that required the contributions to be made. We reported in June 1994 that the District is faced with both unresolved long-term financial issues and continued short-term financial crises, such as a significant and continuing decline in its cash position. Placing the plans' funding on an actuarial basis and eliminating their unfunded liability would relieve the District of a significant financial burden. Such action would also help ensure that sufficient funds are available to pay future retirement benefits.
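Why an unamortized liability compounds can be seen from a stylized recursion (our illustration, using generic actuarial notation rather than anything in the reform act):

\[
U_{t+1} = U_t(1+i) + N_t - C_t ,
\]

where \(U_t\) is the unfunded liability at the start of year \(t\), \(N_t\) the normal cost accruing during the year, \(C_t\) the contribution, and \(i\) the assumed investment return. Contributing exactly the normal cost plus interest on the liability (\(C_t = N_t + iU_t\)) holds \(U_t\) constant; contributing anything less, as under pay-as-you-go funding, lets the liability grow, which is how a roughly $2 billion shortfall in 1979 became about $5 billion by 1993.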
To fully evaluate H.R. 3728, the Ranking Minority Member of the House Committee on the District of Columbia requested us to provide certain information related to the three plans and their unfunded liability. Specifically, he asked us to provide (1) the history and current status of the plans' unfunded pension liability and the number of plan participants before Home Rule, including a comparison of the plans' unfunded liability with that of other state and local plans, and (2) an analysis of the District's funding formula under the proposed legislation and alternative federal funding methods.

To develop the history of the plans' unfunded liability, we reviewed the legislative history of the District of Columbia Retirement Reform Act of 1979, which established the pension funds for the three plans. We also reviewed the reports of commissions that had been established at various times by the Congress and the District government to evaluate the District's fiscal activities, including reviews of the plans' pension funds. In addition, we held discussions with and obtained information from District government and DCRB staff and officials, such as the number of plan participants before Home Rule.

To compare the three plans' unfunded liability with that of other state and local plans, we obtained survey data published in March 1993 by the Public Pension Coordinating Council. We used these data to update the comparison of the funding status of the three District plans with 24 comparable defined benefit state and local governmental pension plans in our November 1992 report.

To analyze H.R. 3728 and the companion District act, we used, in part, a study of the bill that was done for DCRB by Milliman & Robertson, Inc., its actuarial consultants. In addition, we reviewed the actuarial model developed by the firm and used it to determine the potential effects of alternative funding methods for eliminating the three plans' unfunded liability. This model includes typical actuarial assumptions about rates of inflation, wage increases, and investment earnings. Our work was performed from January through October 1994 in accordance with generally accepted government auditing standards. We did not obtain agency comments on this report. However, we discussed the history and status of the three pension plans with District officials to ensure that the report's descriptions were accurate and complete.

When the Congress created the District's plans for police officers and firefighters, teachers, and judges, it provided for funding them on a pay-as-you-go basis. Beginning in the mid-1970s, congressional committees considered various proposals to fund the plans on an actuarial basis and to eliminate their unfunded liability. In 1978, the Congress passed one such proposal, but it was vetoed because the federal funding obligation was deemed too high. In 1979, compromise legislation was enacted that provided for lower federal funding and modified pay-as-you-go payments for the District. Because this legislation did not provide for eliminating the plans' unfunded liability, the liability had increased to $5 billion by 1993, and the plans continued to be not as well funded as other comparable public plans.

The Congress created defined benefit pension plans for District of Columbia police officers and firefighters, teachers, and judges at different times: police officers and firefighters in 1916; teachers in 1920; and judges in 1970.
These plans were funded on a pay-as-you-go basis, which meant that they received only enough money to pay current annual retirement benefits but did not accumulate any funds with which to meet the constantly accruing future pension liabilities of their participants. In 1946, however, the funding of the teachers' plan was changed to an actuarial basis so that the District's contribution covered the normal cost of the plan as well as amortizing the accrued unfunded liability over a 20-year period. Subsequently, in 1968, the District's commissioners requested and were granted permission by the Congress to fund only the normal cost of the plan each year because of the need to use revenues for other purposes. This change was enacted in 1970 by Public Law 91-263, which put the fund on a modified pay-as-you-go basis, covering only the normal cost each year. This law also froze the fund at its June 20, 1969, balance of $61.8 million and mandated that it remain at that level or the amount of the employees' equity, whichever was greater.

Congressional concern with District operations led to the establishment of the Commission on the Organization of the Government of the District of Columbia (Nelsen Commission) in September 1970. The commission's charter was to analyze the District government's operations with the goal of promoting increased economy and efficiency. Accordingly, the scope of the commission's review included the District's pension plans for police officers and firefighters and teachers (the judges plan was not within its charter). The commission's August 1972 report recommended the creation of a separate pension fund for police officers and firefighters that would invest moneys not required for current operations and have periodic Department of the Treasury actuarial valuations. In addition, the commission recommended actions to reverse the increase in the unfunded liabilities in the police officers' and firefighters' and teachers' plans and to provide a means for financing any liberalization of their benefits that might be approved in the future.

In May 1974, in response to the Nelsen Commission report, the Chairman of the House Subcommittee on Revenue and Financial Affairs, Committee on the District of Columbia, introduced H.R. 15139, intended to establish and finance a pension fund for police officers and firefighters. There was opposition from the Office of Management and Budget (OMB), and the bill died in subcommittee. The Congress took no further action on the pension funding issue until March 1976, when legislation was considered by the House Subcommittee on Fiscal Affairs, Committee on the District of Columbia. An objective of the legislation was to establish an actuarially sound basis for financing retirement benefits in the plans for police officers and firefighters, teachers, and judges. H.R. 14960 was reported out by the full Committee in August 1976 but was not considered by the House because of opposition by OMB.

On April 6, 1977, the House Subcommittee on Fiscal Affairs, Committee on the District of Columbia, reported out H.R. 2465. Subsequently, the bill was reported out of the Committee on April 26, 1977; introduced in the full House as H.R. 6536; and passed in September 1977. This legislation authorized a total federal contribution of about $769 million over 25 years, starting at about $48 million in 1978 and declining to $2 million in 2003, to help finance the liabilities for retirement benefits incurred before Home Rule.
Later that year, in November 1977, the Senate considered S. 2316, which differed somewhat from H.R. 6536. Among other things, the Senate bill required annual federal payments of $80 million for 25 years and included tougher standards for disability benefits. The federal payments were intended to amortize the unfunded liability of about $1.05 billion for retirements that had occurred before Home Rule; this liability was deemed to be the federal share of the total unfunded liability of about $2.09 billion that had been incurred up to that time. The remaining balance of $1.04 billion, which was attributable to nonretirees, was deemed to be the District's share of the total unfunded liability. (Subsequently, the Department of the Treasury calculated that the total unfunded liability was about $2.7 billion—see p. 19.)

However, the formula in the Senate bill for computing the District's annual contributions did not provide for amortizing the District's share of the unfunded liability. While the Committee report on the bill recognized that actuarially based funding required the liability to be amortized, the report also stated that in the long run full funding of the District's share was fiscally impossible given its strained financial circumstances and competing claims on revenues. However, the Committee believed that the District could afford to pay—for an initial interim 25-year period, as the federal share was being amortized—the lesser of (1) the net normal cost plus interest on its share of the unfunded liability and (2) the net pay-as-you-go cost plus an amount that, paid annually to 2003, would allow the District's share of the unfunded liability to increase by no more than the rate of inflation. Thereafter, the District would pay the net normal cost plus interest on the unfunded liability. The Senate passed H.R. 6536, which had been amended to incorporate S. 2316.

In October 1978, the House and Senate conference committee reported out H.R. 6536, which authorized a smaller federal contribution of $65 million annually over 25 years. In November 1978, then-President Carter vetoed H.R. 6536. His veto message articulated two principal arguments: (1) the federal contribution authorized by the Congress overstated the appropriate federal liability, largely because the existing liability was due to abuses of the disability retirement statutes before Home Rule; and (2) the amount authorized ignored the continuing federal contribution for thousands of District employees covered by the federal Civil Service Retirement System (CSRS).

The Carter administration stated that it was willing to assume 60 percent of the cost of moving the affected District plans to an actuarially sound system. Under this proposal, the federal government would have contributed $462 million over 25 years. However, the veto message noted that with H.R. 6536 the Congress supported a more costly funding method that obligated the federal government to pay about $1.6 billion over the same time period.

Following the veto, the Congress addressed the pension plans' funding issue again in 1979. The House and Senate agreed to S. 1037, which represented a compromise between the Senate's provisions for fully amortizing the federal share and the House's partial amortization provisions.
The Senate bill provided for funds to cover the unfunded liability for all retirements—service and disability—before Home Rule; the House bill provided funds for 75 percent of the unfunded liability for service retirements and 33-1/3 percent of the unfunded liability for disability retirements before Home Rule. In November 1979, S. 1037, the District of Columbia Retirement Reform Act of 1979, was signed into law. The act notes that the retirement benefits—which Congress had authorized for the police officers, firefighters, teachers, and judges of the District of Columbia—had not been financed on an actuarially sound basis. Neither federal payments to the District nor District payments for pensions had taken into account the long-term financial requirements of these retirement plans. Consequently, the act established for the first time separate retirement funds for (1) police officers and firefighters, (2) teachers, and (3) judges. The act also established a retirement board to manage the funds, required that the funds be managed on an actuarially sound basis, and provided federal contributions to these funds to partially finance the liability for retirement benefits incurred before January 2, 1975, the effective date of Home Rule. At that time, the three plans had a total of 14,095 active participants and 7,657 retirees (see table 2.1).

The act committed the federal government to pay $52.07 million annually beginning in fiscal year 1980 and continuing through 2004. This amount represented a compromise between the Congress and the administration in defining the appropriate federal share of the plans' unfunded liability. Under the act, the federal share was 80 percent of the service retirement unfunded liability and 33-1/3 percent of the disability retirement unfunded liability, as of October 1, 1979, for District employees who had retired as of January 2, 1975, the effective date of Home Rule. The present value of the total federal government obligation for the 25-year period was then $646 million, an amount anticipated to be sufficient to pay off the revised federal share of the unfunded liability by the year 2005.

The 1979 reform act's provisions reflected the earlier congressional beliefs that (1) in the long term the District's financial condition would not enable it to pay off its share of the unfunded liability and (2) in the near future the District should not be burdened with having to pay the net normal cost plus interest on its share of the unfunded liability. Therefore, an alternate method was adopted for the 25 years before 2005, providing for substantially lower contributions. Accordingly, the annual District contribution to the pension funds, as determined by DCRB based upon a formula in the act, consists of the sum of three items (sketched in code after this list):

- The lesser of (1) the net pay-as-you-go cost or (2) the net normal cost plus interest on the unfunded actuarial liability.

- An amount necessary to amortize over 10 years the difference between (1) the actuarially projected unfunded liability in the year 2004 if no such amortization payments were made and (2) the actuarially projected liability in the year 2004 if the 1979 unfunded liability grew by the anticipated rate of inflation during the interim. However, any additional amount required under this provision may not exceed 10 percent of the net pay-as-you-go cost for the police officers' and firefighters' plan or 30 percent for the teachers' or judges' plans.

- An amount necessary to amortize over 25 years any liability due to plan changes.
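Read as pseudocode, the statutory formula above might look like the sketch below. This is a simplified illustration: the function and variable names are ours, the level-payment amortization is a generic stand-in for the act's actuarial mechanics, and the 2004 "target gap" is assumed to be computed elsewhere.

```python
# Stylized sketch of the District's annual contribution under the 1979 reform
# act, as described above. Names and the amortization helper are illustrative
# assumptions, not the statute's precise actuarial method.

def amortization_payment(amount, years, rate):
    """Level end-of-year payment that amortizes `amount` over `years` at `rate`."""
    return amount * rate / (1 - (1 + rate) ** -years)

def district_contribution(pay_as_you_go, normal_cost, ual, rate,
                          target_gap, plan_change_liability, cap_share):
    # Item 1: lesser of the net pay-as-you-go cost, or the net normal cost
    # plus interest on the unfunded actuarial liability.
    base = min(pay_as_you_go, normal_cost + rate * ual)

    # Item 2: 10-year amortization of the projected excess of the 2004
    # liability over an inflation-indexed path, capped at a share of the
    # pay-as-you-go cost (0.10 for police/fire, 0.30 for teachers and judges).
    extra = min(amortization_payment(target_gap, 10, rate),
                cap_share * pay_as_you_go)

    # Item 3: 25-year amortization of any liability due to plan changes.
    changes = amortization_payment(plan_change_liability, 25, rate)

    return base + extra + changes
```

The cap in the second item is what prevented any real paydown: it limited the District's extra payments to a fraction of the pay-as-you-go cost, allowing its share of the liability to keep growing at roughly the rate of inflation.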
After the federal contribution ceases, the reform act provides that beginning with fiscal year 2005 the District's contribution to the three funds will be an amount equal to their net normal cost plus interest on their unfunded liability. On the effective date of the reform act in November 1979, the District's share of the unfunded liability was about $2 billion, based on Department of the Treasury calculations:

Present value of total unfunded liability            $2,676,200,000
Less: present value of future federal payments          646,400,000
Present value of the District's unfunded liability   $2,029,800,000

In 1989, the District's concern with its financial condition resulted in the Mayor appointing an independent commission charged with developing a fiscal strategy for fiscal years 1992-96. As part of its charter, the commission reviewed the pension funds for police officers and firefighters, teachers, and judges. The commission's 1990 report noted that while the reform act's funding formula did not permit new unfunded liabilities to accrue, it did permit the existing liability to grow. The report also pointed out that, under the District's funding formula, in 2005 the unfunded liability would be $8 billion and that the District's required contribution would be $795 million—about 85 percent of the payroll for the three plans. Accordingly, the commission made the following recommendations:

- Adoption of a funding policy that would include annual funding of the normal cost, amortization of the unfunded liability as a level percentage of payroll over 45 years, and an increase in the investment return assumption from 7 to 8 percent per year.

- Continuation of the annual federal contributions of $52.07 million per year for 49 (instead of 14) more years, with an annual 5-percent increase in the amount of the payment—the assumed rate of inflation used in determining pension costs.

- Reduction of 1 percent in the automatic cost-of-living increases for retirees.

Our November 1992 report echoed the commission's observations about the unfunded liability for the three plans. We stated that the effect of the reform act was to allow the initial $2.0 billion unfunded liability to increase to about $4.9 billion in 1993, due mostly to interest accruing on it. Our report noted that because the reform act specified limitations on the level of amortization contributions the District could make, no amortization of the unfunded liability was possible. We also pointed out that in 2005 the District's annual contribution could represent about 15 percent of its revenues, compared with about 8 percent in 1991, and that the unfunded liability, which could be as high as $7.7 billion, would remain constant beginning in that year.

The effect of the funding formula in the 1979 reform act has been to limit the funded status of the three plans. In our November 1992 report, we pointed out that the three District plans were not funded as completely as other comparable state and local governmental plans. In updating our data, we found that this continues to be the case for the 24 plans. Of the three District plans, the police officers' and firefighters' plan has the lowest funding level compared with all the other plans, while the plans for teachers and judges are a little better funded but still at lower levels than comparable plans. Figures 2.1 through 2.3 compare the funded status of the three District plans with the same public plans that were included in our earlier report. (See app. III for a complete list of the plans.)
In congressional deliberations leading up to the 1979 reform act, the appropriate federal responsibility for the three plans' unfunded liability as of the effective date of Home Rule was considered to be the portion attributable to those who had already retired. However, to ensure presidential approval of the reform act, the Congress agreed to fund less than the full amount of these retirees' share: 80 percent of service retirements and 33-1/3 percent of disability retirements. It was anticipated that the authorized annual federal payments of $52.07 million would amortize this share by the year 2005.

Congressional deliberators recognized the need to amortize the District's share of the plans' unfunded liability as of the effective date of Home Rule. However, they believed that the District's financial resources (1) would never enable its share to be amortized and (2) would eventually enable it to pay the annual net normal cost and interest on its share, and (3) that the District should not be overly burdened with paying the latter amounts during the 25-year period in which the federal share was being amortized. Accordingly, the formula for calculating the District's annual contribution was devised to limit its payments to amounts that essentially allow its share of the unfunded liability to increase with the rate of inflation to the year 2005 and to remain constant after that time. In that year, the unfunded liability could be about $6.1 billion and the District's contribution could be about 15 percent of its revenues, compared with about 8 percent in 1993—a significant financial burden.

The effect of the reform act's funding formula has been to limit the three plans' funded status compared with other public plans. Given the District's current financial condition, the congressional concerns about the District's financial capability appear to have been appropriate. Unless the District's financial condition improves significantly, the District will not likely be able to eliminate the plans' unfunded liability without federal financial assistance.

The District government deliberated the issue of the three plans' unfunded liability and enacted legislation to eliminate it. The District's act, however, will not take effect until companion federal legislation is enacted. Without such a federal law, the plans' unfunded liability will continue to grow and the District's annual contributions will consume an increasing portion of its revenues.

The three plans' unfunded liability would be eliminated under proposed companion legislation that was introduced in the District of Columbia Council in December 1993 (Council Bill 10-515) and in the House of Representatives in January 1994 (H.R. 3728). Both bills contained the same provisions, except for District contribution requirements that appeared only in the Council's bill. Both bills included provisions for increasing the federal government's and employees' obligations and placing the District's contributions on an actuarial basis. The District's bill was enacted into law on May 4, 1994, as D.C. Act 10-239, the Full Funding of Pension Liability Retirement Reform Amendment Act of 1994, but it will not take effect until H.R. 3728 or comparable companion federal legislation is enacted. Thus, the House bill is a companion to the District's law and should be considered in conjunction with it. A study of H.R. 3728 conducted by an actuarial consulting firm for DCRB concluded that it would effectively eliminate the unfunded liability for the three plans in the year 2035.
This would be accomplished by placing additional obligations on the federal government and on active and retired employees and by putting the District's contributions on an actuarial basis, while also mandating a minimum annual District payment. The basic approach is to stabilize the District's contributions at 45 percent of payroll through the year 2035, as shown in figure 3.1. At 45 percent of payroll, the annual contributions would range from $403.5 million in the year 2005 to $1.7 billion in the year 2035. Maintaining pension contributions as a level percentage of payroll is the most common funding method used by public sector pension plans.

The federal contribution to the plans would significantly increase under H.R. 3728. Under current law, the annual federal payments of $52.1 million, which have a present value of about $392 million, cease as of 2005. The bill proposes increasing the federal payment by 5 percent each year, beginning with fiscal year 1996, and extending it for 30 additional years, from 2005 through 2035. The federal payment would increase substantially in the latter part of the 40-year period, rising to about $367 million in the 40th year (see fig. 3.2). Overall, the present value of the total federal obligation would be increased by about $1.1 billion. (See app. I.)

The obligations of the plans' active participants would increase, and retirees' benefits would decrease. All three plans' active participants would be required to contribute an additional 1 percent of pay: police officers', firefighters', and teachers' contributions would rise from 7 to 8 percent, and judges' would go from 3.5 to 4.5 percent. In addition, retirees' cost-of-living adjustments would be reduced from two to one each year. Also, police officers and firefighters who retired before February 15, 1980, would receive cost-of-living adjustments based on the consumer price index rather than on the active participants' pay raises.

Finally, H.R. 3728 requires several changes in the District's responsibilities. In particular, the formula for determining the District's payment would be changed to one that is actuarially based; this approach adjusts the District's contributions to a level percentage of payroll and is the one most commonly used by public sector plans. Under this formula, the District's contribution would be stable as a level percentage of payroll and would consist of several components: (1) the plans' net normal cost; (2) the amortization of their unfunded liability as of October 1, 1995, over 40 years; and (3) the amortization of actuarial gains and losses as well as benefit increases over 15 and 25 years, respectively. However, the bill specifically provides that the District's annual contribution must be at least $295.5 million, the amount of its certified contribution for 1995. Using this approach, the District's contributions would be slightly lower than current costs in the first few years and then increase in step with payroll; as a percentage of payroll, the contribution would gradually fall from the current 53.8 percent to 44.8 percent after 2005. The District's contributions from 1996 through 2020 would be less than current law requires and would be greater thereafter (see fig. 3.3). The present value of the District's contributions under current law through 2035 is about $8.2 billion; it decreases to about $7.0 billion under the bill.
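For reference, the level-percentage-of-payroll method that H.R. 3728 adopts can be written in a stylized form (our notation, not the bill's): if payroll grows at rate \(g\), so that \(P_t = P_0(1+g)^t\), and the discount rate is \(i\), the constant payroll fraction \(k\) that amortizes an initial unfunded liability \(U_0\) over \(n\) years satisfies

\[
U_0 = \sum_{t=1}^{n} \frac{k\,P_0(1+g)^t}{(1+i)^t}
\qquad\Longrightarrow\qquad
k = \frac{U_0}{P_0 \sum_{t=1}^{n} \bigl(\tfrac{1+g}{1+i}\bigr)^{t}} .
\]

Because payments rise with payroll, early payments are smaller and later payments larger than under a level-dollar schedule, consistent with the District's contributions starting slightly below current costs and then growing in step with payroll.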
Our analysis shows that this provision results in the District paying a total of about $58 million more over those years than would be required actuarially. As table 3.1 shows, for example, the 1996 payment is $32.7 million more than the actuarially determined amount.

The plans' current funding method and their unfunded liability represent a significant and increasing financial burden to the District. For this reason, we support timely action on eliminating the unfunded liability and placing the plans' funding on a sound actuarial basis. H.R. 3728 sets forth one approach that would resolve these matters along the foregoing general lines. However, we are concerned with the proposed federal funding method: the annual 5-percent increase is inequitable to future generations of taxpayers, particularly in the latter part of the 40-year period, because it requires them to help eliminate a greater share of a liability incurred by much earlier generations. A more equitable federal funding method, which shifts less of the burden to the future, would be a constant annual payment, as under current law (see further discussion in ch. 4). We also note that the District's payments would be about $58 million higher for the first 3 years under D.C. Act 10-239 than actuarially required.

Under H.R. 3728, federal contributions to the three District plans would increase by 5 percent annually, going from $54.7 million in 1996 to $366.6 million in 2035. In lieu of these increasing payments, we analyzed the effect of various constant annual federal payments. Our analysis shows that total federal obligations would be less than under H.R. 3728 with level annual payments ranging from $52.1 million to $92.1 million. The federal obligation would be about one-half of the amount under the bill if the current annual payment of $52.1 million were continued through the year 2035. Somewhat smaller federal savings would be attained with higher constant annual payments of up to $92.1 million, but in these circumstances the District's overall burden would increase. However, an annual federal payment of $102.1 million would have about the same effect on District contributions as the bill.

In lieu of the incremental federal payments proposed by H.R. 3728, we analyzed the effect of constant federal payments of various amounts through the year 2035. Our analysis shows that the greatest federal savings, about one-half of the amount that would be paid under H.R. 3728, would be realized by extending the current federal payment of $52.1 million. (These data are summarized in table 4.1 below and detailed in app. II.) This change would also increase the District's contributions by about 10 percent in today's value (present value). Somewhat smaller federal savings relative to the bill would be obtained with annual payments of $72.1 million and $92.1 million.

In terms of the District's contributions as a percentage of payroll, the changes are less dramatic (see table 4.2). The effect of a constant federal payment of $52.1 million would be to increase the District's contributions as a percentage of payroll by about 5 percentage points from the 45 percent under H.R. 3728. Given the District's current fiscal situation, however, a question arises as to the amounts that the District could realistically be expected to contribute in future years.
For example, the 5-percentage point increase in the District's percentage of payroll in 2005, with a constant $52.1 million federal payment, equates to an additional District contribution of about $45 million that year, for a total of $448.2 million. In contrast, the comparable increase under a constant federal payment of $92.1 million amounts to a much more modest $13.1 million. (See app. II.) However, an annual federal payment of $102.1 million, with a present value of about $1.46 billion, would also stabilize the District's contributions at about 45 percent of payroll.

H.R. 3728 proposes a substantial increase in the federal obligation to the three District pension plans to help eliminate their unfunded liability, extending and escalating the current annual federal payment of $52.1 million through the year 2035. This approach, however, inequitably burdens future taxpayers by requiring them to help eliminate a greater share of a liability incurred by much earlier generations. Instead, the unfunded liability could be eliminated with annual federal payments of a constant amount. Constant annual federal payments of about $102.1 million through 2035 would achieve the same results as the bill in stabilizing the District's contributions at about 45 percent of payroll from the year 2005 through 2035. Also, such payments would cost the federal government $40 million less overall than the total federal payments under H.R. 3728.

If the Congress wishes to change the law to increase federal payments to the three District pension plans, it should consider authorizing a constant annual payment rather than the escalating payments provided for in H.R. 3728. A constant annual approach would be more equitable because it would avoid shifting to future taxpayers a disproportionate share of the burden of financing the three plans. In addition, if the Congress concludes that the federal share should be increased in total by the amount authorized in H.R. 3728, calculated at about $1.1 billion in value today, the appropriate constant annual federal payment would be $102.1 million.
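The two federal payment streams weighed in this chapter can be compared directly from the figures cited above. In the sketch below, the payment amounts follow the bill as described ($54.7 million in 1996, rising 5 percent a year through 2035) and the level alternatives we analyzed; the 7 percent discount rate is an assumption made only for illustration, so the present values are approximate and will not exactly match the report's actuarially computed figures:

    # Present-value comparison of escalating versus level federal payments.
    # Payment amounts come from the figures above; the discount rate is an
    # assumed value, so the results are illustrative only.
    def present_value(stream, rate=0.07):
        return sum(amount / (1 + rate) ** t
                   for t, amount in enumerate(stream, start=1))

    escalating = [54.7 * 1.05 ** k for k in range(40)]   # 1996 through 2035
    print("2035 payment:", round(escalating[-1], 1))     # about 367

    print("bill:", round(present_value(escalating), 1))
    for level in (52.1, 72.1, 92.1, 102.1):
        print(level, round(present_value([level] * 40), 1))

Under this assumed rate, the level $52.1 million stream comes to roughly half the present value of the escalating stream, consistent with the relationship described above.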
Pursuant to a congressional request, GAO reviewed the District of Columbia's pension plans for certain employees, focusing on: (1) the history and status of the plans' unfunded pension liability and the number of plan participants prior to Home Rule; (2) a comparison of the plans' unfunded liability with other state and local plans; and (3) the District's funding formula under the proposed legislation and alternative federal funding methods. GAO found that: (1) the District's pension plans were originally funded on a pay-as-you-go basis with no accrual of monies for future liabilities; (2) even when the plans were put on an actuarial basis, the District's contributions to the pension funds were lower than needed to eliminate the unfunded liabilities; (3) the District's pension plans are not as well funded as some comparable state and local plans; (4) the proposed legislation would eliminate the plans' unfunded liability in 2035 by increasing federal payments, decreasing benefits and cost-of-living adjustments, and placing the District's contributions on an actuarial basis; (5) under the proposed funding method, the federal government would assume about $1 billion of the District's obligations, most of which would be paid in future budget years because of the 5-percent annual increase in the federal payments; (6) the District's contributions for the first 3 years would be at the required minimum and would be higher than the actuarially determined amounts; (7) a constant federal payment of about $102.1 million would shift less of the contribution burden to future federal budgets and taxpayers and would help eliminate the unfunded liability; and (8) options to lower annual federal payments would eliminate the unfunded liability but increase the District's contributions.
Since the Civilian Health and Medical Program of the Uniformed Services (CHAMPUS) began in 1956 and was expanded in 1966, it has functioned much like a fee-for-service insurance program. Beneficiaries have been free to select providers and required to pay deductibles and copayments, but, unlike with most insurance programs, they have not been required to pay premiums. CHAMPUS has approximately 5.7 million beneficiaries who, as part of the larger Military Health Services System (MHSS), are also eligible for care in the MHSS' 127 hospitals and 500 clinics worldwide. Of the approximately $15.2 billion budgeted for the MHSS in fiscal year 1995, the CHAMPUS share is about $3.6 billion, or about 24 percent.

Because of escalating costs, claims paperwork demands, and general beneficiary dissatisfaction, DOD initiated, with congressional authority, a series of demonstration projects in the late 1980s designed to more effectively contain costs and improve services to beneficiaries. One of these projects, the CHAMPUS Reform Initiative (CRI), a forerunner of TRICARE managed care support contracts, was one of the first to introduce managed care features to CHAMPUS. Included as part of a triple-option health benefit were a health maintenance organization choice, a preferred provider choice, and the existing standard CHAMPUS choice. The managed care features introduced included enrollment, utilization management, assistance in referral to the most cost-effective providers, and reduced paperwork.

The first CRI contract, awarded to Foundation Health Corporation, covered California and Hawaii. Foundation delivered services under this contract between August 1988 and January 1994. Before the contract expired, DOD began a new competitively bid procurement for California and Hawaii that resulted in DOD's awarding a 5-1/2 year (1 half-year plus 5 option years), $3.5 billion contract to Aetna Government Health Plans, Inc., in July 1993. Because a bid protest was sustained on this procurement, it was recompeted, although Aetna's contract was allowed to proceed until a new one was awarded.

In late 1993, in response to requirements in DOD's Appropriation Act for Fiscal Year 1994, the Department announced plans for restructuring the entire MHSS program, including CHAMPUS. The restructured program, known as TRICARE, is to be completely implemented by May 1997. To implement and administer TRICARE, DOD reorganized the military delivery system into 12 new, joint-Service regions. DOD also created a new administrative organization featuring lead agents in each region to coordinate among the three Services and monitor health care delivery. For medical care the military medical facilities cannot provide, seven managed care support contracts will be awarded to civilian health care companies covering DOD's 12 health care regions. These contracts retain the fixed-price, at-risk, triple-option health benefit that the CRI contracts featured. An important difference, however, is the addition of lead agent requirements, that is, tasks to be performed by the contractor that are specific to military medical facilities in the region. Figure 1 shows the regions covered by the seven contracts.

Since the December 1993 decision sustaining the protest of the California/Hawaii (regions 9, 10, and 12) contract award, three managed care support contracts have been awarded, and all have been protested.
Also, protests were filed on the solicitations for the California/Hawaii recompetition and for Washington/Oregon (region 11). GAO has denied the protests on these solicitations and on the Washington/Oregon contract award and has yet to decide the other two award protests. Information on the procurements awarded to date appears in table 1. For more information on the transition to managed care support contracts and the offerors submitting proposals for these contracts, see appendixes II and III, respectively.

The Office of CHAMPUS, an organization within the Office of the Assistant Secretary of Defense (Health Affairs), administers the procurements. The procurement process involves the issuance of a request for proposal (RFP) that has the detailed specifications and instructions offerors are to follow in responding. Offerors are required to submit both a technical and a business (price) proposal. Upon receipt of the offerors' proposals, a Source Selection Evaluation Board (SSEB) evaluates the technical proposals according to detailed evaluation criteria, and a Business Proposal Evaluation Team (BPET) evaluates the proposed prices. A Source Selection Advisory Council (SSAC) reviews the work of the two boards and consults with them. Following discussions with offerors about weaknesses and deficiencies in their proposals, DOD requests offerors to submit "best and final offers." The two boards again evaluate changes to proposals, complete final scoring, and prepare reports on the evaluations. A senior executive designated as the source selection authority uses these reports in selecting the winning offeror. As part of the evaluation process, evaluators are asked to identify ways to improve the process. For a complete description of the procurement process and the tasks performed, see appendix IV.

GAO sustained the protest of the July 1993 California/Hawaii award primarily because DOD failed to evaluate offerors' proposals according to the RFP criteria. The RFP provided that each offeror's proposed approach to achieving its health care cost estimates would be individually evaluated. However, in evaluating the proposals, DOD evaluators rejected the contractors' cost estimates and assigned the same government cost estimates to all offerors' proposals. By so doing, the BPET did not consider offerors' individual cost-containment approaches, such as their utilization management approaches, upon which the success of managed care contracting to contain costs largely rests. In effect, the evaluators' action made this part of the evaluation methodology meaningless. Also, the process did not allow the price evaluators to discuss with the technical evaluators possible inconsistencies between the price and technical proposals or otherwise discuss the technical information that supported the price estimates. Such discussions might have highlighted the need to analyze offerors' individual cost-containment approaches.

During the protest of the Washington/Oregon award, the protesting offeror challenged nearly a dozen of DOD's technical ratings of its proposal. In its decision, GAO recognized that DOD made mathematical errors that affected scoring, but these errors were not limited to the protesting offeror, and correcting them did not affect the procurement's final outcome.

DOD has made several changes that should improve future procurements.
Major changes due to the protest experiences include (1) revising the price evaluation methodology and providing offerors more complete RFP information on how the methodology will be used in evaluating bid prices, (2) adding requirements for discussions between the price and technical evaluation boards, and (3) revising both the requirements and the technical evaluation criteria for utilization management. Also, DOD is developing a computer spreadsheet to automate the technical scoring process and thus address mistakes made during the Washington/Oregon evaluation process. DOD's other changes include providing more training for proposal evaluators, colocating the technical evaluation boards, and providing more feedback to offerors on their proposals' weaknesses. A final change requires that DOD approve the bid price evaluation methodology before evaluating prices.

DOD significantly changed its methodology for evaluating the health care cost portion of the offerors' business proposals. While details of the new methodology are procurement sensitive and cannot be disclosed, the changes essentially involve evaluating the reasonableness of the offerors' estimates for cost factors over which the contractor has some control, such as utilization management and provider discounts. The evaluation includes comparing the offerors' cost estimates with the government's estimates and considering the offerors' justification and documentation. Also, DOD rewrote portions of the RFP to provide more explicit information to offerors so they can better understand the new evaluation methodology and the factors to be considered in evaluating prices. This more complete guidance should make it easier for offerors to furnish the information DOD needs to evaluate their proposals.

DOD instituted a process requiring discussions between the technical and the price evaluators. Previously, discussions between the two boards were prohibited, and knowledge one group possessed about offerors' proposals was not shared with the other. Under the new procedures, the SSEB briefs the BPET and responds to BPET questions on offerors' proposed technical approaches. This should enable the BPET to better judge whether offerors can achieve the health care costs they have bid. Conversely, the SSEB can request information from the BPET to assist in its technical evaluation.

DOD significantly revised its RFP utilization management requirements and the utilization management criteria used in evaluating offerors' proposals. DOD incorporated these revisions in the solicitations for the then ongoing Washington/Oregon procurement as well as the post-protest recompetition of the California/Hawaii procurement. The revised utilization management requirements place additional responsibilities on the contractor and establish specific utilization management procedures. Also, while the previous evaluation criteria basically involved checking whether offerors' proposed approaches addressed requirements, the revised criteria require evaluators to judge the effectiveness of the cost-containment approaches.

Among other DOD improvements is the provision of more training for evaluators and the team leaders who oversee the evaluation of specific tasks. Training for the California/Hawaii evaluators had been limited to about one-half day, but training on more recent procurements has been increased to nearly 1 week.
The new training includes more detailed information on the (1) procurement cycle, (2) technical and price evaluation boards, (3) evaluation of proposals, and (4) use of personal computers to record evaluation information.

Another change involves colocating at Aurora, Colorado, the SSEB staff, who had been split between Aurora and Rosslyn, Virginia. SSEB members evaluating managed care tasks were located in Rosslyn, and those evaluating claims processing and related tasks were in Aurora. The dual locations caused the board chair to travel frequently to the Rosslyn location to review work and provide guidance to board members there. DOD also lost time waiting for information to arrive from the Rosslyn site and retyping and reformatting the information Rosslyn submitted. More significantly, some rating procedures differed between the two locations.

A further change is that DOD, along with providing offerors the questions evaluators raise on their proposals, now also provides information on proposal weaknesses. As a result, offerors are better assured that they are addressing the specific concerns that prompted the questions. Offerors told us, moreover, that DOD is now giving them more information about their proposals, responding more quickly to their questions, and providing more complete information after initial evaluations and in debriefings following contract award.

A final procedural change is that DOD now formally approves the price evaluation methodology prepared by a contractor before the proposal evaluation begins. On the California/Hawaii procurement awarded to Aetna, DOD had not approved the evaluation methodology before the proposals were evaluated. The methodology had been prepared by a consultant who submitted it to DOD for review, received no formal response, and proceeded to use it to evaluate proposals. Late in this process, DOD determined that the methodology improperly skewed the evaluation and ordered it changed. DOD's new procedure eliminates the possibility of changing the evaluation methodology during the process, removing any appearance of impropriety that such a change could create.

Despite DOD's process improvements, several matters remain that concern both those administering and those responding to the procurements. First, unless DOD can avoid further delays in this round of procurements, it may not meet the congressional deadline for awarding all contracts by September 30, 1996. Second, the substantial expense that offerors incur to participate may further limit future competition. Third, the specificity of solicitation requirements may work against offerors proposing innovative, cost-saving managed care techniques. Fourth, by reducing the length of transition periods, DOD has introduced significant risk that all the tasks needed to deliver health care will not be completed on time. Finally, DOD needs to better ensure that prospective evaluators are properly qualified.

For the four contracts awarded thus far, the procurements have averaged 18 months in length, more than twice as long as originally planned. Figure 2 compares the planned and actual procurement times for the contracts. If the remaining procurements encounter similar delays, DOD will have difficulty meeting the congressional mandate for awarding all contracts by September 30, 1996. The current schedule allows about 1 month of slippage for the remaining procurements if all contracts are to be awarded on time.
A primary cause of delays has been the many changes DOD has made to solicitation requirements. For example, as shown in figure 3, the California/Hawaii (regions 9, 10, and 12) recompetition procurement had 22 RFP amendments, and the Washington/Oregon (region 11) procurement had 15 amendments. Some of the changes resulted from such new requirements as the lead agent concept and a new uniform benefits package to replace previous beneficiary cost-sharing requirements that differed across the country. Other changes resulted from major revisions to such existing requirements as utilization management. When such changes occur, extra time is needed to issue solicitation amendments, for offerors to analyze the changes and revise their proposals, and often for evaluation boards to review the changes. Offerors have expressed extreme displeasure about the continually changing program requirements, which make it more costly for them to participate in the protracted procurements.

Procurements have also been delayed to allow offerors to correct errors in their cost proposals and as a result of bid protests. These actions have not caused major delays so far because DOD normally can proceed with the procurements while a protest is pending, but protests can add time to the overall schedule.

DOD has acted to shorten the procurement process by increasing the size of evaluation boards and changing the way proposals are evaluated. The enlarged boards can divide evaluation tasks among more members, and members have narrower spans of review responsibility.

Regarding RFP changes, some offerors maintain that DOD did not adequately plan the program before beginning the procurements. While DOD officials acknowledge planning problems, particularly for the lead agent concept, they told us that RFP changes will become less of a problem as their experience with the managed care support contracts grows. Also, DOD officials are concerned that if needed changes are not added before contract award, it will be more costly to implement them after award in the form of contract change orders, when competition no longer exists.

Currently, the administration is strongly encouraging simplifying federal procurements by, among other things, adopting commercial best practices to reduce costs and expedite service delivery. DOD recognizes that its process is extremely costly, complex, and cumbersome for all affected and acknowledges the need to simplify and shorten it. DOD can take advantage of the administration initiative's momentum and seek ways to simplify and streamline its health care procurements by considering, among other things, the private sector's best practices.

Because the procurements are broad, complex, and lengthy and involve huge sums of money, offerors incur substantial expense to participate. As a result, participation thus far has been limited to large companies with vast resources. For example, the California/Hawaii procurement required that offerors be in a position to risk losing a minimum of $65 million should they incur losses during the contract's performance. Competition is further limited because only a small number of available subcontracting firms can now knowledgeably process CHAMPUS claims. Moreover, several offerors told us that it cost them between $1 million and $3 million to develop their proposals. Planning and preparing bid proposals and responding to amendments require them to divert their most able people from their regular duties to spend months preparing offers.
One offeror, in illustrating the procurement's size, complexity, and the resources needed to participate, told us that its proposal consisted of 33,000 pages. The offeror told us that if it did not win a then ongoing procurement, it would not participate again unless it could develop a proposal for no more than $100,000. Another offeror said its firm could not afford to continue bidding if it did not win a contract soon.

DOD incurs substantial costs as well. The evaluation process, in particular, requires tremendous time, effort, and cost. A DOD official estimated that 54,000 hours were spent evaluating a recent procurement. In addition to their evaluation duties, many staff must continue to perform their regular duties; many commonly spend weekends on evaluation work, generating considerable overtime expense. Further, many of the evaluators travel from all over the country and are on travel status for 5 to 6 weeks.

DOD recognizes that in the next round of the seven regional procurements, the number of offerors may further narrow and consist only of those who won awards in the first round. While DOD has chosen to award large contracts on a regional basis, it may be advisable in the next round to consider such alternatives as awarding smaller contracts covering smaller geographic areas, awarding to more than one offeror in a region, or simplifying the contracts by removing the claims processing function and awarding it separately.

DOD's RFP requirements are extremely specific and prescriptive because, the Department has stated, it desires a uniform program nationwide in which beneficiaries and providers are subject to the same requirements and processes regardless of residence. Offerors, on the other hand, maintain that if DOD's RFP stated minimum requirements but emphasized the health care outcomes desired and allowed offerors more flexibility in devising approaches to achieve such outcomes, costs could be reduced without adversely affecting the quality of care delivered. In specifying its requirements, DOD has sought to ensure that beneficiaries not be denied necessary care and that care be provided by appropriate medical personnel in the appropriate setting. DOD's concern has been that allowing contractors to use different processes and criteria might jeopardize these ends. Offerors maintain that those objectives can be met by allowing them more freedom to use innovative approaches, drawing on their private-sector managed care expertise.

By comparison, private corporations contracting for managed care have far less specific requirements and normally request only general information about offerors, such as corporate background, financial capability, health care performance, and utilization management/quality assurance strategies. Offerors told us that DOD does not ask for the kind of information on private-sector experience that would allow evaluators to adequately compare performance among offerors. Also, many corporations use managed care consulting firms to help identify their requirements and select awardees.

Offerors often cite utilization management as the area in which more relaxed DOD requirements would enable them to implement techniques that are equally or more effective than those DOD requires but yield greater cost savings.
Among the most objectionable requirements are the use of a two-level review process for determining the appropriateness and necessity of care, the use of a specific company's utilization management criteria, and the use of reviewers with the same specialty as the providing physician. DOD has maintained that its utilization management requirements are based on its extensive review of the literature and are reasonable, though perhaps not the most cost-effective. Also, DOD has maintained that because the military environment differs from the private sector, it warrants different requirements. Nevertheless, DOD has acknowledged that offerors have some legitimate concerns. In recent discussions, DOD told us that, while it has no plans yet, it may begin considering ways to make the requirements for the next round of procurements less onerous to offerors while ensuring that beneficiaries receive adequate access to care. DOD officials said that they may begin seeking to simplify the requirements by making them less process and more outcome driven, while respecting, to the extent practicable, their overall system goals.

Because of procurement delays occurring before contract award, DOD has tried to recover lost time by reducing to 6 months its scheduled 8- to 9-month transition period, during which contractors prepare to deliver health care. But by doing so, DOD has introduced significant risk that contractors will not complete the many tasks needed to begin health care delivery on time. We have reported that DOD experienced serious problems in the past when both fiscal intermediary contractors and the CRI contractor were unable to begin processing claims by the start work date because the 6-month transition period was too short. As a result, beneficiaries faced considerable difficulties getting services, and providers faced difficulties getting reimbursed.

The managed care transitions are more complex and involved than the prior transitions. Most offerors we contacted told us that 6 months was too short and that about 8 months was needed to accomplish the tasks required to be ready on time. The transition tasks include signing up network providers, establishing service centers, hiring health care finders, preparing information brochures, bringing the claims processing system on line, resolving database problems, and enrolling beneficiaries, among many others. Offerors also told us that even a contractor with CRI experience would have difficulty meeting the 6-month transition requirement. DOD contracting officials and evaluators have expressed the same concerns. In reducing the transition periods, DOD is driven to adhere to its procurement schedules and thus respond to internal and external pressures to bring services on line; we believe, however, that the risk introduced far outweighs the small potential time savings. As demonstrated in the fiscal intermediary and CRI transitions, inadequate transition periods can overly tax contractors to the point of failure and result in substantial additional time and expense to recover.

DOD has so far selected evaluation board members in a relatively informal way, allowing either board chairs to select them on the basis of their knowledge of the individuals or military service headquarters and lead agents to do so on the basis of general guidelines. Relying on this informal approach, DOD has not set forth general qualification requirements for evaluators, such as experience or subject area knowledge.
But because the tasks they evaluate are so specialized, and because the boards have expanded and members are increasingly less familiar to selecting officials, specifying evaluator qualifications, as offerors and board members alike have suggested, seems prudent. Some offerors expressed concern to us that DOD evaluators have had little or no experience with private-sector managed care plans and thus have difficulty distinguishing offerors who can perform effectively from those who are less effective in ensuring quality care and controlling costs. Evaluation board team leaders for recent procurements told us that qualification requirements would help ensure that people with appropriate experience and knowledge adequately evaluate specific tasks. One board member, as input to DOD's internal improvement process, stated that some SSEB members seemed to lack (by their own admission) the requisite experience and background to serve most effectively as subject matter experts on the SSEB. He went on to state that, given the potential impact of these contracts in dollars and health care service, it seems critical that only experienced evaluators be put in a position to make the essential judgment calls inherent in the technical review process.

On more recent procurements, DOD has requested that evaluator nominees submit resumes to assist selection decisions and facilitate their assignment to various tasks. While this is a step in the right direction, it does not ensure that prospective evaluators with appropriate skills are nominated in the first place and are selected on the basis of the requisite qualifications.

DOD has improved the procurement process since the protest of the California/Hawaii award to Aetna was sustained, to the point that offerors can be more assured of equitable and fair treatment. While the dollar value of the contracts will likely cause offerors to protest in the future, DOD's improvements have reduced the chance of protests being sustained.

Despite improvements in the process, several areas of concern remain, particularly regarding the next round of procurements. The procurement process is extremely costly, complex, and cumbersome for all affected, and DOD acknowledges the need to simplify it. We agree and see an opportunity for DOD to draw upon the administration's current initiative for simplifying federal procurements as it seeks ways to streamline its processes. Further, because of the costs of participating, the number of offerors in the next procurement round may be limited to only those who received contracts in the first round. We think that DOD should consider alternative procurement approaches to help preserve the competitiveness of the process. Along with these measures, DOD needs to address whether its solicitation requirements can be less prescriptive and still achieve their overall health care goals.

Though DOD was driven by internal and external pressures to bring health care services on line, we do not agree with the Department's decision to reduce transition times to make up for time lost in awarding the contracts. The potential time saved by shortening transition periods, in our view, does not justify the risk of contractors not being able to prepare to deliver services on time.
Finally, given the increasing size of the evaluation boards, their specialized tasks, and members' increasing lack of familiarity to selecting officials, we believe that DOD needs to develop qualification requirements for evaluator appointees.

We recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Health Affairs) to weigh, in view of the potential effects of such large procurements on competition, alternative award approaches for the next procurement round; determine whether and, if so, how the next round's solicitation requirements could be simplified, incorporating potentially better and more economical best-practice managed care techniques while preserving the system's overall health care goals; adhere to the 8- to 9-month scheduled transition period and discontinue, whenever possible, reducing such periods to make up for delays incurred before contracts are awarded; and establish general qualification requirements for evaluator appointees.

In commenting on the draft report, DOD fully agreed with the first three of our recommendations and agreed in part that qualifications for evaluation board appointees need to be established. DOD pointed out that, while it could improve the evaluator selection process, it now tasks lead agents and the Services with nominating qualified individuals and tasks the contracting officer and board chairs with reviewing their resumes. We continue to believe that establishing general qualification requirements would more appropriately equip responsible DOD officials to nominate and select the best-qualified evaluators and assign them the most suitable tasks. DOD made other comments and suggested changes that we incorporated in the report as appropriate. DOD's comments are included as appendix V.

As arranged with your staff or offices, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the Secretary of Defense; the Director, Office of Management and Budget; and interested congressional committees. We will also make copies available to others upon request. If you have any questions concerning the contents of this report, please call me at (202) 512-7101. Other major contributors to this report are Stephen P. Backhus and Daniel M. Brier, Assistant Directors; Donald C. Hahn, Evaluator-in-Charge; and Robert P. Pickering and Cheryl A. Brand, Senior Analysts.

We examined in detail the complete California/Hawaii procurement file for the contract that was awarded to Aetna as well as selected portions of more recent procurements' files. These files were from the California/Hawaii recompetition procurement, the Washington/Oregon procurement, and the region 6 procurement. We also reviewed agency files and discussed with agency officials various aspects of the procurement process. Also, we reviewed pertinent regulations governing the procurement processes in the Federal Acquisition Regulation, the Defense Federal Acquisition Regulation Supplement, and the Office of CHAMPUS Acquisition Manual. We held discussions with contract management personnel who conduct the procurements, officials who develop solicitation requirements, staff involved in the evaluations, and agency legal staff who ensure that the procurements are conducted according to applicable laws and regulations and in an equitable manner.
Our review of procurement documents included (1) documents related to the planning of the CHAMPUS Reform Initiative and managed care support procurements, (2) procurement schedules showing planned and actual dates, (3) RFPs and amendments to the RFPs, (4) questions raised by offerors and agency responses, (5) documents relating to evaluation methodology, (6) evaluation criteria and scoring sheets, (7) reports of discussions with offerors, (8) internal reports, (9) reports of the evaluation boards, (10) selection reports, and (11) preaward survey reports.

Because of agency concerns about compromising future procurements, we are not presenting specific information on the evaluation methodology or on the scoring and weighting systems used. Nor are we presenting information on the criteria used in the rating and scoring process. We examined proposals of individual offerors to only a limited extent and are not providing information on these proposals because the information is proprietary.

We interviewed the Source Selection Evaluation Board, Business Proposal Evaluation Team (BPET), and Source Selection Advisory Council chairmen involved in recent procurements as well as the selecting officials. We also interviewed several team leaders involved in evaluating the technical proposals of the California/Hawaii recompetition procurement and several members of the BPET. In addition, to assess the qualifications of evaluation board members, we reviewed their resumes.

In conducting our review, we examined GAO bid protest decisions involving these managed care procurements and coordinated our efforts with GAO's Office of General Counsel, which handles these bid protests. In addition to the protest decisions, we reviewed much of the supporting documentation for the decisions, including the offerors' protests, agency reports, offerors' comments on the agency reports, videotapes of the protest hearings, and post-hearing comments.

To obtain information on their experiences with DOD managed care procurements and their views of the overall procurement process and the solicitation requirements, we interviewed officials from four offerors who had participated in recent procurements. The officials interviewed were from Aetna Government Health Plans, Inc., California Care Health Plan (Blue Cross of California), Foundation Health Federal Services, Inc., and QualMed, Inc. We also interviewed the lead agents and their staffs for regions 9 and 11 to obtain similar information.

Our work was conducted at the Office of CHAMPUS, Aurora, Colorado, and at the Office of the Assistant Secretary of Defense (Health Affairs), Washington, D.C. In addition, we visited the offerors at their headquarters offices and the lead agents at their military treatment facilities. We conducted our review between March 1994 and June 1995 in accordance with generally accepted government auditing standards.

CHAMPUS provides funding for health care services from civilian providers for uniformed services beneficiaries. CHAMPUS began in 1956 and was expanded in 1966 to include additional classes of beneficiaries and more comprehensive benefits. The beneficiaries eligible for CHAMPUS include dependents of active-duty members, retirees and their dependents, and dependents of deceased members. CHAMPUS has approximately 5.7 million eligible beneficiaries and has traditionally functioned much like a fee-for-service insurance program.
Beneficiaries are free to select providers and are required to pay deductibles and copayments, but, unlike with most insurance programs, they are not required to pay premiums. CHAMPUS is part of the overall Military Health Services System (MHSS), which serves active- and nonactive-duty members and includes 127 hospitals and over 500 clinics worldwide. CHAMPUS beneficiaries can also obtain medical care services in military medical facilities on a space-available basis. In fiscal year 1995, the MHSS was budgeted at over $15 billion, of which $3.6 billion, or about 24 percent, was for CHAMPUS.

Because of escalating costs, claims paperwork demands, and general beneficiary dissatisfaction, DOD initiated in the late 1980s, with congressional authority, a series of demonstration projects designed to more effectively contain costs and improve services to beneficiaries. One of these, known as the CHAMPUS Reform Initiative (CRI), was designed by DOD in conjunction with a consulting company. Under CRI, a contractor provided both health care and administrative-related services, including claims processing. The CRI project was one of the first to introduce managed care features to the CHAMPUS program.

Beneficiaries under CRI were offered three choices: a health maintenance organization-like option called CHAMPUS Prime, which required enrollment and offered enhanced benefits and low cost shares; a preferred provider organization-like option called CHAMPUS Extra, which required use of network providers in exchange for lower cost shares; and the standard CHAMPUS option, which continued the freedom of choice in selecting providers along with higher cost shares and deductibles. Other features of CRI included the use of health care finders for referrals and the application of utilization management. The project also contained resource sharing features whereby the contractor, to reduce overall costs, could provide staff or other resources to military treatment facilities to treat beneficiaries in these facilities.

Although DOD's initial intent under CRI was to award three competitively bid contracts covering six states, only one bid, made by Foundation Health Corporation and covering California/Hawaii, was received. Because of the lack of competition, DOD awarded Foundation a negotiated fixed-price, at-risk contract with price adjustment features. Although designated as fixed price, the contract contained provisions for sharing risks between the contractor and the government. Foundation delivered services under this contract between August 1988 and January 1994.

Before the contract expired, DOD began a new procurement for the CRI California/Hawaii contract that resulted in the competition's narrowing to four bidders. In July 1993, DOD awarded a 5-1/2 year (1 half-year plus 5 option years), $3.5 billion contract to Aetna Government Health Plans, with health care services beginning on February 1, 1994. Because a bid protest was sustained on this procurement, the contract was recompeted, although Aetna was allowed to proceed until a new contract was awarded.

In late 1993, in response to requirements in the DOD Appropriation Act for Fiscal Year 1994, the Department announced plans for a nationwide managed care program for the MHSS that would be completely implemented by May 1997. Under this program, known as TRICARE, the United States is divided into 12 health care regions.
An administrative organization, the lead agent, is designated for each region and coordinates the health care needs of all military treatment facilities in the region. Under TRICARE, seven managed care support contracts will be awarded covering DOD's 12 health care regions. DOD estimates that over a 5-year period these contracts will cost about $17 billion. The TRICARE managed care support contracts retain the fixed-price, at-risk, and triple-option health benefit features of CRI as well as many other CRI features. An important change, however, involves including in the contracts, in addition to the standard requirements, tasks to be performed by the contractor that are specific to military treatment facilities in the regions.

Since the announcement of DOD's plan for implementing managed care contracts nationwide, three contracts have been awarded, as shown in table II.1. The current schedule for awarding the remaining four contracts appears in table II.2.

[Tables II.1 and II.2 are not reproduced here. Table II.1 lists the three contracts awarded to date; the awardees include Foundation Health Federal Services, Inc., and QualMed, Inc. Table II.2 shows, by region, the actual or planned RFP issue dates and the organizations submitting best and final proposals, among them Aetna Government Health Plans, Inc.; BCC/PHP Managed Health Company; Foundation Health Federal Services, Inc.; QualMed, Inc.; and CaliforniaCare Health Plans (Blue Cross of California). The regions covered include regions 9, 10, and 12 (recompetition).]

The Office of CHAMPUS, an organization within the Office of the Assistant Secretary of Defense (Health Affairs), conducts the managed care support procurements. In conducting these procurements, DOD must follow the requirements in the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement. In addition, the Office of CHAMPUS Acquisition Manual provides further guidance for conducting procurements. The major steps in the procurement process are described in this appendix.

The request for proposal (RFP) contains the detailed specifications, the instructions offerors are to follow in responding, and the evaluation factors that DOD will consider in making the award. The RFP requires that offerors submit both a technical and a business (price) proposal, and offerors are told that the technical content will account for 60 percent of the scoring weight and the price, 40 percent.

In preparing the technical proposal, offerors are required to address 13 different tasks: (1) health care services; (2) contractor responsibilities for coordination and interface with the lead agent and military treatment facilities; (3) health care providers' organization, operations, and maintenance; (4) enrollment and beneficiary services; (5) claims processing; (6) program integrity; (7) fiscal management and controls; (8) management; (9) support services; (10) automatic data processing; (11) contingencies for mobilization; (12) start-up and transitions; and (13) resource support program. Experience and performance are other evaluation factors. Offerors must describe the approaches they would take in accomplishing these tasks. While offerors are not told the specific weights assigned the individual tasks, they are told their order of importance.
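As a rough illustration of how a 60 percent technical and 40 percent price weighting can combine the two proposal scores, the sketch below normalizes each price against the lowest offer received. The normalization method and the sample scores are hypothetical assumptions made for this illustration only; the actual scoring and weighting details are procurement sensitive and are not reproduced here.

    # Hypothetical illustration of a 60 percent technical / 40 percent price
    # weighting; not DOD's actual (procurement-sensitive) methodology.
    def combined_score(technical, price, lowest_price):
        price_score = 100 * lowest_price / price   # lower prices score higher
        return 0.60 * technical + 0.40 * price_score

    offers = {"Offeror A": (88, 3.5e9),            # (technical score, price)
              "Offeror B": (80, 3.1e9)}            # hypothetical values
    lowest = min(price for _, price in offers.values())
    for name, (technical, price) in offers.items():
        print(name, round(combined_score(technical, price, lowest), 1))

In this hypothetical case, the stronger technical proposal narrowly outscores the cheaper one, which is the kind of tradeoff a weighting of this sort is meant to capture.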
In preparing the business proposal, offerors must provide support for both their administrative and health care prices and must justify their health care prices by addressing seven cost factors over which the offerors have some control: (1) HMO option penetration rates (enrollment), (2) utilization management, (3) provider discounts, (4) coordination of benefits/third-party liability, (5) resource sharing savings, (6) resource sharing expenditures, and (7) enrollment fee revenues. Offerors must also provide trend data for costs over which they are considered likely to have little or no control, such as price inflation. Because these factors are considered uncontrollable, the government, in evaluating proposals, substitutes its own estimates for the offerors' so that all offerors are treated equally. Offerors must also pledge an equity amount to absorb losses if health care costs exceed the amount proposed. In evaluating proposals, DOD determines whether offerors have the financial resources to meet this pledge, and the equity amount is also applied as part of the methodology in evaluating prices.

Before the proposals' due date, offerors are free to submit questions to clarify requirements or to obtain further program information. Offerors can continue to submit questions up until the close of discussions, before best and final offers are due.

Upon receipt of the offerors' proposals, a Source Selection Evaluation Board (SSEB) evaluates the technical proposals according to detailed evaluation criteria. The board's size depends on the number of offerors and, in recent procurements, has numbered about 80 people. Board members are selected from offices such as the Assistant Secretary of Defense (Health Affairs), the military Surgeons General, the military treatment facilities, and the Office of CHAMPUS. A chairperson heads the board, which is divided into teams to review the various tasks and subtasks. The worksheets used in these evaluations contain both the specifications and the criteria upon which to base a judgment.

A Business Proposal Evaluation Team (BPET) evaluates the business proposals. A chairperson also heads this team, which comprises about 10 people divided into two groups: one that primarily evaluates administrative costs and another that primarily evaluates health service costs. The group evaluating administrative costs is supported by the Defense Contract Audit Agency, which performs a cost analysis of the administrative costs bid. The group evaluating health service costs consists primarily of consultants, some of whom are actuaries. In their evaluation, they use specially developed criteria as well as a government-developed cost estimate. Another consultant assesses the financial viability of the offerors, including whether they have the fiscal capacity to absorb the amount of equity offered, which would be at risk if losses were incurred under the contract.

A Source Selection Advisory Council (SSAC) is an oversight board that reviews the work of the SSEB and BPET and provides advice to the two teams. The SSAC comprises about six executive-level personnel.

DOD does not normally award a contract after the initial evaluations, although nothing precludes an award at that time. Instead, DOD notifies offerors in writing of weaknesses and deficiencies identified in the initial evaluation and prepares questions relating to them. This gives the offerors an opportunity to correct the weaknesses and deficiencies and improve their proposals.
In addition to providing offerors these questions, DOD holds face-to-face discussions to clarify and resolve any outstanding issues. DOD then requests best and final offers, and offerors submit their revised proposals, including any desired price revisions. Upon receipt of the best and final offers, the SSEB and BPET evaluate revisions to the initial proposals, and the SSAC reviews the work of the two boards. DOD then completes final scoring and prepares reports of the evaluations. DOD can conduct preaward surveys if outstanding issues remain to be resolved; such a survey can include an on-site visit to an offeror or subcontractor.

A senior official, designated as the Source Selection Authority, selects the winning offeror using reports prepared by the SSEB, BPET, and SSAC. The official prepares a written report justifying the final selection. Following selection of the winning offeror, unsuccessful offerors can learn why they were not selected. Offerors are individually told of the deficiencies and weaknesses in their proposals. This can serve as the basis for preparing improved proposals for subsequent procurements.

The period between contract award and the start of health care delivery is referred to as the transition period. During this period, the contractor must perform many tasks, including assembling a provider network, establishing service centers, getting the claims processing system operational, and beginning the process of enrolling beneficiaries into the HMO-like option.

Throughout the evaluation process, evaluators are requested, as part of the "lessons learned" process, to identify problems or suggest potential changes to improve future procurements. The lessons learned can be as minor as correcting specification references or as major as changing evaluation procedures.
Pursuant to a congressional request, GAO reviewed defense health care, focusing on: (1) procurement process problems identified by the bid protest experiences; (2) the Department of Defense's (DOD) actions to improve and help ensure the fairness of the procurement process; and (3) what problems and concerns remain and whether further actions are needed. GAO found that: (1) DOD has changed its managed care procurement process to address such past problems as its failure to evaluate bidders' proposed prices according to solicitation criteria, the lack of communication between technical and price evaluators, and its failure to properly evaluate bidders' cost containment approaches; (2) although DOD has revised its evaluation methodology and has added new discussion requirements to improve future procurements and ensure better treatment of bidders, protests are likely to continue, given the vast sums of money at stake and the relatively small expense of protesting; (3) DOD may have difficulty meeting the congressional deadline for awarding all contracts by September 1996, since procurements have been taking twice as long as planned; (4) DOD has tried to make up for procurement delays by reducing its transition period after contract award for contractors to deliver health care, but this action has created major risks; and (5) DOD must establish required qualifications for evaluation board members, since their tasks have become so specialized.
Governmentwide, spending on services contracts has grown substantially over the past several years. At DHS, in fiscal year 2005, services accounted for $7.9 billion, or 67 percent, of total procurement obligations, with $1.2 billion obligated for four types of professional and management support services: program management and support, engineering and technical, other professional, and other management support (see fig. 1). More than two-thirds of DHS's obligations for these services ($805 million) were to support the Coast Guard, OPO, and TSA.

The services federal agencies buy are organized under more than 300 codes in FPDS-NG and range from basic services, such as custodial and landscaping, to more complex professional and management support services, which may closely support the performance of inherently governmental functions. Inherently governmental functions require discretion in applying government authority or value judgments in making decisions for the government; as such, they should be performed by government employees, not private contractors. The Federal Acquisition Regulation (FAR) provides 20 examples of functions considered to be, or to be treated as, inherently governmental, including determining agency policy and priorities for budget requests, directing and controlling intelligence operations, approving contractual requirements, and selecting individuals for government employment.

The closer contractor services come to supporting inherently governmental functions, the greater the risk of their influencing the government's control over and accountability for decisions that may be based, in part, on contractor work. This may result in decisions that are not in the best interest of the government and may increase vulnerability to waste, fraud, or abuse. The FAR provides 19 examples of services and actions that may approach the category of inherently governmental because of the nature of the function, the manner in which the contractor performs the contracted services, or the manner in which the government administers contractor performance. Table 1 provides examples of these services and their relative risk of influencing government decision making.

FAR and OFPP guidance address contracting for services that closely support the performance of inherently governmental functions, including professional and management support services, because of their potential for influencing the authority, accountability, and responsibilities of government officials. In particular, the guidance states that services that tend to affect government decision making, support or influence policy development, or affect program management are susceptible to abuse and require a greater level of scrutiny. Such services include advisory and assistance services, which encompass expert advice, opinions, and other types of consulting services. The guidance requires agencies to provide greater scrutiny of these services and an enhanced degree of management oversight. This would include assigning a sufficient number of qualified government employees to provide oversight and to ensure that agency officials retain control over and remain accountable for policy decisions that may be based in part on a contractor's performance and work products.

The potential for the loss of government management control associated with contracting for services that closely support the performance of inherently governmental functions or that should be performed by government employees is a long-standing governmentwide concern.
For example, in 1981, GAO found that contractors’ level of involvement in management functions at the Departments of Energy (DOE) and Defense (DOD) was so extensive that the agencies’ ability to develop options other than those proposed by the contractors was limited. A decade later, in 1991, GAO reported that DOE had contracted extensively for support in planning, managing, and carrying out its work because it lacked sufficient resources to perform the work itself. We noted that while support service contracts are appropriate for fulfilling specialized needs or needs of a short-term or intermittent nature, the contracts we reviewed at DOE were not justified on these bases. In that same year, GAO reported that three agencies—DOE, the Environmental Protection Agency, and the National Aeronautics and Space Administration—may have relinquished government control and relied on contractors to administer some functions that may have been governmental in nature.

More recently, government, industry, and academic participants in GAO’s 2006 forum on federal acquisition challenges and opportunities and the congressionally mandated Acquisition Advisory Panel noted how an increasing reliance on contractors to perform services for core government activities challenges the capacity of federal officials to supervise and evaluate the performance of these activities. The panel also noted that contracts for professional services are often performed with close contact between federal government and contractor employees, which approaches the line between personal and nonpersonal services. Personal services contracts are prohibited by the FAR, unless specifically authorized, and are indicated when the government exercises relatively continuous supervision and control over the contractor. Both the panel and GAO acquisition forum participants noted that the large growth in contracting for complex and sophisticated services has increased attention to the appropriate use of contractors.

A broad range of activities related to specific programs and administrative operations was performed under the professional and management support services contracts we reviewed. In most cases, the services provided—such as policy development, reorganization and planning activities, and acquisition support—closely supported the performance of inherently governmental functions. Contractor involvement in the nine cases we reviewed in detail ranged from providing two to three supplemental personnel to staffing an entire office.

Of the $805 million obligated by the Coast Guard, OPO, and TSA in fiscal year 2005 to procure four types of professional and management support services, more than half was for engineering and technical services—most of which was contracted by the Coast Guard and OPO. Figure 2 provides a breakdown of contracting dollars for the four selected professional and management support services by the three DHS components.

Some of the 117 statements of work we reviewed were for services that did not closely support inherently governmental functions. These included a TSA contract for employee parking services at airports and a Coast Guard contract to maintain historic human resource records and perform data entry. However, most of the selected statements of work we reviewed did request reorganization and planning activities, acquisition support, and policy development—services that closely supported inherently governmental functions.
Of the 117 statements of work that we reviewed, 71 included a total of 122 services that fell into these three categories—with reorganization and planning activities requested most often. For example, the Coast Guard obligated $500,000 for a contractor to provide services for the Nationwide Automatic Identification System to identify and monitor vessels approaching or navigating in U.S. waters. The services included advising and providing recommendations on strategies for project planning, risk management, and measuring the performance and progress of the system. Additionally, the tasks included assisting with the development of earned value management reviews, life-cycle cost estimates, and cost-benefit analyses. In another example, TSA obligated $1.2 million to acquire contractor support for its Acquisition and Program Management Support Division, which included assisting with the development of acquisition plans and hands-on assistance to program offices to prepare acquisition documents.

Because contract statements of work can be broad, or contain requirements that the contractor may not ultimately perform, we conducted a more detailed review of nine cases to verify the work performed. In these nine cases, we found that contractors provided a broad array of services to sustain a range of programs and administrative operations, with the categories of reorganization and planning, policy development, and acquisition support requested most often. For example, $2.1 million in orders supporting the Coast Guard’s fleet modernization effort—the Integrated Deepwater System—included modeling and simulation services to analyze the operational performance and effectiveness of various fleet scenarios for program planning. A $42.4 million OPO order for professional, technical, and administrative services for multiple offices in DHS’s Information Analysis and Infrastructure Protection Directorate included tasks to assist in developing policies, budget formulation, and defining information technology requirements. Specifically, contractor personnel provided general acquisition advice and support to the Information Analysis and Infrastructure Protection business office, which included the management, execution, process improvement, and status reporting of procurement requests. For another office, the contractor provided an analysis of intelligence threats. A $7.9 million OPO human capital services order provided a full range of personnel and staffing services to support DHS’s headquarters offices, including writing position descriptions, signing official offer letters, and meeting new employees at DHS headquarters for their first day of work.

The extent of contractor involvement in the nine case studies varied from providing two to three supplemental personnel to staffing an entire office, and in most cases contractor staff performed services on-site at DHS facilities. Figure 3 shows the type and range of services provided in the nine case studies and the location of contractor performance.

A lack of staff and expertise to get programs and operations up and running drove decisions to contract for professional and management support services. While program officials generally acknowledged that these contracts closely supported the performance of inherently governmental functions, they did not assess the risk that government decisions may be influenced by, rather than independent from, contractor judgments.
In the nine cases we reviewed, we found contractors providing services integral to an agency’s mission and comparable to those provided by government employees, and contracts with broadly defined requirements. These conditions need to be carefully monitored to ensure the government does not lose control over and accountability for mission-related decisions. DHS has not explored ways to manage the risk of contracting for these services, such as determining the right mix of government-performed and contractor-performed services or assessing total workforce deployment across the department. DHS’s human capital strategic plan notes that the department has identified core mission critical occupations and plans to reduce skill gaps in core and key competencies. However, it is unclear how this will be achieved and whether it will inform the department’s use of contractors for services that closely support inherently governmental functions.

The reasons most often cited by program officials for contracting for services were the need for employees—to start up a new program or administrative operation, provide specific expertise, or meet immediate mission needs. When DHS was established in 2003, it was charged with developing strategies, programs, and projects to meet a new mission while facing skill gaps in core and key competencies. For example, at TSA—a component built from the ground up—according to program officials, the lack of federal staff to provide acquisition support led to hiring contractors for its Secure Flight program. Federal staff limitations were also a reason for TSA’s contract for employee relations support services. Many TSA, DHS human capital, and Information Analysis and Infrastructure Protection program officials said that contracting for services was necessary because they were under pressure to get program and administrative offices up and running quickly, and they did not have enough time to hire staff with the right expertise through the federal hiring process.

In another case, in prior work we found that when OPO was established, the office had only seven staff to serve more than 20 organizations. Since that time, OPO has expanded and adjusted the use of contractors for specific functions, such as acquisition support. In the case of TSA, the agency needed to immediately establish an employee relations office capable of serving 60,000 newly hired airport screeners—an undertaking TSA Office of Human Resources officials said would have taken several years to accomplish if they had hired qualified federal employees. In another case, DHS human capital officials said there were only two staff to manage human resources for approximately 800 employees, and it would have taken 3 to 5 years to hire and train federal employees to provide the necessary services.

Similarly, the Coast Guard, a more established agency, lacked the personnel needed to address new requirements for its competitive sourcing program. According to Coast Guard program officials, only one federal employee was in place when the new requirements were established. An acquisition plan for modeling and simulation services in support of the Coast Guard’s Integrated Deepwater System cited the need for technological expertise as one of the reasons for hiring contractors. According to program officials, contracting for such technological capabilities is routine at the Coast Guard. Several officials also described a perception of a management preference for contracting.
For example, an OPO contracting officer said governmentwide strategies to use contractors influenced program decisions to award services contracts. TSA program and senior officials also said decisions to contract were in keeping with a conscious decision to build a lean organization. For example, in prior work, we found that TSA contracted extensively to manage human resource needs, develop and manufacture screening equipment, and provide the information technology systems it uses to manage day-to-day operations. In fact, such service contracts represented about 48 percent of TSA’s fiscal year 2003 budget.

To ensure the government does not lose control over and accountability for mission-related decisions, long-standing federal procurement policy requires attention to the risk that government decisions may be influenced by, rather than independent from, contractor actions when contracting for services that closely support inherently governmental functions. Distinguishing the roles and responsibilities of contractors and government employees and carefully defining requirements for contractor services become especially important when contracting for professional and management support services, since contractors often work closely with government employees to provide these services. To manage risk, participants in GAO’s acquisition forum stated that agencies need to determine the right mix of government-performed and contractor-performed work in particular settings, and that planning for contracting outcomes and measurable results is a critical element in managing a multisector workforce of government employees and contractors.

The nine cases we reviewed provided examples of contractors performing services integral to an agency’s mission and comparable to those performed by government employees, contractors providing ongoing support, and broadly defined contract requirements—conditions that need to be carefully monitored to ensure the government does not lose control over and accountability for mission-related decisions.

In seven of the nine cases, contractors provided services that were integral to DHS’s mission or comparable to work performed by government employees. For example, a contractor directly supported DHS efforts to hire federal employees, including signing offer letters; the contractor for the component’s employee relations office provided advice to supervisors on cases, a function also performed by federal employees in that office; and a contractor provided acquisition advice and support to the Information Analysis and Infrastructure Protection Directorate business office, working alongside federal employees and performing the same tasks. In some of these cases, officials said contractors were used to fill staff shortages.

We also found that government employees may have supervised contractor employees. For example, one contractor performed mission-related budget, program management, and acquisition services and was located at government operations centers to provide opportunities for direct review of the contractor’s activities. This type of close supervision of contractor personnel may constitute personal services—a contracting arrangement that is prohibited by the FAR, unless specifically authorized.

In all nine cases, the contractor provided services that lasted for more than 1 year. Given the risk of contracting for selected services, it is appropriate to periodically reexamine who—private companies or federal employees—should perform certain services.
However, in five of the nine cases, the original justification for contracting—to quickly establish a new office or function—had changed, but the components extended or recompeted the services without considering this change. For example, to establish a competitive sourcing program, the Coast Guard hired a contractor to provide budget, policy, and acquisition support as well as reorganization and planning services for more than 5 years; these services have been extended through August 2009. In another case, OPO established a temporary “bridge” arrangement without competition to avoid disruption of critical support, including budget, policy, and intelligence services. Although this arrangement was intended to be temporary, the order was modified 20 times and extended for almost 18 months. Subsequently, these services were competed and awarded to the original contractor under six separate contracts. DHS provided information stating that five of the six contracts expire by the end of September 2007; however, as of August 2007, DHS had yet to provide a plan for carrying out these services in the future. In a third case, an OPO contractor was hired to develop a strategic plan for the US-VISIT program. While the task was completed in less than a year, the contractor continued to provide related services in two subsequent orders. Continuing to contract for these types of services is particularly risky since the initial contracting decisions did not include an assessment of risk.

Describing in detail the work to be performed under a contract helps to minimize the risk of paying too much for services provided, acquiring services that do not meet needs, or entering too quickly into sensitive arrangements. Well-defined contract requirements can also help minimize the risk of contractors performing inherently governmental functions. Defining requirements is part of the acquisition planning process, and prior GAO work has emphasized the importance of clearly defined requirements to obtain the right outcome.

Broadly defined requirements were apparent in the 117 statements of work that we reviewed. For example, at TSA we found multiple statements of work requesting a similar set of services—including acquisition and strategic planning, contingency planning, program oversight, and government cost estimating—in support of different program offices. In six of our nine case studies, the requirements as written in the statements of work were often broadly defined. In four cases, the statements of work lacked specific details about activities that closely support inherently governmental functions. For example, the initial statement of work for a $7.9 million OPO order for human resources support broadly stated that the contractor would rank candidates for DHS positions. Because the statement did not specify how the contractor was to perform this task, it was unclear how OPO would hold the contractor accountable for outcomes. The later contract specified how the contractor was to rank candidates, including the criteria, processes, and policies to be used. In the other two cases, the statements of work included an indiscriminate mix of services. A $7.9 million TSA contract included program management support activities, such as professional and technical advice, strategic planning, performance monitoring, conference support, briefing preparation, project documentation, technical research and analysis, and stakeholder relations. Some of these activities fit the description of advisory and assistance services.
Similarly, a single $42.4 million OPO order included 58 tasks to provide a diverse range of services throughout the Information Analysis and Infrastructure Protection Directorate in support of over 15 program offices and 10 separate directoratewide administrative efforts. Services included providing strategic communications planning expertise and representing the directorate as a member of the DHS-wide Homeland Security Operations Center, providing intelligence analysis for Immigration and Customs Enforcement and Customs and Border Protection, supporting administrative functions such as acquisition planning and human capital management, and defining information technology requirements for the directorate. Other services included helping respond to congressional and Freedom of Information Act requests and preparing budget justification documents and related briefing materials.

Several program officials noted that the statements of work did not accurately reflect the program’s needs or the work the contractors actually performed. For example, one statement of work for a $1.7 million Coast Guard order included advisory and assistance services; however, program officials said the contractor never provided these services. Another Coast Guard statement of work, for a $1.3 million order, initially included developing policy, conducting cost-benefit analyses, and conducting regulatory assessments, though program officials told us the contractors provided only technical regulatory writing and editing support. The statement of work was revised in a later contract to better define requirements.

Contracting officers and program officials for the nine case studies generally acknowledged that their professional and management support services contracts closely supported the performance of inherently governmental functions. However, none assessed whether these contracts could result in the loss of control over and accountability for mission-related decisions. DHS has not explored ways to address the risk of contracting for these services, such as determining the right mix of government-performed and contractor-performed services or assessing total workforce deployment across the department.

Federal acquisition guidance highlights the risk inherent in service contracts—particularly those for professional and management support—and federal internal control standards require assessment of risks. Internal control standards provide a framework to identify and address areas at greatest risk of mismanagement, waste, fraud, and abuse. OFPP staff we met with also emphasized the importance of assessing the risk associated with contracting for services that closely support the performance of inherently governmental functions and of establishing effective internal management controls to ensure agency staff are aware of this risk, consistent with the OFPP guidance.

While DHS acquisition planning guidance requires identification of such acquisition risks as cost, schedule, and performance, or political or organizational factors, it does not address the specific risk of services that closely support the performance of inherently governmental functions. Prior GAO work has found that cost, schedule, and performance—common measures for products or major systems—may not be the most effective measures for assessing services. DHS’s human capital strategic plan notes that the department has identified core mission critical occupations and plans to reduce skill gaps in core and key competencies.
However, prior GAO work found that DHS had not provided details on the specific human capital resources needed to achieve its long-term strategic goals. Human capital planning strategies should be linked to current and future human capital needs, including the total workforce of federal employees and contractors; its deployment across the organization; and the knowledge, skills, and abilities needed by agencies. Deployment includes the flexible use of the workforce, such as putting the right employees in the right roles according to their skills, relying on staff drawn from various organizational components and functions, and using “just-in-time” or “virtual” teams to focus the right talent on specific tasks. We have also noted the importance of focusing greater attention on which types of functions and activities should be contracted out and which should not, while considering other reasons for using contractors, such as a limited number of federal employees. DHS’s human capital plan is unclear as to how this could be achieved and whether it will inform the department’s use of contractors for services that closely support the performance of inherently governmental functions.

None of the program officials and contracting officers we spoke with were aware of the federal acquisition policy requirement for enhanced oversight of contracts for services that closely support the performance of inherently governmental functions. Further, few believed that their professional and management support service contracts required an enhanced level of scrutiny. For the nine cases we reviewed, the level of oversight DHS provided did not always ensure accountability for decisions—as called for in federal guidance—or the ability to judge whether contractors were performing as required. DHS’s Chief Procurement Officer and Inspector General each have ongoing efforts to improve procurement oversight. These efforts have the potential to include reviews of contracting for services that closely support the performance of inherently governmental functions.

The FAR and OFPP policy require agencies to provide enhanced oversight of contracts for services that closely support the performance of inherently governmental functions to ensure these services do not compromise the independence of government decision making. DHS contracting officers and program officials from our nine case studies were unaware of these oversight policies. While these officials acknowledged that the professional and management support services provided under these contracts closely supported the performance of inherently governmental functions, most did not believe enhanced oversight of the contracts was warranted. According to DHS contracting officers and program officials, cost, complexity, and visibility are the risk factors that trigger the need for enhanced oversight. Neither these officials nor DHS acquisition planning guidance cite services that closely support the performance of inherently governmental functions as a risk factor. In five of the nine cases we reviewed, contract documents outlined routine oversight responsibilities for the Contracting Officer’s Technical Representative (COTR) but did not address the need for enhanced oversight as a result of the type of service. Prior GAO work has found that because services involve a wide range of activities, management and oversight of service acquisitions may need to be tailored to the specific circumstances, including developing different measures of quality or performance.
In four of the case studies, contracting officers and program officials believed their experience and training enabled them to determine whether enhanced oversight was needed. However, none of the training policies and documents we reviewed—including DHS’s directive for COTR certification and the Defense Acquisition University’s training curriculum—alerted COTRs to the federal policy requiring enhanced oversight of contracts that closely support inherently governmental functions or to the risk of such contracts.

Federal acquisition guidance requires agencies to retain control over and remain accountable for decisions that may be based, in part, on a contractor’s performance and work products. This includes making sound judgments on requirements, costs, and contractor performance. Both the FAR and OFPP policy state that when contracting for services—particularly for professional and management support services that closely support the performance of inherently governmental functions—a sufficient number of qualified government employees must be assigned to plan and oversee contractor activities in order to maintain control and accountability. However, we found cases in which the components lacked the capacity to oversee contractor performance due to limited expertise and workload demands (see table 2). These deficiencies may have resulted in a lack of control over and accountability for decisions.

Prior GAO work has shown similar examples of oversight deficiencies that can contribute to poor outcomes. For example, in work examining contracts undertaken in support of response and recovery efforts for Hurricanes Katrina and Rita, we found that the number of monitoring staff available at DHS was not always sufficient or effectively deployed to provide oversight. Similarly, in work at DOD, we have found cases of insufficient numbers of trained contracting oversight personnel, and cases in which personnel were not given enough time to complete surveillance tasks, in part due to limited staffing.

Establishing measurable outcomes for services contracts and assessing contractor performance are necessary to ensure control and accountability. DHS components were limited in their ability to assess contractor performance in a way that addressed the risk of contracting for professional and management support services that closely support the performance of inherently governmental functions. Assessing contractor performance requires a plan that outlines how services will be delivered; however, none of the related oversight plans and contract documents we reviewed contained specific measures for assessing contractors’ performance of these services.

DHS’s Chief Procurement Officer and the Inspector General each have ongoing efforts to assess DHS contract management. The Chief Procurement Officer is in the process of implementing an acquisition oversight program, which is intended to assess (1) compliance with federal acquisition guidance, (2) contract administration, and (3) business judgment. This program was designed with the flexibility to address specific procurement issues, as necessary, and is based on a series of reviews at the component level. For example, the on-site review incorporates assessments of individual procurement actions. These reviews have the potential to include contracting for services that closely support inherently governmental functions. The Inspector General also has recently increased its procurement oversight (see app. III).
Common themes and risks emerged from this work: the dominant influence of expediency, poorly defined requirements, and inadequate oversight contributed to ineffective or inefficient results and increased costs. Inspector General reviews also noted that many DHS procurement offices reported that a lack of staffing prevents proper procurement planning and severely limits their ability to monitor contractor performance and conduct effective contract administration. While these findings have broad application to services, OFPP Policy Letter 93-1 encourages the Inspectors General to also conduct vulnerability assessments of services contracting—which would include services that closely support inherently governmental functions—to ensure compliance with related guidance.

When DHS was established in 2003, it faced an enormous challenge to quickly set up numerous offices and programs that would provide wide-ranging and complex services critical to ensuring the nation’s security. With limited staffing options, the department relied on contractors to perform mission-related services that closely support the performance of inherently governmental functions. However, the tasks assigned to contractors were not always clearly defined up front, and the breadth and depth of contractor involvement were extensive in some cases. Four years later, the department continues to rely heavily on contractors to fulfill its mission with little emphasis on assessing the risk and ensuring management control and accountability. Given its use of contractors to provide selected services, it is critical for DHS to strategically address workforce deployment and determine the appropriate role of contractors in meeting its mission. Until the department puts in place the staff and expertise needed to oversee selected services, it will continue to risk transferring government responsibility to contractors.

To improve the department’s ability to manage the risk of selected services that closely support inherently governmental functions, as well as government control over and accountability for decisions, we recommend that the Secretary of Homeland Security implement the following five actions: (1) establish strategic-level guidance for determining the appropriate mix of government and contractor employees to meet mission needs; (2) assess the risk of selected contractor services as part of the acquisition planning process, and modify existing acquisition guidance and training to address when to use and how to oversee those services in accordance with federal acquisition policy; (3) define contract requirements to clearly describe the roles, responsibilities, and limitations of selected contractor services as part of the acquisition planning process; (4) assess the program office staff and expertise necessary to provide sufficient oversight of selected contractor services; and (5) review contracts for selected services as part of the acquisition oversight program.

We provided a draft of this report to OMB and DHS for review and comment. In written comments, DHS generally concurred with our recommendations and provided information on what action would be taken to address them. The department’s comments are reprinted in appendix IV. OMB did not comment on the findings or conclusions of this report. DHS concurred with three of our recommendations and partially concurred with the other two.
Regarding the first recommendation, to establish strategic guidance for determining the appropriate mix of government and contractor employees, DHS agreed and stated that its Chief Human Capital and Chief Procurement Officers plan to initiate staffing studies and recommend the number and skill sets of federal employees required to successfully manage its long-term projects and programs. We agree that such action should provide the basis for developing a strategic approach to managing the risk of contracting for selected services. DHS partially concurred with our recommendation to assess the risk of selected contractor services as part of the acquisition planning process and to modify existing acquisition guidance and training accordingly. DHS agreed that its training for contracting officers and contracting officer’s technical representatives should include the guidance in OFPP Policy Letter 93-1. DHS stated the Chief Procurement Officer plans to emphasize this requirement to the component Heads of Contracting Activity and to department contracting personnel and to coordinate with the Defense Acquisition University to ensure that guidance is also included in its training. However, DHS stated that its Acquisition Planning Guide already provides for the assessment of risk. Our review of the acquisition planning guidance found that it addresses risk factors such as cost, schedule, and performance, but it does not address the specific risk of services that closely support the performance of inherently governmental functions. As we note in our report, these types of services carry additional risk that should be considered when making contracting decisions. Concerning the third recommendation, to define contract requirements to clearly describe roles, responsibilities, and limitations of selected contractor services, DHS concurred and anticipated that the risk of contracting for selected services will be appropriately addressed more often in the future. However, DHS did not specify related initiatives. Because developing well-defined requirements can be challenging but is essential for obtaining the right outcome, we believe this effort will require sustained attention from DHS. DHS also concurred with our fourth recommendation, to assess the program office staff and expertise necessary to provide sufficient oversight of selected contractor services. DHS stated that this process has already begun at TSA and that it plans to proceed on a larger-scale initiative as part of its overall human capital planning. With respect to our recommendation that DHS review selected services contracts as part of the acquisition oversight program, DHS agreed that these types of services require special assessment, but stated that the Chief Procurement Officer will direct a special investigation on selected issues as needed rather than as part of the routine acquisition oversight reviews. We did not intend that the formal oversight plan be modified. Rather, we recognize that the acquisition oversight program was designed with flexibility to address specific procurement issues as necessary. We leave it to the discretion of the Chief Procurement Officer to determine how to implement the recommendation to ensure proper oversight. As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution for 30 days from the report date. 
At that time, we will send copies of this report to the Secretary of Homeland Security, the Director of the Office of Management and Budget, and other interested congressional committees. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have questions about this report or need additional information, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff making key contributions to this report were Amelia Shachoy, Assistant Director; Katherine Trimble; Jennifer Dougherty; Cardell Johnson; Matthew Saradjian; David Schilling; Karen Sloan; Julia Kennon; Alison Martin; Noah Bleicher; and Kenneth Patton.

To describe the types of services the Department of Homeland Security (DHS) requested through these contracts, we compiled information from the Federal Procurement Data System-Next Generation (FPDS-NG) on procurement spending at DHS and its components for fiscal years 2005 and 2006. To supplement our review of information from FPDS-NG, we reviewed 117 statements of work and conducted more detailed reviews of nine cases from fiscal year 2005—the year for which the most complete data were available at the time we began our review. For the 117 statements of work, we used federal acquisition guidance on services that closely support the performance of inherently governmental functions as criteria to describe the types of services DHS requested. Within those services, we selected three broad categories for more detailed review—reorganization and planning activities, policy development, and acquisition support.

To identify potential risk and the extent to which DHS considered risk when deciding to use contracts for selected professional and management support services that closely support the performance of inherently governmental functions, and to assess DHS’s management and oversight of contracts for these types of services, we conducted a detailed review of nine case studies—three at each component. For each case study, we reviewed contract documentation, including available acquisition plans, oversight plans, and records, and interviewed procurement and program officials at the three components about the decision to use contractors and contractor oversight, including any processes and guidance used. We interviewed contractors for seven of the nine cases about their working relationship with the component offices, the work performed, and the oversight provided by the component. For the other two cases, we requested interviews, but the contractors were not available. We also spoke with the heads of contracting activity at the Office of Procurement Operations (OPO) and the Transportation Security Administration (TSA), the Chief of the Office of Procurement Policy at the Coast Guard, and staff at the Office of Management and Budget’s (OMB) Office of Federal Procurement Policy (OFPP).

To develop criteria for services that closely support the performance of inherently governmental functions, we reviewed Federal Acquisition Regulation (FAR) subpart 7.5 on inherently governmental functions and FAR section 37.114 on special acquisition requirements, and the Office of Management and Budget’s Office of Federal Procurement Policy Letter 93-1 on management oversight of service contracts.
To select services to review, a GAO contracting officer reviewed the FPDS-NG Product and Service Codes Manual and identified over 30 services considered to closely support the performance of inherently governmental functions across the following categories: research and development; special studies and analyses; professional, administrative, and management support services; and education and training. To confirm the selection, we then compared each of the services to federal acquisition guidance that describes inherently governmental functions and services approaching inherently governmental functions. On the basis of this review, we gathered and analyzed data from the FPDS-NG on DHS’s fiscal year 2005 obligations for 29 services. Sixteen of the 29 services fell into the professional, administrative, and management support services category. From this category, we selected the 4 services for which DHS obligated the most in fiscal year 2005—program management and support services, engineering and technical services, other professional services, and other management support services. We reviewed these criteria with DHS acquisition policy and oversight officials, focusing on the link between the 4 selected services and federal acquisition guidance. Finally, we selected the three DHS components, excluding the Federal Emergency Management Agency (FEMA), that had obligated the most for those services at the time we began our review—the Coast Guard, OPO, and TSA. To select contracts to review, we compiled data from FPDS-NG on all fiscal year 2005 contract actions as of the time we began our review for the 4 services at the three components. Using the brief contract description available through FPDS-NG, we used FAR guidance to identify services that closely support the performance of inherently governmental functions to select a total of 125 statements of work for the 4 services: 42 from Coast Guard, 43 from OPO, and 40 from TSA (see table 3). Of the 125 requested, we received 117 statements of work within the 11-week time period we allowed. In some cases, DHS was unable to locate files or FPDS-NG entries were unclear or incorrect. Using the more detailed description of services included in the 117 statements of work, we again used FAR guidance to identify services that appeared to closely support the performance of inherently governmental functions to select three contracts from each component on which to perform a total of nine case studies. The nine cases we reviewed in detail represented the 4 types of professional and management support services and ranged in value from $1.3 million to $42.4 million. Table 4 provides details on the case study selection process and the cases reviewed. We conducted our review between April 2006 and August 2007 in accordance with generally accepted government auditing standards. Federal Acquisition Regulation section 7.503 provides examples of inherently governmental functions and services or actions that are not inherently governmental, but may approach being inherently governmental functions based on the nature of the function, the manner in which the contractor performs the contract, or the manner in which the government administers contractor performance. These examples are listed in tables 5 and 6 below. GAO designated DHS as a high-risk organization in 2003 due to the serious implications for our national security that result from the management challenges and program risks associated with implementing and transforming the department from 22 agencies. 
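The service and component selection described above amounts to a simple filter-and-rank pass over FPDS-NG obligations data: first keep the four highest-obligation services within the professional, administrative, and management support category, then keep the three components (excluding FEMA) that obligated the most for those services. The sketch below illustrates that two-step logic; the records, category labels, and dollar amounts are hypothetical stand-ins, not actual FPDS-NG figures.

```python
from collections import defaultdict

# Hypothetical stand-ins for FPDS-NG rows:
# (service, category, component, FY 2005 obligations in dollars)
records = [
    ("Program management and support", "prof/admin/mgmt support", "OPO", 220_000_000),
    ("Engineering and technical", "prof/admin/mgmt support", "Coast Guard", 410_000_000),
    ("Other professional", "prof/admin/mgmt support", "TSA", 160_000_000),
    ("Other management support", "prof/admin/mgmt support", "TSA", 120_000_000),
    ("Special studies", "special studies/analyses", "FEMA", 95_000_000),
    ("Training services", "education and training", "Coast Guard", 40_000_000),
]

# Step 1: within the professional/administrative/management support category,
# keep the four services with the largest FY 2005 obligations.
by_service = defaultdict(int)
for service, category, component, dollars in records:
    if category == "prof/admin/mgmt support":
        by_service[service] += dollars
top_services = sorted(by_service, key=by_service.get, reverse=True)[:4]

# Step 2: among those services, keep the three components (excluding FEMA)
# that obligated the most.
by_component = defaultdict(int)
for service, category, component, dollars in records:
    if service in top_services and component != "FEMA":
        by_component[component] += dollars
top_components = sorted(by_component, key=by_component.get, reverse=True)[:3]

print("Selected services:  ", top_services)
print("Selected components:", top_components)
```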
In addition, the DHS Inspector General has identified major management challenges facing the department, which are updated annually as required by the Reports Consolidation Act of 2000. Acquisition and contract management are among the management challenges identified by the Inspector General. Other management challenges identified by the Inspector General include catastrophic disaster response and recovery, including FEMA activities and grants management; financial management; information technology management, including the National Asset Database used to coordinate infrastructure protection activity; border security; transportation security; and trade operations and security, mainly through the work of Customs and Border Protection.

The Inspector General provided oversight coverage of DHS and the identified management challenges during fiscal years 2005 and 2006 through audits, inspections, memos, management reports, and investigations. The Inspector General issued 106 reports during fiscal year 2005 and closed 639 investigations; in fiscal year 2006, the Inspector General issued 133 reports and closed 507 investigations. As a result, over the 2-year period the Inspector General reported more than $271.7 million in questioned costs, unsupported costs, and funds that could be put to better use, and over $157 million in recoveries, fines, and restitutions resulting from investigations.

On August 29, 2005, Hurricane Katrina hit the Gulf Coast states, causing catastrophic damage to the region, and by September 2005, Congress had passed legislation that provided approximately $63 billion for disaster relief, the bulk of which went to the Federal Emergency Management Agency. Consequently, the DHS Inspector General issued a significant number of reports that addressed FEMA operations and grantees (see fig. 4).

The DHS Inspector General increased the number of reports related to contract and acquisition management from 3 in fiscal year 2005 to 32 in fiscal year 2006 (see fig. 4). These reports ranged from audits of specific contracts to reviews of overall acquisition management at DHS. For example, the Inspector General reviewed individual contracts for disaster recovery from Hurricane Katrina, including debris removal, and also reviewed the weaknesses in procurement and program management operations throughout DHS. In addition to the DHS Inspector General’s reports, the Defense Contract Audit Agency increased the number of DHS contract audits from 83 reports to 121 reports over the same fiscal years.
In fiscal year 2005, the Department of Homeland Security (DHS) obligated $1.2 billion to procure four types of professional and management support services--program management and support, engineering and technical, other professional, and other management support. While contracting for such services can help DHS meet its needs, there is risk associated with contractors closely supporting inherently governmental functions--functions that should be performed only by government employees. This report (1) describes the contracted services, (2) identifies potential risk and the extent to which DHS considered risk when deciding to contract for these services, and (3) assesses DHS's approach to managing and overseeing these services. GAO analyzed 117 judgmentally selected statements of work and 9 cases in detail for contracts awarded in fiscal year 2005 by the Coast Guard, the Office of Procurement Operations (OPO), and the Transportation Security Administration (TSA).

More than half of the 117 statements of work that GAO reviewed provided for reorganization and planning activities, policy development, and acquisition support--services that closely support the performance of inherently governmental functions. Other such services supporting a broad range of programs and operations at the Coast Guard, OPO, and TSA included budget preparation, regulation development, and employee relations.

Decisions to contract for professional and management support services were driven by the need for staff and expertise to get programs and operations up and running. However, for the nine cases GAO reviewed, program officials did not assess the risk that government decisions may be influenced by, rather than independent from, contractor judgments. These cases included services that have the potential to increase this risk. For example, contractors directly supported DHS missions and performed, on an ongoing basis, work comparable to that of government employees. Most of the nine contracts also lacked detail or covered a wide range of services. Conditions such as these need to be carefully monitored to ensure the government does not lose control over and accountability for mission-related decisions. DHS has not explored ways to manage the risk of these contractor services, such as through total workforce deployment across the organization.

The level of oversight DHS provided did not always ensure accountability for decisions or the ability to judge whether the contractor was performing as required. Federal acquisition policy requires enhanced oversight of contracts for services that can affect government decision making, policy development, or program management. While contracting officers and program officials acknowledged that their professional and management support services contracts closely supported inherently governmental functions, they did not see a need for increased oversight. Insufficient oversight increases the potential for a loss of management control and reduces the ability to ensure intended outcomes are achieved.
The District of Columbia Jail and the Correctional Treatment Facility (CTF) house inmates who are awaiting trial or who have been sentenced for misdemeanors. The Jail was opened in 1976, and from 1985 to July 2002, a court order limited its population to 1,674 inmates. Since July 2002 the population has grown, and during March 2004, the facility had an average daily population of 2,357. In addition to serving as an overflow facility, the CTF houses pregnant inmates, inmates with disabilities who need medical services, inmates in witness protection, and inmates who need to be separated from the general inmate population. Opened in 1992, the CTF is operated by a private company, the Corrections Corporation of America (CCA), under a contract with the District of Columbia Department of Corrections (DoC). During March 2004, the CTF had an average daily population of 1,197.

In 1995, the U.S. District Court for the District of Columbia removed medical services at the Jail from DoC’s control, placing these services under the temporary supervision of a court-appointed Receiver. This removal resulted from the District of Columbia’s failure to address problems identified in two lawsuits brought against the Jail in 1971 and 1975, which alleged that DoC was failing to provide minimally adequate medical care for inmates. Before it terminated the receivership in 2000, the Court hired a national expert in correctional health care to conduct an independent quality review of the medical services provided to Jail inmates by the Center for Correctional Health and Policy Studies (CCHPS). DoC subsequently contracted directly with this expert to help develop a set of performance assessment instruments for reviewing CCHPS’s clinical services and monitoring activities and to conduct quarterly on-site reviews of CCHPS.

DoC has a constitutional obligation to ensure that medical care is provided to inmates in its custody, and DoC’s contract with CCHPS requires CCHPS to provide comprehensive medical services to all inmates assigned to the Jail and the CTF and to establish a quality improvement program to monitor the quality of the medical services it provides. In some areas, particularly the assessment of inmates’ health when they are admitted to the facilities, the contract lists specific services that CCHPS must provide, such as certain diagnostic tests. In other areas, such as services for inmates with chronic conditions, the requirement to provide care is less detailed. In addition to describing services that CCHPS is required to provide, the contract states that DoC can impose monetary damages on CCHPS if it does not meet 12 specific requirements. (See app. II for a description of the contract requirements that are linked to monetary damages.) Compliance with the requirements is to be determined through monitoring by DoC or its designee.

The contract with DoC also requires that CCHPS acquire and maintain accreditation for its medical services. The Jail’s medical services are accredited by the National Commission on Correctional Health Care (NCCHC), while the CTF is accredited by the American Correctional Association (ACA). NCCHC and ACA, both national not-for-profit organizations, offer voluntary accreditation processes for medical services provided in correctional facilities; relatively few jails nationwide are accredited by these organizations. NCCHC accredits only a correctional facility’s medical services, while ACA accredits all aspects of the correctional facility, including medical services.
Both organizations have developed detailed accreditation standards that include, for example, specific elements that are required in an inmate’s initial medical assessment and in a facility’s quality improvement program. The accreditation process for both organizations includes on-site inspections of the facility every 3 years and submission of an annual report certifying that the facility continues to be in compliance with the accreditation standards. During on-site inspections, inspectors interview staff, review documentation provided by the facility, and examine a sample of inmate medical records. NCCHC and ACA inspectors submit their findings to expert panels, which make the accreditation decisions.

One component of the quality improvement program required by both NCCHC and ACA is a grievance system that gives inmates an opportunity to question or complain about their care. Inmates at the Jail or the CTF who have concerns about medical services can complete a grievance form and submit it to the warden’s office in their facility. The warden’s staff record the grievance in their system and then forward it to CCHPS. CCHPS’s medical director and quality improvement coordinator review the grievance and work with the clinicians involved to determine if the inmate’s complaint is valid and, if so, how it should be addressed. If it is determined that an inmate needs to receive care, CCHPS schedules an appointment. After CCHPS has reviewed the grievance, it sends a report to the warden, who then provides a response to the inmate.

In June 2000, we testified before the House Committee on Government Reform, Subcommittee on the District of Columbia, about the provision of medical services at the Jail. We reported that the per inmate cost at the Jail was higher than at the two other jurisdictions reviewed, and that services and staffing levels also exceeded those of the other jurisdictions. We also found that there were no specific criteria for determining an acceptable level of medical service and staffing at a jail. Rather, the range of services was a function of many local factors, including the specific demands and constraints placed on the facility’s service delivery system.

As required by the contract, CCHPS provides a broad range of medical services to Jail and CTF inmates, and the types of services CCHPS provides at the Jail have not changed significantly over the life of the contract. In addition, CCHPS assists DoC in helping inmates obtain services beyond those included in CCHPS’s contract, such as emergency and specialty care that cannot be provided at the Jail or the CTF. CCHPS also assists DoC in its efforts to work with other District of Columbia agencies and community providers to link soon-to-be-released inmates in need of medical services with services in the community. As part of its contract with DoC, CCHPS has also developed a system to monitor the quality of the medical services it provides to inmates. A key component of this program is quarterly analyses of random samples of inmate medical records to measure how consistently CCHPS delivers required services to inmates.

As required by the contract, CCHPS provides a broad range of medical services to Jail and CTF inmates, including primary care services such as sick call and chronic care; mental health care; and specialty care, such as dental and orthopedic services. (See table 1 for a description of these services.)
At intake, all inmates receive a health assessment—referred to as an intake screening—that screens for physical and mental health conditions. The inmates receive a physical examination and are asked about current and past health problems, substance abuse, and medication use. In addition, they receive a chest x-ray and skin test to identify possible tuberculosis. As part of the mental health screening, inmates are asked a series of questions. If inmates respond positively to any of these questions, or if they are juveniles or in jail for the first time, they are referred for a comprehensive mental health assessment. Based on the findings of the intake screening, inmates in need of medical care may receive treatment in a chronic or specialty care clinic, receive therapy for mental health problems, or be placed in one of two specialized mental health units. According to CCHPS officials, in 2002 they conducted an average of 1,654 intake screenings each month. About 20 percent of these inmates were referred to a chronic care clinic, and about 34 percent were referred for further mental health assessment.

There have been no significant changes in the types of medical services provided by CCHPS since the start of its contract with DoC. However, there have been some minor changes, including modifications to on-site specialty clinics. For example, in 2001, the requirement for an oral surgery clinic was deleted from the contract, and more recently CCHPS combined the ophthalmology and optometry clinics. In addition, CCHPS began offering endocrinology and infectious disease clinics on-site—even though they are not required by the contract—to improve inmates’ access to these services and continuity of care. CCHPS officials had expected the consolidation of medical services at the Jail and the CTF to result in some service efficiencies, such as combining the on-site specialty clinics offered at both facilities; however, CCHPS and DoC officials told us it has not been feasible to move inmates easily between facilities because of security issues. CCHPS therefore continues to offer all on-site specialty clinics at both facilities.

When inmates need medical services that cannot be provided at the Jail or the CTF, CCHPS refers them to providers in the community. These off-site services, including emergency care and certain specialty services, are not part of the CCHPS contract; instead, DoC has an agreement with the District of Columbia Department of Health (DoH) to provide services to inmates through Greater Southeast Community Hospital. When Greater Southeast is not able to provide the needed services, it in turn refers the inmates to other members of the DC Healthcare Alliance and other community providers. DoC pays for all off-site services through an interagency agreement with DoH; in 2003 there were 4,169 off-site appointments for inmates.

Although DoC’s contract with CCHPS does not specify that CCHPS provide discharge planning services to inmates, NCCHC accreditation standards include discharge planning activities. Both CCHPS and DoC have made efforts to plan for the release of inmates with medical conditions and to link them to community-based medical services. For example, CCHPS’s policies require that inmates receive a 2-week supply of medications at the time of their release. In addition, CCHPS provides support to DoC’s collaboration with the District of Columbia Department of Mental Health (DMH) to help Jail inmates obtain access to community mental health services when they are released.
CCHPS supports DoC's and DoH's discharge planning efforts to link inmates who have certain chronic and communicable diseases, such as tuberculosis, to community-based medical services. In addition, through a joint program of DoH's HIV/AIDS Administration and DoC, Family and Medical Counseling Services, Inc. (FMCS), a community-based provider, offers HIV testing and links HIV-positive inmates to services in the community when they are released. CCHPS refers inmates requesting an HIV test to FMCS and provides FMCS with office space, computers, and access to inmates' electronic medical records in the CCHPS system. As part of its contract with DoC, CCHPS is responsible for monitoring the quality of the medical services it provides to Jail and CTF inmates, and CCHPS has established a quality improvement program to fulfill this responsibility. A key component of the program is a quarterly analysis of random samples of inmate medical records using standardized performance assessment instruments. These quarterly analyses provide CCHPS with quantitative data about its performance in certain areas. Each assessment instrument measures CCHPS's performance of a specific set of activities; these activities are generally more detailed than the requirements described in the contract. (See app. III for a summary description of the instruments.) Using the samples of medical records and other documentation to complete the performance assessment instruments, CCHPS clinicians determine how consistently CCHPS delivers required services to inmates. Currently, there are 23 performance assessment instruments, 20 of which measure medical services provided to inmates in various service areas. For example, the intake services instrument includes a measurement of the percentage of inmates who received a chest x-ray for tuberculosis within 24 hours of admission. The remaining three instruments measure the extent to which CCHPS has conducted other components of its quality improvement program, such as validating that clinical staff are licensed. In addition to these quarterly analyses of medical services, CCHPS's quality improvement program also includes other reviews, such as annual reviews of urgent care and radiological safety procedures, monthly reviews of inmate grievances and of any inmate deaths, and ongoing reviews of infection control activities. The program also requires CCHPS to conduct at least two in-depth studies a year, each of which focuses on a specific issue, such as a medical service problem that has been identified by the quarterly analyses. DoC has developed several mechanisms to oversee CCHPS's delivery of medical services to inmates and enforce CCHPS's compliance with the contract. For example, DoC's contract with CCHPS gives DoC the authority to impose monetary damages if CCHPS fails to meet any of 12 requirements specified in the contract, most of which relate to CCHPS's performance in providing key medical services. For most of these requirements, the contract authorizes DoC to impose the damages if CCHPS fails to deliver the required service to a minimum percentage of inmates—for example, if CCHPS does not conduct an intake screening within 24 hours for 95 percent of inmates. (See app. II for additional information on the contract requirements that are linked to monetary damages.) Some of the requirements relate to CCHPS's staff, including ensuring that staff have required licenses and credentials. In addition, the contract contains a requirement that CCHPS have an infection control program approved by DoC. DoC, or its designee, is responsible for determining CCHPS's compliance with these 12 contract requirements.
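The arithmetic behind these instrument-based measurements is straightforward, and a short sketch may make it concrete. The Python below shows how a random sample of record reviews could be turned into a compliance rate and checked against a contract minimum such as the 95 percent intake-screening threshold described above; the record fields and function names are hypothetical illustrations, not DoC's or CCHPS's actual tooling.

```python
# Illustrative sketch only: how a performance assessment instrument's
# sample of record reviews could yield a compliance rate, and how DoC
# could compare that rate with a contract minimum. The field and
# function names are hypothetical.
from dataclasses import dataclass

@dataclass
class RecordReview:
    inmate_id: str
    service_delivered: bool  # e.g., intake screening done within 24 hours

def compliance_rate(reviews):
    """Percentage of sampled records in which the required service was delivered."""
    if not reviews:
        raise ValueError("no records sampled")
    met = sum(1 for r in reviews if r.service_delivered)
    return 100.0 * met / len(reviews)

def meets_contract_minimum(reviews, minimum_pct=95.0):
    """True if the sampled compliance rate is at or above the contract minimum."""
    return compliance_rate(reviews) >= minimum_pct

# Hypothetical quarter: 57 of 60 sampled records met the 24-hour standard.
sample = [RecordReview(f"record-{i}", i >= 3) for i in range(60)]
print(f"{compliance_rate(sample):.1f}% compliant")  # 95.0% compliant
print("meets minimum" if meets_contract_minimum(sample) else "below minimum")
```

A sample-based rate like this is only an estimate of performance across all inmates, which is one reason formal procedures for collecting and interpreting such data matter when monetary damages hinge on the result.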
To further assist DoC in overseeing CCHPS's delivery of services, the contract also stipulates that CCHPS will submit quarterly and annual progress reports to DoC. These progress reports are to include a description of quality problems, such as those identified by CCHPS's quality improvement program or the independent reviewer, and actions taken to correct them. DoC also requires CCHPS to maintain accreditation of its services. In addition, DoC staff responsible for oversight of the contract are frequently on-site at the Jail and the CTF observing the contractor, and, as of May 2004, DoC had plans to begin jointly conducting the quarterly analyses of inmate medical records with CCHPS. Furthermore, DoC's independent reviewer conducts quarterly reviews of CCHPS's activities. Each review consists of two principal components. First, the independent reviewer checks the accuracy of CCHPS's internal use of the standardized performance instruments. To do this, he uses the same performance assessment instruments that CCHPS uses in its quality improvement program to examine a sample of the analyses CCHPS has completed, and assesses whether CCHPS accurately characterized the medical records studied. Second, in addition to validating CCHPS's analyses, the independent reviewer uses the performance instruments to independently assess the quality of CCHPS's services by analyzing a separate random sample of inmate medical records in selected service areas, such as mental health services. While CCHPS uses the performance assessment instruments as a quality improvement vehicle, the independent reviewer's use of these instruments contributes to his assessment of whether CCHPS is meeting its contractual obligations. However, the independent reviewer does not specifically evaluate CCHPS's compliance with the contract requirements associated with monetary damages. As part of his review, the independent reviewer also assesses other components of CCHPS's quality improvement program, visits the medical units at the Jail and the CTF, and interviews CCHPS staff. After conducting the review, the independent reviewer provides DoC with a written report describing his general findings, including service areas in which CCHPS excels or needs to improve. Since August 2000, the independent reviewer has conducted 14 quarterly on-site reviews of CCHPS. Most available evidence indicates that CCHPS has generally complied with the contract, but DoC has not exercised sufficient oversight to be assured that problems are not occurring or are quickly corrected. The independent reviewer has reported that CCHPS's services meet the contract's requirements for access to care and quality. In addition, CCHPS has generally met the contract requirement that it implement a quality improvement program. However, in a few areas, CCHPS has not always met the contract's requirements, such as submitting required quarterly and annual progress reports describing quality problems and actions taken to correct them. Although the independent reviewer provides important information about CCHPS's performance, limitations in DoC's oversight of CCHPS may hinder the agency's ability to be assured of CCHPS's compliance with the contract.
For example, DoC has not enforced the contract requirement that CCHPS provide it with quarterly and annual progress reports. Furthermore, although DoC has authority to impose monetary damages on CCHPS if it does not meet certain requirements included in the contract, DoC has not collected the data needed to impose these damages or developed formal procedures for determining whether CCHPS has met these requirements and for imposing damages if CCHPS has not met them. On the basis of his review, the independent reviewer has consistently reported that CCHPS's medical services meet the contract's requirements for access to care and quality. He has also reported that services meet the "required constitutional standards of care." In addition, he told us that, in his opinion, CCHPS is one of the best correctional health care providers in the country. According to the independent reviewer, some activities, such as documenting the administration of medication, have been performed consistently over the life of the contract. Other activities have improved over time. For example, in one report, the independent reviewer noted that CCHPS's chronic disease guidelines were outdated, but later reported that CCHPS had appropriately revised the guidelines. In addition, CCHPS generally meets the contract requirement that it implement a quality improvement program. CCHPS has used the performance assessment instruments each quarter to monitor its services, and the independent reviewer has concluded that CCHPS accurately uses these instruments to assess its medical services. For example, based on data from its quarterly analyses, CCHPS identified problems in inmates' access to dental care. As a result, CCHPS conducted a study to identify ways to improve access to this service and eventually established a system that gave higher priority to care for inmates with more serious dental problems. CCHPS's subsequent review found that access had improved. While CCHPS's medical services and monitoring efforts generally meet the requirements of the contract, in a few areas CCHPS has not always met requirements. For example, the contract requires that CCHPS provide timely follow-up services to inmates with abnormal chest x-ray results. Although CCHPS has recently improved its performance, the independent reviewer had repeatedly found that CCHPS did not always provide timely follow-up services to these inmates. The independent reviewer also recently determined that CCHPS is not performing reviews of inmate deaths. This is an NCCHC requirement, and CCHPS's quality improvement program specifies that CCHPS should conduct such reviews monthly. In addition, CCHPS has not regularly submitted the required quarterly and annual progress reports providing information on quality problems and its actions to correct them. CCHPS has never submitted quarterly reports and has submitted only one annual report. Furthermore, the annual progress report CCHPS did submit provided only limited information. For example, it did not discuss CCHPS's lack of timely follow-up on abnormal x-ray results, although the independent reviewer had repeatedly identified this as a problem. Inmates have expressed concerns about other medical services required by the contract. Our analysis of a sample of the 369 inmate grievances submitted from April 2003 through October 2003 found that many complaints related to inmates' ability to gain access to requested sick call and primary care services and to the timely distribution of medications.
For example, some inmates complained that they had submitted multiple requests to be seen during sick call and had not yet been seen. CCHPS's internal monitoring has also identified problems related to sick call services, such as inconsistent use of the protocols developed to guide inmate health assessments. In addition, advocacy groups with whom we spoke expressed concern about distribution of medications on weekends and to newly admitted inmates. Although the independent reviewer provides important information about CCHPS's services, DoC has other weaknesses in its oversight of CCHPS that reduce its ability to be assured that CCHPS is complying with the contract and that problems are not occurring. DoC has never used its authority to impose monetary damages on CCHPS for failing to meet certain contract requirements. This is in part because it lacks the necessary data and a formal procedure for determining whether CCHPS has met the requirements; it also lacks a procedure for imposing damages if they are warranted. To evaluate CCHPS's compliance with many of the requirements that are linked to monetary damages, DoC needs data that indicate the percentage of inmates for whom CCHPS provided the required service. The performance assessment instruments used by CCHPS and the independent reviewer, which measure many of the activities included in these contract requirements, are one potential source of such data. However, at present, DoC neither regularly collects data itself nor requires the independent reviewer or CCHPS to submit the data they collect through their quarterly analyses of services. DoC officials also were not able to provide any documents that articulated how, and how often, they would evaluate CCHPS's compliance with the contract requirements associated with monetary damages, and DoC has not provided CCHPS with information on the status of its compliance. Furthermore, DoC has not determined whether, if it found that CCHPS was not meeting a contract requirement, it would immediately impose damages or first give CCHPS an opportunity to correct the problem. In addition, DoC has generally not enforced the contract requirement that CCHPS submit quarterly and annual progress reports describing quality problems and actions taken to correct them. These reports would allow DoC to obtain information on how CCHPS is addressing compliance or other performance problems identified by CCHPS's own monitoring or the independent reviewer. For example, the independent reviewer has repeatedly reported that CCHPS did not consistently screen and treat female inmates for chlamydia and gonorrhea. In addition, while CCHPS usually responds to inmate grievances in a timely way, the independent reviewer has reported on several occasions that CCHPS does not analyze grievances thoroughly enough to identify systemic problems in its services. Enforcing the requirement that CCHPS submit regular progress reports would better enable DoC to ensure that CCHPS promptly corrects such problems. An area where DoC has been slow to carry out its oversight responsibility relates to the contract requirement for an infection control plan. To maintain its NCCHC accreditation, CCHPS must have an infection control plan, and the April 2003 modification of the contract required that CCHPS's plan be approved by DoC. Although CCHPS submitted an infection control plan to DoC for approval in August 2003, DoC did not complete its review and approve the plan until June 2004.
In addition to having gaps in its oversight of services provided by CCHPS, DoC is not providing systematic oversight to ensure that, when CCHPS refers inmates to off-site services, inmates receive those services promptly. DoC officials believe the closure of District of Columbia General Hospital in 2001 and the shift of off-site services to Greater Southeast Community Hospital have resulted in delays in obtaining off-site care for inmates, particularly in certain specialty areas, such as orthopedics and dermatology. The independent reviewer and CCHPS have also expressed concerns about access to off-site services. CCHPS, which is responsible for arranging and monitoring off-site appointments, documented earlier delays in obtaining these appointments, but at the time of our review, it no longer possessed this documentation. Despite its concerns, DoC has not systematically documented more recent delays in obtaining off-site appointments for inmates, is not able to provide any data on the nature or length of delays, and has no plans to study this issue. From 2000 to 2003, DoC's average daily cost of providing medical services to an inmate at the Jail decreased by almost one-third. This resulted from a decrease in the total cost of providing medical services to inmates despite an increase in the inmate population. DoC and CCHPS officials told us they controlled costs in various ways, including reducing personnel expenditures. In 2003, DoC consolidated the services provided to inmates in the Jail and the CTF under one CCHPS contract and introduced a daily per inmate pricing structure, known as per diem pricing. The total cost to provide medical services to inmates at the Jail and the CTF in 2003 was about $15.8 million, an average of $13.28 per inmate per day. From initiation of the CCHPS contract in 2000 to 2003, the average daily per inmate cost of medical services at the Jail decreased by almost one-third, from about $19 a day to about $13 a day. The average decrease resulted from a decline in the total cost of services, combined with a rise in the inmate population. During this period, the total cost of providing medical services at the Jail decreased from about $11.7 million to about $11.4 million, about 3 percent. (See fig. 1.) At the same time, the average daily population in the Jail increased by about 680 inmates, about 41 percent. (See fig. 2.) In fiscal year 1999, the last full year in which the Receiver directly provided medical services at the Jail, the total cost was about $12.6 million and the average per inmate cost was about $21 a day. As a result of the combination of decreased cost and increased inmate population, DoC's average daily cost of providing medical services to an inmate at the Jail fell by almost one-third from 2000 to 2003, the period during which CCHPS has provided services. (See fig. 3.) DoC and CCHPS officials told us that they were able to reduce the total cost of providing medical services at the Jail through various means. For example, in 2003, DoC officials stopped paying CCHPS a management fee. DoC also negotiated with CCHPS officials to reduce employee salaries and fringe benefits, and CCHPS made more efficient use of its staff. For example, CCHPS was able to eliminate unnecessary testing done at intake, such as conducting repeat chest x-rays for recently returned inmates, which allowed CCHPS to increase staff time available for providing other services.
In addition, CCHPS officials told us they have selectively replaced higher salaried staff with lower salaried staff; in one case they changed a vacated pharmacist position to a pharmacy technician position. CCHPS also controlled personnel expenditures by reducing the overall number of staff at the Jail, while still meeting NCCHC standards for physician staffing levels. When the contract began in March 2000, CCHPS had about 125 full-time equivalent (FTE) positions at the Jail, and there were about 18 Jail inmates for each clinical staff member. As of April 2003, CCHPS's FTEs at the Jail had decreased to about 114, and the number of inmates for each clinical staff member had risen to about 27. NCCHC requires jails to maintain one physician on-site for 3.5 hours a week for every 100 inmates, and as of April 2003, CCHPS exceeded this standard by having one physician on-site for about 4.3 hours a week for every 100 inmates. Until April 2003, DoC established required staffing levels for CCHPS as a part of its contract, but the contract now allows CCHPS, with DoC's approval, to adjust staffing levels in response to inmate population changes. In 2003, the total cost for medical services in the Jail and the CTF was about $15.8 million; over the course of that year, 17,431 inmates were admitted to both facilities. In the same year, DoC consolidated medical services for CTF inmates into the contract for services for Jail inmates. It also introduced a daily per inmate pricing structure—known as per diem pricing—to calculate the rates paid to CCHPS. This pricing structure uses a per diem rate schedule, which is a sliding scale of prices that declines slightly as the combined inmate population increases. The schedule starts at $14.75 per inmate when the inmate population is below 2,200 and incrementally falls to $13.00 per inmate when the population exceeds 3,200. For example, if the combined population on a particular day were 2,000 inmates, the per diem rate would be $14.75 and the total cost to DoC for that day would be $29,500. According to DoC officials, the per diem rate declines when the inmate population rises to reflect economies of scale. Over the course of 2003, the per diem rate charged to DoC for services at the Jail and the CTF averaged $13.28 per inmate. The per diem pricing structure has simplified DoC's contract administration by generally eliminating the need for a reconciliation process. Prior to April 2003, the contract required that DoC and CCHPS complete quarterly reconciliations to determine the difference between CCHPS's expected staff costs at the beginning of the contract year and CCHPS's actual staff costs during the year. These differences resulted primarily from inmate population changes. However, as DoC and CCHPS negotiated the final amount of each reconciliation, the process became increasingly lengthy and several unresolved reconciliations accumulated. Over the first 3 years of the contract, for example, DoC completed only 4 of the 12 scheduled reconciliations. When the per diem pricing structure was implemented in 2003, all incomplete reconciliations were resolved in a final reconciliation settlement.
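Because the schedule pairs a population-dependent rate with that day's population, the daily charge is a simple lookup and multiplication. The Python sketch below works through the pricing arithmetic described above; the report gives only the schedule's endpoints, so the intermediate tiers in the table are hypothetical placeholders, not the contract's actual figures.

```python
# Sketch of the per diem pricing arithmetic. Only the endpoints of the
# rate schedule appear in the report ($14.75 below 2,200 inmates, $13.00
# above 3,200); the intermediate tiers here are hypothetical.
RATE_TIERS = [          # (population ceiling, per inmate daily rate)
    (2200, 14.75),      # below 2,200 inmates (from the report)
    (2500, 14.25),      # hypothetical intermediate tier
    (2800, 13.75),      # hypothetical intermediate tier
    (3200, 13.25),      # hypothetical intermediate tier
]
TOP_RATE = 13.00        # population above 3,200 (from the report)

def per_diem_rate(population: int) -> float:
    """Look up the sliding-scale rate for a day's combined Jail/CTF population."""
    for ceiling, rate in RATE_TIERS:
        if population < ceiling:
            return rate
    return TOP_RATE

def daily_charge(population: int) -> float:
    """Total owed for one day: that day's rate times that day's population."""
    return per_diem_rate(population) * population

# The report's worked example: 2,000 inmates -> $14.75 -> $29,500 for the day.
print(f"${daily_charge(2000):,.2f}")  # $29,500.00
```

Summing such daily charges over a contract year and dividing by the year's total inmate-days yields an average rate like the $13.28 figure cited above, which is why the averaged rate falls between the schedule's endpoints.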
DoC has provided a broad range of medical services to inmates at the Jail and the CTF since the receivership ended in September 2000. CCHPS's medical services have generally met the contract's requirements for access to care and quality, and CCHPS has demonstrated a commitment to providing inmates with the services they need by adding on-site specialty clinics to improve access and continuity of care. CCHPS also regularly and accurately monitors its services to ensure that it is providing appropriate care. However, CCHPS has not always met all contract requirements for service delivery and quality improvement activities. Although DoC has taken an important step toward ensuring the quality of services that CCHPS provides to inmates by retaining the independent reviewer, it has not taken several other actions that would help it better oversee the care that inmates receive. For example, DoC has limited its ability to hold CCHPS accountable for meeting the contract requirements that are linked to monetary damages. For monetary damages to be a viable oversight and contract enforcement mechanism, DoC would need to obtain data that demonstrate whether CCHPS is providing required services to the minimum percentage of the inmate population stipulated by the contract. However, DoC has not collected these data. DoC would also need to develop formal procedures for assessing CCHPS's compliance with the requirements and for imposing monetary damages if they are warranted. Furthermore, DoC has not enforced the requirement that CCHPS regularly submit progress reports describing how it is correcting problems identified through performance monitoring, including any problems that may place CCHPS out of compliance with the contract. If CCHPS provided this information, DoC could ensure that CCHPS promptly took corrective action to respond to problems identified by the independent reviewer or CCHPS's own monitoring, such as CCHPS's failure to promptly follow up on abnormal chest x-ray results. Having the capacity to enforce the contract requirements linked with monetary damages and requiring CCHPS to submit regular progress reports would strengthen DoC's ability to ensure that CCHPS provides important medical services to inmates. To help ensure that CCHPS provides required medical services to inmates of the District of Columbia Jail and the CTF, we recommend that the Mayor require the Director of DoC to take the following two actions:

• Develop formal procedures—including collection of needed data—to regularly assess whether CCHPS's performance meets the contract requirements that are linked to monetary damages and to impose these damages when warranted.

• Ensure that CCHPS submits to DoC the required quarterly and annual progress reports, which should describe identified problems and the actions CCHPS has taken to correct them.

We provided a draft of this report to DoC for comment. In its response, DoC did not comment on our recommendations, but provided additional information about its contract with CCHPS and medical services for inmates of the Jail and the CTF. In addition, DoC elaborated on its oversight of medical services provided by CCHPS. (DoC's comments are reprinted in app. IV.) DoC emphasized in its comments that the independent reviewer acts at the request and on behalf of the agency. We noted in the draft report that DoC's hiring of the independent reviewer was an important step toward ensuring the quality of CCHPS's services and described the independent reviewer's role in DoC's oversight of CCHPS.
DoC expressed concern that the issues discussed in the independent reviewer’s reports are intended to identify opportunities for CCHPS to improve, but that the draft report portrayed them as problems or deficiencies. While some issues raised by the independent reviewer could be characterized as opportunities for service improvement, we found that others indicated performance shortfalls related to specific contract requirements. In its comments, DoC discussed our finding that CCHPS has not regularly submitted the quarterly and annual reports required by the contract; these reports are to provide DoC with information on problems identified by CCHPS’s performance monitoring or by the independent reviewer and on CCHPS’s corrective actions. DoC stated that instead of the quarterly reports, it relies on certain monthly reports and regular verbal communication. DoC’s comments describe two types of monthly reports, one providing various data on off-site services and the other relating to two performance measures reported to the Office of the Mayor. However, undocumented verbal communications and these narrowly focused monthly reports are not a substitute for the quarterly progress reports called for in the contract and do not enable DoC to ensure that CCHPS is addressing identified problems. DoC’s comments acknowledge that CCHPS has not submitted all required annual reports. We do not agree that the information provided in the December 2002 report on the reconciliation of CCHPS’s expected and actual costs, which DoC cites in its comments, provided DoC with the type of information required in the annual progress reports. For example, this report contains no information about how CCHPS planned to improve its performance in screening and treating female inmates for chlamydia and gonorrhea. DoC highlighted its role in reducing the cost of medical services provided to inmates by CCHPS. In the final report we provided additional information on DoC’s role. DoC also noted that the average daily cost of services decreased from about $19 to about $13, which we stated in our draft report, and that this will result in savings over the remaining life of the contract. However, while the average daily cost per inmate in 2003 was $13.32, under the current rate schedule, daily per inmate costs may range from $13.00 when the combined Jail and CTF population exceeds 3,200 to $14.75 when the inmate population is below 2,200. Therefore, costs over the remaining life of the contract will depend largely on the inmate population. In response to DoC’s comments, we replaced the term “financial penalties” with “monetary damages.” While the comments state that DoC has other remedies for contract nonperformance, we believe that the authority to impose monetary damages is also a useful means of ensuring CCHPS’s compliance with the contract. In its comments, DoC described changes in the District’s health care system that have affected the provision of off-site medical services for inmates. Because the focus of our report was on services provided by CCHPS through its contract with DoC, a detailed discussion of these developments was not within the scope of the report. DoC also stated that there was a past study on delays in obtaining off-site appointments for inmates and that there is no need to conduct an additional study. 
The draft report did not recommend that DoC conduct an additional study, but reported that DoC and the independent reviewer have identified problems with access to off-site services and that DoC has not collected data on delays. We incorporated other information provided by DoC in its comments on our draft report where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the DoC Director, interested congressional committees, and other parties. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7119. Another contact and key contributors are listed in appendix V. We examined the medical services provided by the Center for Correctional Health and Policy Studies, Inc. (CCHPS) to inmates at the Jail and the Correctional Treatment Facility (CTF), including CCHPS’s internal monitoring; the District of Columbia Department of Corrections’ (DoC) oversight of those services; CCHPS’s contract compliance; and the cost of services under the contract. To provide information on CCHPS’s and DoC’s activities, we reviewed documents and interviewed officials from those two organizations. DoC documents we reviewed included contracting documents such as the original request for proposals and subsequent modifications, reports of inmate population volume, and specialty clinic utilization statistics. In reviewing DoC’s activities, we assessed DoC’s internal controls related to the contract with CCHPS. CCHPS documents we reviewed included policies and procedures, staffing plans, annual progress reports, and quarterly performance analyses. We also interviewed the independent reviewer hired by DoC and analyzed the reviewer’s quarterly reports to examine CCHPS’s medical services and CCHPS’s quality improvement activities. In addition, we analyzed documents and interviewed officials from the National Commission on Correctional Health Care and the American Correctional Association to obtain information on their correctional health care accreditation standards, their accreditation review processes, and their findings on DoC facilities. We also reviewed our previous work on medical services at the Jail. We reviewed issues related to medical services provided to CTF inmates only since April 2003, when DoC expanded its contract with CCHPS to include medical services for inmates at that facility. To obtain information on inmate complaints about medical services the contract requires CCHPS to provide and on CCHPS’s responses to these complaints, we conducted an independent analysis of randomly selected samples of grievances submitted by inmates at the Jail and the CTF. Of the 201 grievances at the Jail and the 168 grievances at the CTF during the period April 1, 2003, through October 31, 2003, we randomly selected 75 grievances for each analysis, for a total sample size of 150. DoC was able to provide us with the detailed information needed for our analysis on 72 of the 75 grievances selected from the Jail and on 72 of the 75 grievances selected from the CTF. Grievances for which DoC could not provide the requested information were excluded from each analysis. 
For both the Jail and the CTF samples of inmate grievances, we analyzed the timeliness of CCHPS's response, the subject of the grievance, and the extent to which CCHPS's response addressed the principal areas of concern cited in the complaint. The final sample size of 144 grievances produced estimates about types of grievances and timeliness of responses with a margin of error of plus or minus 5.0 percent at the 95-percent confidence level. Although we focused principally on medical services provided by CCHPS under its contract with DoC, we also obtained information about inmate services that are not part of the CCHPS contract—such as off-site services—by reviewing documents and interviewing officials from CCHPS, DoC, and the District of Columbia Department of Health (DoH). Documents we reviewed included contracts between DoH and community providers and utilization data on off-site services provided to inmates. We also interviewed officials from the District of Columbia Department of Mental Health, a community health care provider, and groups providing legal services to inmates. To calculate the total annual and average per inmate costs of the medical services that CCHPS provided, we reviewed documents such as DoC's budget records, purchase order summaries, contract pricing modifications, and CCHPS invoices. We interviewed officials from the District of Columbia Office of Contracting and Procurement; DoC, including its Office of the Chief Financial Officer; and CCHPS. We also examined independently audited accounting data from the District of Columbia Office of Financial Operations and Systems. We determined that the medical services cost information we reviewed was reliable, based on documentation provided by the District of Columbia Office of Financial Operations and Systems stating that the source of the data was the System of Accounting and Reporting, the District of Columbia's official accounting records, which is subject to an independent audit each year. We made certain assumptions to define four comparable 12-month periods that approximated the DoC-CCHPS contract year. Although there are slight differences between the time periods defined for total costs and inmate population averages, the length of each period was 1 year. Total cost data for 2000, 2001, and 2002 are from March 12 of each year through March 11 of the following year, coinciding with the DoC-CCHPS contract year, while inmate population data for 2000, 2001, and 2002 are from April 1 of each year through March 31 of the following year, approximating the DoC-CCHPS contract year. Total cost and inmate population data for 2003 are from April 1, 2003, through March 31, 2004, approximating the DoC-CCHPS contract year. We calculated the average daily inmate population for each annual period by first calculating an average daily population for each of the 12 months within the period and then averaging the monthly averages. We applied an accrual methodology to calculate the total costs associated with each annual period. The DoC-CCHPS contract during the years 2000 through 2002 specified a fixed contract price at the beginning of each year, subject to reconciliations during the year. Reconciliations conducted during contract years often resulted in adjustments to DoC payments in a subsequent contract year. By applying an accrual method, we attributed reconciliation costs to the years from which they originated rather than the years in which they were paid.
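The population-averaging step just described is easy to misread, so a brief sketch may help. The Python below, using made-up daily counts rather than figures from the report, computes each month's average daily population and then averages the 12 monthly averages, as the methodology states.

```python
# Sketch of the stated averaging method: monthly average daily
# populations first, then the mean of those 12 monthly averages.
# The daily counts below are illustrative, not data from the report.
from statistics import mean

def average_daily_population(daily_counts_by_month):
    """Mean of the 12 monthly average daily populations."""
    if len(daily_counts_by_month) != 12:
        raise ValueError("expected one list of daily counts per month")
    monthly_averages = [mean(month) for month in daily_counts_by_month]
    return mean(monthly_averages)

# Hypothetical year: the population holds at 2,300 for six months,
# then at 2,500 for the remaining six.
year = [[2300] * 30] * 6 + [[2500] * 31] * 6
print(round(average_daily_population(year)))  # 2400
```

Because each month is weighted equally, this method can differ slightly from a straight average over all days of the year when months have different lengths, although over a full year the difference is small.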
We performed our work from August 2003 through June 2004 in accordance with generally accepted government auditing standards. The contract between DoC and CCHPS contains certain requirements that CCHPS must meet. If these requirements are not met, DoC has the authority to impose specified monetary damages on CCHPS. Table 2 summarizes the requirements linked with monetary damages. In 2000, DoC, CCHPS, and the independent reviewer hired by DoC to monitor CCHPS’s medical services developed performance assessment instruments to allow them to determine how consistently CCHPS delivered required medical services to inmates and whether it conducted activities included in its quality improvement program. Table 3 describes the measures included in the performance assessment instruments, as well as the samples measured and the sources of the samples. When reviewing services, the person conducting the assessment determines whether each bulleted measure has been met. In addition to the person named above, key contributors to this report were Emily Gamble Gardiner, Marc Feuerberg, Krister Friday, and Anne Montgomery.
Since the end of a court-ordered receivership overseeing medical services at the District of Columbia Jail in September 2000, the Department of Corrections (DoC) has contracted with the Center for Correctional Health and Policy Studies, Inc. (CCHPS) to provide inmate medical services. GAO was asked to provide information on (1) the medical services DoC contracted with CCHPS to provide, including CCHPS's monitoring of its services; (2) mechanisms DoC established to oversee CCHPS's services; (3) CCHPS's contract compliance and DoC's efforts to ensure compliance; and (4) the cost of medical services. To collect this information, GAO analyzed documents and interviewed officials from District agencies, CCHPS officials, and an independent reviewer hired by DoC to monitor medical services. DoC has contracted with CCHPS to provide a broad range of medical services to inmates at the District of Columbia Jail and the Correctional Treatment Facility (CTF)--an adjacent overflow facility. Services include health screenings at intake; primary care services, including care for chronic conditions; mental health care; and specialty care. In addition, CCHPS assists DoC in helping inmates obtain services not included in the contract, such as specialty or emergency services that cannot be offered on-site. As part of the contract, CCHPS also established a quality improvement program to monitor its services. A key component of the program is a quarterly analysis of random samples of inmate medical records to measure how consistently CCHPS delivers required services. DoC established several mechanisms to oversee CCHPS's delivery of medical services to inmates. For example, DoC retained an independent reviewer to monitor the services provided by CCHPS on a quarterly basis. In addition, the contract gives DoC authority to impose monetary damages on CCHPS if it fails to meet any of 12 requirements specified in the contract, most of which relate to providing key services to a minimum percentage of inmates. The contract also requires CCHPS to submit quarterly and annual progress reports describing quality problems identified by the independent reviewer or its own monitoring and actions taken to correct them. Although available evidence indicates that CCHPS has generally complied with the terms of its contract, DoC has not exercised sufficient oversight to provide assurance that problems are not occurring or are quickly corrected. The independent reviewer has consistently found that CCHPS's services meet the contract's overall requirements for access to care and quality, but has also reported that CCHPS has not always met certain requirements. For example, while CCHPS recently improved its performance in providing timely follow-up services to inmates with abnormal chest x-ray results, the independent reviewer had repeatedly found problems in this area. DoC has not taken actions that would allow it to be assured of CCHPS's compliance with contract requirements linked to monetary damages. The agency has not collected data or developed a formal procedure to determine whether CCHPS has met the requirements, and it lacks a procedure to impose damages if warranted. Also, DoC has not regularly enforced the contract requirement that CCHPS submit quarterly and annual progress reports describing quality problems and corrective actions, and CCHPS has often not submitted these reports. 
From 2000 to 2003, the average daily cost of providing medical services to a Jail inmate decreased by almost one-third, from about $19 to about $13 a day per inmate. In 2003, DoC consolidated the services provided to inmates in the Jail and the CTF under one contract with CCHPS. In that year, during which 17,431 inmates were admitted to the Jail and the CTF, the total cost of providing medical services at both facilities was about $15.8 million.
Our October 2009 report on climate change adaptation found no coordinated national approach to adaptation, but our May 2011 report on climate change funding cited indications that federal agencies were beginning to respond to climate change more systematically. About the same time as the issuance of our October 2009 report, Executive Order 13514 on Federal Leadership in Environmental, Energy, and Economic Performance called for federal agencies to participate actively in the Interagency Climate Change Adaptation Task Force. The task force, which began meeting in spring 2009, is co-chaired by the Council on Environmental Quality (CEQ), the National Oceanic and Atmospheric Administration (NOAA), and the Office of Science and Technology Policy (OSTP), and includes representatives from more than 20 federal agencies and executive branch offices. The task force was formed to develop federal recommendations for adapting to climate change impacts both domestically and internationally and to recommend key components to include in a national strategy. On October 14, 2010, the task force released its interagency report outlining recommendations to the President for how federal policies and programs can better prepare the United States to respond to the impacts of climate change. The report recommends that the federal government implement actions to expand and strengthen the nation's capacity to better understand, prepare for, and respond to climate change. These recommended actions include making adaptation a standard part of agency planning to ensure that resources are invested wisely and services and operations remain effective in a changing climate. According to CEQ officials, the task force will continue to meet as an interagency forum for discussing the federal government's adaptation approach and to support and monitor the implementation of recommended actions in the progress report. The task force is due to release another report in October 2011 that documents progress toward implementing its recommendations and provides additional recommendations for refining the federal approach to adaptation, as appropriate, according to CEQ officials. Individual agencies are also beginning to consider adaptation actions. For example, in May 2009, the Chief of Naval Operations created Task Force Climate Change to address the naval implications of a changing Arctic and global environment. The Task Force was created to make recommendations to Navy leadership regarding policy, investment, and action, and to lead public discussion. In addition, the U.S. Department of the Interior issued an order in September 2009 designed to address the impacts of climate change on the nation's water, land, and other natural and cultural resources. Among other things, the order requires each bureau and office in the department to consider and analyze potential climate change impacts when undertaking long-range planning exercises, setting priorities for scientific research and investigations, developing multi-year management plans, and making major decisions regarding potential use of resources. In another example, according to NOAA, its Regional Integrated Sciences and Assessments (RISA) program supports climate change research to meet the needs of decision makers and policy planners at the national, regional, and local levels. In October 2009, we reported that some state and local authorities were beginning to plan for and respond to climate change impacts.
We visited three U.S. sites in doing the work for that report—New York City; King County, Washington; and the state of Maryland—where state and local officials were taking such steps. We have not evaluated the progress of these initiatives since the issuance of our 2009 report.

• New York City: New York City's adaptation efforts stemmed from a growing recognition of the vulnerability of the city's infrastructure to natural disasters, such as the severe flooding in 2007 that led to widespread subway closures. At the time of our October 2009 report, New York City's adaptation efforts typically had been implemented as facilities were upgraded or as funding became available. For example, the city's Department of Environmental Protection (DEP), which manages water and wastewater infrastructure, had begun to address flood risks to its wastewater treatment facilities. These and other efforts are described in DEP's 2008 Climate Change Program Assessment and Action Plan. Many of New York City's wastewater treatment plants, such as Tallman Island, are vulnerable to sea level rise and flooding from storm surges because they are located in the floodplain next to the bodies of water into which they discharge. In response to this threat, DEP planned, in the course of scheduled renovations, to raise sensitive electrical equipment, such as pumps and motors, to higher levels to protect it from flood damage.

• King County, Washington: According to officials from the King County Department of Natural Resources and Parks (DNRP), the county took steps to adapt to climate change because its leadership was highly aware of climate impacts on the county. For example, in November 2006, the county experienced severe winter storms that caused a series of levees to crack. The levees had long needed repair, but the storm damage helped increase support for the establishment of a countywide flood control zone district, funded by a dedicated property tax. The flood control zone district planned to use the funds, in part, to upgrade flood protection facilities to increase the county's resilience to future flooding. In addition to more severe winter storms, the county expected that climate change would lead to sea level rise; reduced snowpack; and summertime extreme weather such as heat waves and drought, which can lead to power shortages because hydropower is an important source of power in the region. The University of Washington Climate Impacts Group, funded by NOAA's RISA program, has had a long-standing relationship with county officials and worked closely with them to provide regionally specific climate change data and modeling, such as a 2009 assessment of climate impacts in Washington, as well as decision-making tools.

• Maryland: Maryland officials took a number of steps to formalize their response to climate change effects. An executive order in 2007 established the Maryland Commission on Climate Change, which released the Maryland Climate Action Plan in 2008. As part of this effort, the Maryland Department of Natural Resources (DNR) chaired an Adaptation and Response Working Group, which issued a report on sea level rise and coastal storms. The 2008 Maryland Climate Action Plan calls for future adaptation strategy development to cover other sectors, such as agriculture and human health. Additionally, Maryland provided guidance to coastal counties to assist them with incorporating the effects of climate change into their planning documents.
For example, DNR funded guidance documents for three coastal counties—Dorchester, Somerset, and Worcester Counties—on how to address sea level rise and other coastal hazards in their local ordinances and planning efforts. In our prior work, we found that the challenges faced by federal, state, and local officials in their efforts to adapt to climate change fell into several categories:

• Focusing on immediate needs. Available attention and resources were focused on more immediate needs, making it difficult for adaptation efforts to compete for limited funds. For example, several federal, state, and local officials who responded to a questionnaire we prepared for our October 2009 report on adaptation noted how difficult it is to convince managers of the need to plan for long-term adaptation when they are responsible for more urgent concerns that have short decision-making time frames. One federal official explained that "it all comes down to resource prioritization. Election and budget cycles complicate long-term planning such as adaptation will require. Without clear top-down leadership setting this as a priority, projects with benefits beyond the budget cycle tend to get raided to pay current-year bills to deliver results in this political cycle."

• Insufficient site-specific data. Without sufficient site-specific data, such as local projections of expected changes, it is hard to predict the impacts of climate change and thus hard for officials to justify the current costs of adaptation efforts for potentially less certain future benefits. This is similar to what we found in past work on climate change on federal lands. Specifically, our August 2007 report demonstrated that land managers did not have sufficient site-specific information to plan for and manage the effects of climate change on the federal resources they oversee. In particular, the managers lacked computational models for local projections of expected changes. For example, at the time of our review, officials at the Florida Keys National Marine Sanctuary said that they did not have adequate modeling and scientific information to enable managers to predict the effects of climate change on a small scale, such as that occurring within the sanctuary. Without such modeling and information, most of the managers' options for dealing with climate change were limited to reacting to already-observed effects on their units, making it difficult to plan for future changes. Furthermore, these resource managers said that they generally lacked detailed inventories and monitoring systems to provide them with an adequate baseline understanding of the plant and animal species that existed on the resources they manage. Without such information, it is difficult to determine whether observed changes are within the normal range of variability.

• Lack of clear roles and responsibilities. Adaptation efforts are constrained by a lack of clear roles and responsibilities among federal, state, and local agencies. Of particular note, about 70 percent (124 of 178) of the federal, state, and local officials who responded to a questionnaire we prepared for our October 2009 report on adaptation rated the "lack of clear roles and responsibilities for addressing adaptation across all levels of government" as very or extremely challenging.
For example, according to one respondent, "there is a power struggle between agencies and levels of government…Everyone wants to take the lead rather than working together in a collaborative and cohesive way." These challenges make it harder for officials to justify the current costs of adaptation efforts for potentially less certain future benefits. A 2009 report by the National Research Council discusses how officials are struggling to make decisions based on future climate scenarios instead of past climate conditions. According to the report, requested by the Environmental Protection Agency and NOAA, usual practices and decision rules (for building bridges, implementing zoning rules, using private motor vehicles, and so on) assume a stationary climate—a continuation of past climate conditions, including similar patterns of variation and the same probabilities of extreme events. According to the National Research Council report, that assumption, which is fundamental to the ways people and organizations make their choices, is no longer valid; climate change will create a novel and dynamic decision environment. We reached similar conclusions in a March 2007 report that highlighted how historical information may no longer be a reliable guide for decision making. We reported on the Federal Emergency Management Agency's (FEMA) National Flood Insurance Program, which insures properties against flooding, and the U.S. Department of Agriculture's (USDA) Federal Crop Insurance Corporation, which insures crops against drought or other weather disasters. Among other things, the report contrasted the experience of private and public insurers. We found that many major private insurers were proactively incorporating some near-term elements of climate change into their risk management practices. In addition, other private insurers were approaching climate change at a strategic level by publishing reports outlining the potential industry-wide impacts and strategies to proactively address the issue. In contrast, we noted that the agencies responsible for the nation's two key federal insurance programs had done little to develop the kind of information needed to understand their programs' long-term exposure to climate change, for a variety of reasons. As a FEMA official explained, the National Flood Insurance Program is designed to assess and insure against current—not future—risks. Unlike the private sector, neither this program nor the Federal Crop Insurance Corporation had analyzed the potential impacts of an increase in the frequency or severity of weather-related events on their operations over the near or long term. The proactive view of private insurers in our 2007 report was echoed on March 17, 2009, by the National Association of Insurance Commissioners, which adopted a mandatory requirement that insurance companies disclose to regulators the financial risks they face from climate change, as well as actions the companies are taking to respond to those risks. We have not studied the progress of these specific programs in managing the nation's long-term exposure to climate change since the issuance of our 2007 report.
Based on information obtained from studies, visits to sites pursuing adaptation efforts, and responses to a Web-based questionnaire sent to federal, state, and local officials knowledgeable about adaptation, our October 2009 report identified three categories of potential federal actions for addressing challenges to adaptation efforts:

• First, training and education efforts could increase awareness among government officials and the public about the impacts of climate change and available adaptation strategies. A variety of programs are trying to accomplish this goal, such as the Chesapeake Bay National Estuarine Research Reserve (partially funded by NOAA), which provides education and training on climate change to the public and local officials in Maryland.

• Second, actions to provide and interpret site-specific information could help officials understand the impacts of climate change at a scale that would enable them to respond. About 80 percent of the respondents to our Web-based questionnaire rated the "development of state and local climate change impact and vulnerability assessments" as very or extremely useful.

• Third, Congress and federal agencies could encourage adaptation by clarifying roles and responsibilities. About 71 percent of the respondents to our Web-based questionnaire rated the development of a national adaptation strategy as very or extremely useful. Furthermore, officials we spoke with and officials who responded to our questionnaire said that a coordinated federal response would also demonstrate a federal commitment to adaptation.

Importantly, our October 2009 report recommended that the appropriate entities within the Executive Office of the President, such as CEQ, develop a national adaptation plan that includes setting priorities for federal, state, and local agencies. CEQ generally agreed with our recommendation. Some of our other recent climate change-related reports offer additional examples of the types of actions federal agencies and the Congress could take to assist states and communities in their efforts to adapt. Our August 2007 report, for example, recommended that certain agencies develop guidance advising managers on how to address the effects of climate change on the resources they manage. Furthermore, our May 2008 report on the economics of policy options to address climate change identified actions Congress and federal agencies could take, such as reforming insurance subsidy programs in areas vulnerable to hurricanes or flooding. Our May 2011 report on federal climate change funding found that (1) agencies do not consistently interpret methods for defining and reporting the funding of climate change activities, (2) key factors complicate efforts to align such funding with strategic priorities, and (3) options are available to better align federal funding with strategic priorities, including governmentwide strategic planning. Any effective federal climate change adaptation strategy will need to ensure that federal funds are properly tracked and that funding decisions are aligned with strategic priorities. Given the interdisciplinary nature of the issue, such alignment is a challenge as formidable as it is necessary to address. In our report, we identified three methods for defining and reporting climate change funding, foremost of which is guidance contained in OMB's Circular A-11. The circular directs agencies to report funding that meets certain criteria in three broad categories—research, technology, and international assistance.
According to OMB staff, Circular A-11 is the primary method for defining and reporting long-standing "cross-cuts" of funding for climate change activities. Interagency groups, such as USGCRP, have collaborated in the past with OMB to clarify the definitions in Circular A-11, according to comments from CEQ, OMB, and OSTP. Our work suggests that existing methods for defining and reporting climate change funding are not consistently interpreted and applied across the federal government. Specifically, for our May 2011 report, we sent a Web-based questionnaire to key federal officials involved in defining and reporting climate change funding, developing strategic priorities, or aligning funding with strategic priorities. Most of these respondents indicated that their agencies consistently applied methods for defining and reporting climate change funding. Far fewer respondents indicated that methods for defining and reporting climate change funding were applied consistently across the federal government. Some respondents, for example, noted that other agencies use their own interpretation of definitions, resulting in inconsistent accounting across the government. Respondents generally identified key reasons agencies may interpret and apply existing methods differently, including difficulty determining which programs are related to climate change. In comments on our May 2011 report, CEQ, OMB, and OSTP noted that consistency likely varies by method of reporting, with Circular A-11 being the most consistent and other methods being less so. In addition, our work identified two key factors that complicate efforts to align federal climate change funding with strategic priorities across the federal government. First, federal officials lack a shared understanding of priorities, partly due to the multiple, often inconsistent messages articulated in different sources, such as strategic plans. Our review of these sources found that there is not currently a consolidated set of strategic priorities that integrates climate change programs and activities across the federal government. As we stated in our May 2011 report, in the absence of clear, overarching priorities, federal officials are left with many different sources that present climate change priorities in a more fragmented way. The multiple sources for communicating priorities across the climate change enterprise may result in conflicting messages and confusion. The second key factor that complicates efforts to align federal funding with priorities is that existing mechanisms intended to do so are nonbinding, according to respondents, available literature, and stakeholders. For example, some respondents noted that the interagency policy process does not control agency budgets and that agencies with their own budget authority may pay little attention to federal strategic priorities. In other words, federal strategic priorities set through an interagency process may not be reflected in budget decisions for individual agencies. As OSTP officials acknowledged to us, "The major challenge is the need to connect climate science programs with broader inter- and intra-agency climate efforts." In comments on our report, OSTP stated that while significant progress is being made in linking the climate science-related efforts, individual agencies still want to advance initiatives that promote or serve their agency missions.
This, according to OSTP, yields a broader challenge of tying climate-related efforts (science, mitigation, and adaptation) together into a coherent governmentwide strategy.

Our May 2011 report identified several ways to better align federal climate change funding with strategic priorities, including: (1) options to improve the tracking and reporting of climate change funding, (2) options to enhance how strategic climate change priorities are set, (3) the establishment of formal coordination mechanisms, and (4) continuing efforts to link related climate change activities across the federal government. Specific options are discussed in detail in our May 2011 report and include a governmentwide strategic planning process that promotes a shared understanding among agencies of strategic priorities by articulating what they are expected to do within the overall federal response to climate change. Also discussed in detail is an integrated budget review process that better aligns these priorities with funding decisions through a more consistent method of reporting and reviewing climate change funding.

Federal entities are beginning to implement some of these options. For example, there has been some recent progress on linking related federal climate change programs, according to OSTP. Specifically, OSTP stated that the science portion of the CEQ, NOAA, and OSTP-led Climate Change Adaptation Task Force is being integrated within USGCRP. OSTP also stated that it is working to create an interagency body that will bring together agencies that provide climate services to allow for better links between climate services and other federal climate-related activities.

To further improve the coordination and effectiveness of federal climate change programs and activities, we recommended in our May 2011 report that the appropriate entities within the Executive Office of the President, in consultation with Congress, clearly establish federal strategic climate change priorities and assess the effectiveness of current practices for defining and reporting related funding.

Chairman Durbin, Ranking Member Moran, and Members of the Subcommittee, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have.

For further information about this testimony, please contact David Trimble at (202) 512-3841 or trimbled@gao.gov. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Steve Elstein, Cindy Gilbert, Ben Shouse, Jeanette Soares, Kiki Theodoropoulos, and J. Dean Thompson also made key contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A 2009 assessment by the United States Global Change Research Program (USGCRP) found that many types of extreme weather events, such as heat waves and regional droughts, have become more frequent and intense during the past 40 to 50 years. According to the assessment, changes in extreme weather and climate events will affect many aspects of society and the natural environment, such as infrastructure. In addition, the Department of Defense found that climate change may act as an accelerant of instability or conflict, placing a burden to respond on militaries around the world. According to the National Academies, USGCRP, and others, greenhouse gases already in the atmosphere will continue altering the climate system into the future regardless of emissions control efforts. Therefore, adaptation--defined as adjustments to natural or human systems in response to actual or expected climate change--is an important part of the response to climate change.

This testimony addresses (1) the actions federal, state, and local authorities are taking to adapt to climate change; (2) the challenges that federal, state, and local officials face in their efforts to adapt and actions federal agencies could take to help address these challenges; and (3) the extent to which federal funding for adaptation and other climate change activities is consistently tracked and reported and aligned with strategic priorities. The information in this testimony is based on prior work, largely on GAO's recent reports on climate change adaptation and federal climate change funding.

Federal, state, and local authorities are beginning to take steps to adapt to climate change. Federal agencies are beginning to respond to climate change systematically through an Interagency Climate Change Adaptation Task Force formed to recommend key components for inclusion in a national adaptation strategy. Individual agencies are also beginning to consider adaptation actions. For example, in May 2009, the Chief of Naval Operations created Task Force Climate Change to address the naval implications of a changing Arctic and global environment. Some state and local government authorities were beginning to plan for and respond to climate change impacts, GAO reported in 2009. For example, the state of Maryland had a strategy for reducing vulnerability to climate change, which focused on protecting habitat and infrastructure from future risks associated with sea level rise and coastal storms. In another example, King County, Washington, established a countywide flood control zone district to upgrade flood protection facilities and increase the county's resilience to future flooding, among other things.

Federal, state, and local officials face numerous challenges in their efforts to adapt to climate change, and further federal action could help them make more informed decisions. These challenges include a focus of available attention and resources on more immediate needs and insufficient site-specific data--such as local projections of expected climate changes. The lack of such data makes it hard to understand the impacts of climate change and thus hard for officials to justify the cost of adaptation efforts, since future benefits are potentially less certain than current costs.
GAO's October 2009 report identified potential federal actions for improving adaptation efforts, including actions to provide and interpret site-specific information, which could help officials understand the impacts of climate change at a scale that would enable them to respond. In a May 2008 report on the economics of policy options to address climate change, GAO identified actions Congress and federal agencies could take, such as reforming insurance subsidy programs in areas vulnerable to hurricanes or flooding.

Funding for adaptation and other federal climate change activities could be better tracked, reported, and aligned with strategic priorities. GAO's report on federal climate change funding suggests that methods for defining and reporting such funding are not consistently interpreted and applied across the federal government. GAO also identified two key factors that complicate efforts to align funding with priorities. First, officials across a broad range of federal agencies lack a shared understanding of priorities, partly due to the multiple, often inconsistent messages articulated in different policy documents, such as strategic plans. Second, existing mechanisms intended to align funding with governmentwide priorities are nonbinding and limited when in conflict with agencies' own priorities.

Federal officials who responded to a Web-based questionnaire, available literature, and stakeholders involved in climate change funding identified several ways to better align federal climate change funding with strategic priorities. These include a governmentwide strategic planning process that promotes a shared understanding among agencies of strategic priorities by articulating what they are expected to do within the overall federal response to climate change.
A domestic bioterrorist attack is considered to be a low-probability event, in part because of the various difficulties involved in successfully delivering biological agents to achieve large-scale casualties. However, a number of cases involving biological agents, including at least one completed bioterrorist act and numerous threats and hoaxes, have occurred domestically. In 1984, a group intentionally contaminated salad bars in restaurants in Oregon with salmonella bacteria. Although no one died, 751 people were diagnosed with foodborne illness. Some experts predict that more domestic bioterrorist attacks are likely to occur.

The burden of responding to such an attack would fall initially on personnel in state and local emergency response agencies. These “first responders” include firefighters, emergency medical service personnel, law enforcement officers, public health officials, health care workers (including doctors, nurses, and other medical professionals), and public works personnel. If the emergency were to require federal disaster assistance, federal departments and agencies would respond according to responsibilities outlined in the Federal Response Plan. Several groups, including the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction (known as the Gilmore Panel), have assessed the capabilities at the federal, state, and local levels to respond to a domestic terrorist incident involving a weapon of mass destruction (WMD), that is, a chemical, biological, radiological, or nuclear agent or weapon.

While many aspects of an effective response to bioterrorism are the same as those for any disaster, there are some unique features. For example, if a biological agent is released covertly, it may not be recognized for a week or more because symptoms may not appear for several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These differences require a type of response that is unique to bioterrorism, including infectious disease surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics to large segments of the population to prevent the spread of an infectious disease. However, some aspects of an effective response to bioterrorism are also important in responding to any type of large-scale disaster, such as providing emergency medical services, continuing health care services delivery, and managing mass fatalities.

Federal spending on domestic preparedness for terrorist attacks involving WMDs has risen 310 percent since fiscal year 1998, to approximately $1.7 billion in fiscal year 2001, and may increase significantly after the events of September 11, 2001. However, only a portion of these funds were used to conduct a variety of activities related to research on and preparedness for the public health and medical consequences of a bioterrorist attack. We cannot measure the total investment in such activities because departments and agencies provided funding information in various forms—as appropriations, obligations, or expenditures. Because the funding information provided is not equivalent, we summarized funding by department or agency, but not across the federal government (see apps. I and II). Reported funding generally shows increases from fiscal year 1998 to fiscal year 2001.
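To put the 310 percent figure in perspective, a back-of-the-envelope calculation (our inference from the figures above, not an amount reported by the departments and agencies) implies a fiscal year 1998 baseline of roughly

\[
\text{FY 1998 spending} \approx \frac{\$1.7\ \text{billion}}{1 + 3.10} \approx \$0.41\ \text{billion},
\]

or about $410 million, assuming that "risen 310 percent" means spending grew by 310 percent of its fiscal year 1998 level rather than reaching 310 percent of that level.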
Several agencies received little or no funding in fiscal year 1998. For example, within the Department of Health and Human Services (HHS), the Centers for Disease Control and Prevention’s (CDC) Bioterrorism Preparedness and Response Program was established and first received funding in fiscal year 1999 (see app. I and app. II). Its funding has increased from approximately $121 million at that time to approximately $194 million in fiscal year 2001.

Research is currently being done to enable the rapid identification of biological agents in a variety of settings; to develop new or improved vaccines, antibiotics, and antivirals to improve treatment and vaccination for infectious diseases caused by biological agents; and to develop and test emergency response equipment such as respiratory and other personal protective equipment. Appendix I provides information on the total reported funding for all the departments and agencies carrying out research, along with examples of this research.

The Department of Agriculture (USDA), Department of Defense (DOD), Department of Energy, HHS, Department of Justice (DOJ), Department of the Treasury, and the Environmental Protection Agency (EPA) have all sponsored or conducted projects to improve the detection and characterization of biological agents in a variety of different settings, from water to clinical samples (such as blood). For example, EPA is sponsoring research to improve its ability to detect biological agents in the water supply. Some of these projects, such as those conducted or sponsored by DOD and DOJ, are not primarily focused on the public health and medical consequences of a bioterrorist attack against the civilian population, but could eventually benefit research for those purposes.

Departments and agencies are also conducting or sponsoring studies to improve treatment and vaccination for diseases caused by biological agents. For example, HHS’ projects include basic research sponsored by the National Institutes of Health to develop drugs and diagnostics and applied research sponsored by the Agency for Healthcare Research and Quality to improve health care delivery systems by studying the use of information systems and decision support systems to enhance preparedness for the delivery of medical care in an emergency. In addition, several agencies, including the Department of Commerce’s National Institute of Standards and Technology and DOJ’s National Institute of Justice, are conducting research that focuses on developing performance standards and methods for testing the performance of emergency response equipment, such as respirators and personal protective equipment.

Federal departments’ and agencies’ preparedness efforts have included efforts to increase federal, state, and local response capabilities, develop response teams of medical professionals, increase availability of medical treatments, participate in and sponsor terrorism response exercises, plan to aid victims, and provide support during special events such as presidential inaugurations, major political party conventions, and the Super Bowl. Appendix II contains information on total reported funding for all the departments and agencies with bioterrorism preparedness activities, along with examples of these activities.

Several federal departments and agencies, such as the Federal Emergency Management Agency (FEMA) and CDC, have programs to increase the ability of state and local authorities to successfully respond to an emergency, including a bioterrorist attack.
These departments and agencies contribute to state and local jurisdictions by helping them pay for equipment and develop emergency response plans, providing technical assistance, increasing communications capabilities, and conducting training courses. Federal departments and agencies have also been increasing their own capacity to identify and deal with a bioterrorist incident. For example, CDC, USDA, and the Food and Drug Administration (FDA) are improving surveillance methods for detecting disease outbreaks in humans and animals. They have also established laboratory response networks to maintain state-of-the-art capabilities for biological agent identification and the characterization of human clinical samples.

Some federal departments and agencies have developed teams to directly respond to terrorist events and other emergencies. For example, HHS’ Office of Emergency Preparedness (OEP) created Disaster Medical Assistance Teams to provide medical treatment and assistance in the event of an emergency. Four of these teams, known as National Medical Response Teams, are specially trained and equipped to provide medical care to victims of WMD events, such as bioterrorist attacks.

Several agencies are involved in increasing the availability of medical supplies that could be used in an emergency, including a bioterrorist attack. CDC’s National Pharmaceutical Stockpile contains pharmaceuticals, antidotes, and medical supplies that can be delivered anywhere in the United States within 12 hours of the decision to deploy. The stockpile was deployed for the first time on September 11, 2001, in response to the terrorist attacks on New York City.

Federally initiated bioterrorism response exercises have been conducted across the country. For example, in May 2000, many departments and agencies took part in the Top Officials 2000 exercise (TOPOFF 2000) in Denver, Colorado, which featured the simulated release of a biological agent. Participants included local fire departments, police, hospitals, the Colorado Department of Public Health and the Environment, the Colorado Office of Emergency Management, the Colorado National Guard, the American Red Cross, the Salvation Army, HHS, DOD, FEMA, the Federal Bureau of Investigation (FBI), and EPA.

Several agencies also provide assistance to victims of terrorism. FEMA can provide supplemental funds to state and local mental health agencies for crisis counseling to eligible survivors of presidentially declared emergencies. In the aftermath of the recent terrorist attacks, HHS released $1 million in funding to New York State to support mental health services and strategic planning for comprehensive and long-term support to address the mental health needs of the community. DOJ’s Office of Justice Programs (OJP) also manages a program that provides funds for victims of terrorist attacks that can be used to provide a variety of services, including mental health treatment and financial assistance to attend related criminal proceedings.

Federal departments and agencies also provide support at special events to improve response in case of an emergency. For example, CDC has deployed a system to provide increased surveillance and epidemiological capacity before, during, and after special events. Besides improving emergency response at the events, participation by departments and agencies gives them valuable experience working together to develop and practice plans to combat terrorism.
Federal departments and agencies are using a variety of interagency plans, work groups, and agreements to coordinate their activities to combat terrorism. However, we found evidence that coordination remains fragmented. For example, several different agencies are responsible for various coordination functions, which limits accountability and hinders unity of effort; several key agencies have not been included in bioterrorism-related policy and response planning; and the programs that agencies have developed to provide assistance to state and local governments are similar and potentially duplicative. The President recently took steps to improve oversight and coordination, including the creation of the Office of Homeland Security.

Over 40 federal departments and agencies have some role in combating terrorism, and coordinating their activities is a significant challenge. We identified over 20 departments and agencies as having a role in preparing for or responding to the public health and medical consequences of a bioterrorist attack. Appendix III, which is based on the framework given in the Terrorism Incident Annex of the Federal Response Plan, shows a sample of the coordination efforts by federal departments and agencies with responsibilities for the public health and medical consequences of a bioterrorist attack, as they existed prior to the recent creation of the Office of Homeland Security. This figure illustrates the complex relationships among the many federal departments and agencies involved.

Departments and agencies use several approaches to coordinate their activities on terrorism, including interagency response plans, work groups, and formal agreements. Interagency plans for responding to a terrorist incident help outline agency responsibilities and identify resources that could be used during a response. For example, the Federal Response Plan provides a broad framework for coordinating the delivery of federal disaster assistance to state and local governments when an emergency overwhelms their ability to respond effectively. The Federal Response Plan also designates primary and supporting federal agencies for a variety of emergency support operations. For example, HHS is the primary agency for coordinating federal assistance in response to public health and medical care needs in an emergency. HHS could receive support from other agencies and organizations, such as DOD, USDA, and FEMA, to assist state and local jurisdictions.

Interagency work groups are being used to minimize duplication of funding and effort in federal activities to combat terrorism. For example, the Technical Support Working Group is chartered to coordinate interagency research and development requirements across the federal government in order to prevent duplication of effort between agencies. The Technical Support Working Group, among other projects, helped to identify research needs and fund a project to detect biological agents in food that can be used by both DOD and USDA.

Formal agreements between departments and agencies are being used to share resources and knowledge. For example, CDC contracts with the Department of Veterans Affairs (VA) to purchase drugs and medical supplies for the National Pharmaceutical Stockpile because of VA’s purchasing power and ability to negotiate large discounts.

Overall coordination of federal programs to combat terrorism is fragmented. For example, several agencies have coordination functions, including DOJ, the FBI, FEMA, and the Office of Management and Budget.
Officials from a number of the agencies that combat terrorism told us that the coordination roles of these various agencies are not always clear and sometimes overlap, leading to a fragmented approach. We have found that the overall coordination of federal research and development efforts to combat terrorism is still limited by several factors, including the compartmentalization or security classification of some research efforts. The Gilmore Panel also concluded that the current coordination structure does not provide for the requisite authority or accountability to impose the discipline necessary among the federal agencies involved.

The multiplicity of federal assistance programs requires focus and attention to minimize redundancy of effort. Table 1 shows some of the federal programs providing assistance to state and local governments for emergency planning that would be relevant to responding to a bioterrorist attack. While the programs vary somewhat in their target audiences, the potential redundancy of these federal efforts highlights the need for scrutiny. In our report on combating terrorism, issued on September 20, 2001, we recommended that the President, working closely with the Congress, consolidate some of the activities of DOJ’s OJP under FEMA.

We have also recommended that the federal government conduct multidisciplinary and analytically sound threat and risk assessments to define and prioritize requirements and properly focus programs and investments in combating terrorism. Such assessments would be useful in addressing the fragmentation that is evident in the different threat lists of biological agents developed by federal departments and agencies. Understanding which biological agents are considered most likely to be used in an act of domestic terrorism is necessary to focus the investment in new technologies, equipment, training, and planning.

Several different agencies have or are in the process of developing biological agent threat lists, which differ based on the agencies’ focus. For example, CDC collaborated with law enforcement, intelligence, and defense agencies to develop a critical agent list that focuses on the biological agents that would have the greatest impact on public health. The FBI, the National Institute of Justice, and the Technical Support Working Group are completing a report that lists biological agents that may be more likely to be used by a terrorist group working in the United States that is not sponsored by a foreign government. In addition, an official at USDA’s Animal and Plant Health Inspection Service told us that it uses two lists of agents of concern for a potential bioterrorist attack. These lists of agents, only some of which are capable of making both animals and humans sick, were developed through an international process. According to agency officials, separate threat lists are appropriate because of the different focuses of these agencies. In our view, the existence of competing lists makes the assignment of priorities difficult for state and local officials.

Fragmentation is also apparent in the composition of groups of federal agencies involved in bioterrorism-related planning and policy.
Officials at the Department of Transportation (DOT) told us that even though the nation’s transportation centers account for a significant percentage of the nation’s potential terrorist targets, the department was not part of the founding group of agencies that worked on bioterrorism issues and has not been included in bioterrorism response plans. DOT officials also told us that the department is supposed to deliver supplies for FEMA under the Federal Response Plan, but it was not brought into the planning early enough to understand the extent of its responsibilities in the transportation process. The department learned what its responsibilities would be during the TOPOFF 2000 exercise, which simulated a release of a biological agent.

In May 2001, the President asked the Vice President to oversee the development of a coordinated national effort dealing with WMDs. At the same time, the President asked the Director of FEMA to establish an Office of National Preparedness to implement the results of the Vice President’s effort that relate to programs within federal agencies that address consequence management resulting from the use of WMDs. The purpose of this effort is to better focus policies and ensure that programs and activities are fully coordinated in support of building the needed preparedness and response capabilities. In addition, on September 20, 2001, the President announced the creation of the Office of Homeland Security to lead, oversee, and coordinate a comprehensive national strategy to protect the country from terrorism and respond to any attacks that may occur. These actions represent potentially significant steps toward improved coordination of federal activities. Our recent report highlighted a number of important characteristics and responsibilities necessary for a single focal point, such as the proposed Office of Homeland Security, to improve coordination and accountability.

Nonprofit research organizations, congressionally chartered advisory panels, government documents, and articles in peer-reviewed literature have identified concerns about the preparedness of states and local areas to respond to a bioterrorist attack. These concerns include insufficient state and local planning for response to terrorist events, a lack of hospital participation in training on terrorism and emergency response planning, questions regarding the timely availability of medical teams and resources in an emergency, and inadequacies in the public health infrastructure. In our view, there are weaknesses in three key areas of the public health infrastructure: training of health care providers, communication among responsible parties, and capacity of laboratories and hospitals, including the ability to treat mass casualties.

Questions exist regarding how effectively federal programs have prepared state and local governments to respond to terrorism. All 50 states and approximately 255 local jurisdictions have received or are scheduled to receive at least some federal assistance, including training and equipment grants, to help them prepare for a terrorist WMD incident. In 1997, FEMA identified planning and equipment for response to nuclear, biological, and chemical incidents as areas in need of significant improvement at the state level. However, an October 2000 research report concluded that even those cities receiving federal aid are still not adequately prepared to respond to a bioterrorist attack. Inadequate training and planning for bioterrorism response by hospitals is a major problem.
The Gilmore Panel concluded that the level of expertise in recognizing and dealing with a terrorist attack involving a biological or chemical agent is problematic in many hospitals. A recent research report concluded that hospitals need to improve their preparedness for mass casualty incidents. Local officials told us that it has been difficult to get hospitals and medical personnel to participate in local training, planning, and exercises to improve their preparedness.

Local officials are also concerned about whether the federal government could quickly deliver enough medical teams and resources to help after a biological attack. Agency officials say that federal response teams, such as Disaster Medical Assistance Teams, could be on site within 12 to 24 hours. However, local officials who have deployed with such teams say that the federal assistance probably would not arrive for 24 to 72 hours. Local officials also told us that they were concerned about the time and resources required to prepare and distribute drugs from the National Pharmaceutical Stockpile during an emergency. Partially in response to these concerns, CDC has developed training for state and local officials in using the stockpile and will deploy a small staff with the supplies to assist the local jurisdiction with distribution.

Components of the nation’s public health system are also not well prepared to detect or respond to a bioterrorist attack. In particular, weaknesses exist in the key areas of training, communication, and hospital and laboratory capacity. It has been reported that physicians and nurses in emergency rooms and private offices, who will most likely be the first health care workers to see patients following a bioterrorist attack, lack the needed training to ensure their ability to make observations of unusual symptoms and patterns. Most physicians and nurses have never seen cases of certain diseases, such as smallpox or plague, and some biological agents initially produce symptoms that can be easily confused with influenza or other, less virulent illnesses, leading to a delay in diagnosis or identification. Medical laboratory personnel require training because they also lack experience in identifying biological agents such as anthrax.

Because it could take days to weeks to identify the pathogen used in a biological attack, good channels of communication among the parties involved in the response are essential to ensure that the response proceeds as rapidly as possible. Physicians will need to report their observations to the infectious disease surveillance system. Once the disease outbreak has been recognized, local health departments will need to collaborate closely with personnel across a variety of agencies to bring in the needed expertise and resources. They will need to obtain the information necessary to conduct epidemiological investigations to establish the likely site and time of exposure, the size and location of the exposed population, and the prospects for secondary transmission. However, past experiences with infectious disease response have revealed a lack of sufficient and secure channels for sharing information. Our report last year on the initial West Nile virus outbreak in New York City found that as the public health investigation grew, lines of communication were often unclear, and efforts to keep everyone informed were awkward, such as conference calls that lasted for hours and involved dozens of people.

Adequate laboratory and hospital capacity is also a concern.
Reductions in public health laboratory staffing and training have affected the ability of state and local authorities to identify biological agents. Even the initial West Nile virus outbreak in 1999, which was relatively small and occurred in an area with one of the nation’s largest local public health agencies, taxed the federal, state, and local laboratory resources. Both the New York State and the CDC laboratories were inundated with requests for tests, and the CDC laboratory handled the bulk of the testing because of the limited capacity at the New York laboratories. Officials indicated that the CDC laboratory would have been unable to respond to another outbreak, had one occurred at the same time. In fiscal year 2000, CDC awarded approximately $11 million to 48 states and four major urban health departments to improve and upgrade their surveillance and epidemiological capabilities.

With regard to hospitals, several federal and local officials reported that there is little excess capacity in the health care system in most communities for accepting and treating mass casualty patients. Research reports have concluded that the patient load of a regular influenza season in the late 1990s overtaxed primary care facilities and that emergency rooms in major metropolitan areas are routinely filled and unable to accept patients in need of urgent care.

We found that federal departments and agencies are participating in a variety of research and preparedness activities that are important steps in improving our readiness. Although federal departments and agencies have engaged in a number of efforts to coordinate these activities on a formal and informal basis, we found that coordination between departments and agencies is fragmented. In addition, we remain concerned about weaknesses in public health preparedness at the state and local levels, a lack of hospital participation in training on terrorism and emergency response planning, the timely availability of medical teams and resources in an emergency, and, in particular, inadequacies in the public health infrastructure. The latter include weaknesses in the training of health care providers, communication among responsible parties, and capacity of laboratories and hospitals, including the ability to treat mass casualties.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time.

For further information about this testimony, please contact me at (202) 512-7118. Barbara Chapman, Robert Copeland, Marcia Crosse, Greg Ferrante, Deborah Miller, and Roseanne Price also made key contributions to this statement.

We identified the following federal departments and agencies as having responsibilities related to the public health and medical consequences of a bioterrorist attack:

USDA – U.S. Department of Agriculture
  APHIS – Animal and Plant Health Inspection Service
  ARS – Agricultural Research Service
  FSIS – Food Safety Inspection Service
  OCPM – Office of Crisis Planning and Management
DOC – Department of Commerce
  NIST – National Institute of Standards and Technology
DOD – Department of Defense
  DARPA – Defense Advanced Research Projects Agency
  JTFCS – Joint Task Force for Civil Support
  National Guard
  U.S. Army
DOE – Department of Energy
HHS – Department of Health and Human Services
  AHRQ – Agency for Healthcare Research and Quality
  CDC – Centers for Disease Control and Prevention
  FDA – Food and Drug Administration
  NIH – National Institutes of Health
  OEP – Office of Emergency Preparedness
DOJ – Department of Justice
  FBI – Federal Bureau of Investigation
  OJP – Office of Justice Programs
DOT – Department of Transportation
  USCG – U.S. Coast Guard
Treasury – Department of the Treasury
  USSS – U.S. Secret Service
VA – Department of Veterans Affairs
EPA – Environmental Protection Agency
FEMA – Federal Emergency Management Agency

Figure 1, which is based on the framework given in the Terrorism Incident Annex of the Federal Response Plan, shows a sample of the coordination activities by these federal departments and agencies, as they existed prior to the recent creation of the Office of Homeland Security. This figure illustrates the complex relationships among the many federal departments and agencies involved. The following coordination activities are represented on the figure:

OMB Oversight of Terrorism Funding. The Office of Management and Budget established a reporting system on the budgeting and expenditure of funds to combat terrorism, with goals to reduce overlap and improve coordination as part of the annual budget cycle.

Federal Response Plan – Health and Medical Services Annex. This annex to the Federal Response Plan states that HHS is the primary agency for coordinating federal assistance to supplement state and local resources in response to public health and medical care needs in an emergency, including a bioterrorist attack.

Informal Working Group – Equipment Request Review. This group meets as necessary to review equipment requests of state and local jurisdictions to ensure that duplicative funding is not being given for the same activities.

Agreement on Tracking Diseases in Animals That Can Be Transmitted to Humans. This group is negotiating an agreement to share information and expertise on tracking diseases that can be transmitted from animals to people and could be used in a bioterrorist attack.

National Medical Response Team Caches. These caches form a stockpile of drugs for OEP’s National Medical Response Teams.

Domestic Preparedness Program. This program was formed in response to the National Defense Authorization Act of Fiscal Year 1997 (P.L. 104-201) and required DOD to enhance the capability of federal, state, and local emergency responders regarding terrorist incidents involving WMDs and high-yield explosives. As of October 1, 2000, DOD and DOJ share responsibilities under this program.

Office of National Preparedness – Consequence Management of WMD Attack. In May 2001, the President asked the Director of FEMA to establish this office to coordinate activities of the listed agencies that address consequence management resulting from the use of WMDs.

Food Safety Surveillance Systems. These systems are FoodNet and PulseNet, two surveillance systems for identifying and characterizing contaminated food.

National Disaster Medical System. This system, a partnership between federal agencies, state and local governments, and the private sector, is intended to ensure that resources are available to provide medical services following a disaster that overwhelms the local health care resources.

Collaborative Funding of Smallpox Research. These agencies conduct research on vaccines for smallpox.

National Pharmaceutical Stockpile Program. This program maintains repositories of life-saving pharmaceuticals, antidotes, and medical supplies that can be delivered to the site of a biological (or other) attack.

National Response Teams. The teams constitute a national planning, policy, and coordinating body to provide guidance before and assistance during an incident.

Interagency Group for Equipment Standards. This group develops and maintains a standardized equipment list of essential items for responding to a terrorist WMD attack. (The complete name for this group is the Interagency Board for Equipment Standardization and Interoperability.)

Force Packages Response Team. This is a grouping of military units that are designated to respond to an incident.

Cooperative Work on Rapid Detection of Biological Agents in Animals, Plants, and Food. This cooperative group is developing a system to improve on-site rapid detection of biological agents in animals, plants, and food.

Bioterrorism: Coordination and Preparedness (GAO-02-129T, Oct. 5, 2001).
Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, Sept. 28, 2001).
Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001).
Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-666T, May 1, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001).
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001).
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001).
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, Sept. 11, 2000).
Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000).
Chemical and Biological Defense: Observations on Nonmedical Chemical and Biological R&D Programs (GAO/T-NSIAD-00-130, Mar. 22, 2000).
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000).
Federal research and preparedness activities related to bioterrorism center on detecting biological agents; developing new or improved vaccines, antibiotics, and antivirals; and developing performance standards for emergency response equipment. Preparedness activities include: (1) increasing federal, state, and local response capabilities; (2) developing response teams; (3) increasing the availability of medical treatments; (4) participating in and sponsoring exercises; (5) aiding victims; and (6) providing support at special events, such as presidential inaugurations and Olympic games.

To coordinate their activities, federal agencies are developing interagency response plans, participating in various interagency work groups, and entering into formal agreements with each other to share resources and capabilities. However, GAO found that coordination of federal terrorism research, preparedness, and response programs is fragmented, raising concerns about the ability of states and localities to respond to a bioterrorist attack. These concerns include poor state and local planning and the lack of hospital participation in training on terrorism and emergency response planning.

This report summarized a September 2001 report (GAO-01-915).
DOD operates one of the largest and most complex health care systems in the nation and has a dual health care mission—readiness and benefits. DOD’s health care system is referred to as the Military Health System. The readiness mission provides medical services and support to the armed forces during military operations and involves deploying medical personnel and equipment as needed to support military forces throughout the world. The benefits mission provides medical services and support to members of the armed forces, their family members, and others eligible for DOD health care, such as retired service members and their families. DOD’s health care mission is carried out through military hospitals and clinics throughout the United States and overseas, commonly referred to as military treatment facilities, as well as through civilian health care providers. Military treatment facilities comprise DOD’s direct care system for providing health care to beneficiaries.

The Assistant Secretary of Defense (Health Affairs) is responsible for ensuring the effective execution of DOD’s health care mission and exercises authority, direction, and control over medical personnel authorizations and policy, facilities, funding, and other resources within DOD. The Director of TRICARE Management Activity, as seen in figure 1, reports to Health Affairs. TRICARE Management Activity develops and maintains the facilities planning, design, and construction criteria in support of DOD’s health care mission, and serves as the focal point for all issues pertaining to the acquisition, sustainment, renewal, and modernization of the full range of facilities within the Military Health System. Figure 1 displays the organizational structure of the Military Health System.

TRICARE Management Activity is responsible for the acquisition of all military health care facilities worldwide, including the development and issuance of medical facility policy, programming, budgeting, design, and construction of all projects. Moreover, it is responsible for the development, issuance, and maintenance of health care facilities planning and technical criteria as well as the management of financial resources for all planning, design, and construction of projects.

The Navy Bureau of Medicine and Surgery, the headquarters command for Navy Medicine, oversees the delivery of health care for the Navy and Marine Corps. It exercises direct control over naval hospitals, clinics, medical centers, dental centers, and preventive medicine units within the United States and overseas, and provides professional and technical guidance for the design, construction, staffing, and equipping of medical assets. Navy Medicine West is the regional command that helps manage and plan for the Navy’s health care delivery and military treatment facilities in the Pacific region. Under Navy Medicine West’s responsibility are all Navy military treatment facilities on the West Coast and in Hawaii, Japan, and Guam.

DOD’s Unified Facilities Criteria 4-510-01 (Unified Facilities Criteria) provide mandatory design and construction criteria for facilities in DOD’s medical military construction program. This subpart of the Unified Facilities Criteria is primarily focused on how military treatment facilities are to be designed and constructed, but also requires that the military services submit planning documentation as part of the pre-design considerations that TRICARE Management Activity uses to issue a design authorization and approve a proposed project for funding.
This planning documentation includes a DD Form 1391 (Military Construction Project Data), project narrative, program for design, equipment planning, project books, and an economic analysis. Design authorizations are issued to a design agent, which is designated by the Secretary of Defense as being responsible for the design and construction of proposed facilities. In the case of Guam, Naval Facilities Engineering Command is the designated design agent responsible for military construction.

In addition to the above policy guidance and criteria for the planning of military treatment facilities, Office of Management and Budget guidance requires federal agencies to develop and implement internal controls to ensure, among other things, that programs achieve their desired objectives and that programs operate and resources are used consistent with agency missions, in compliance with laws and regulations, and with minimal potential for waste, fraud, and mismanagement. Internal control, in its broadest sense, includes the plan of organization, policies, methods, and procedures adopted by program management to meet its goals. In addition to the standards for internal control identified by the Office of Management and Budget, GAO has also identified standards for internal controls, which include (among other things) control activities. Control activities include the policies, procedures, techniques, and mechanisms that enforce management’s directives; they can encompass a wide range of activities, such as approvals, authorizations, and verifications, as well as documentation, which should be readily available for examination.

Medical Facilities on Guam

The current Naval Hospital Guam and its associated military treatment facilities, including a branch medical clinic and branch dental clinic on Naval Base Guam, help support the operational readiness of the United States and allied forces operating in the Pacific region. These facilities have been in operation for over 50 years. The naval hospital provides services for active duty servicemembers and their family members stationed on Guam. Transient active duty servicemembers, military retirees (transient and living on Guam) and their family members, National Guard members, and officials from other federal agencies also receive health care from the naval hospital.

In addition to the Navy-operated military treatment facilities on Guam, the Air Force’s 36th Medical Group located at Andersen Air Force Base operates a medical and dental clinic, renovated in 2006, that delivers primary medical and dental care to DOD beneficiaries in and around Andersen Air Force Base, which is on the northern part of the island. A civilian hospital—Guam Memorial Hospital—as well as community health clinics are also on Guam. According to Navy planning documents, Military Health System beneficiaries typically do not use the services of Guam Memorial Hospital or the community health clinics and will only be referred there by Naval Hospital Guam in the case of an emergency that occurs closer to Guam Memorial Hospital.

The United States and Japan held a series of sustained security consultations, referred to as the Defense Policy Review Initiative, which were aimed at reducing the burden of the U.S. military presence on Japanese communities and strengthening the U.S.-Japan security alliance. By 2006, these consultations established the framework for the future U.S. force structure in Japan, including the relocation of military units from Okinawa, Japan, to Guam.
An estimated 8,600 Marines and their estimated 9,000 dependents are expected to relocate from Okinawa, Japan, to Guam. In addition, the United States plans to expand the capabilities and presence of the Army, Navy, and Air Force on Guam over the next several years. As such, the military population on Guam is expected to grow by over 160 percent, from 15,000 to over 39,000 by 2020.

The Deputy Secretary of Defense established the Joint Guam Program Office to facilitate, manage, and execute requirements associated with the relocation of U.S. Marine Corps assets from Okinawa, Japan, to Guam. The Joint Guam Program Office is also expected to lead the coordinated planning efforts and synchronize the funding requirements between DOD components, and to work closely with other stakeholders, such as the government of Japan. The Joint Guam Program Office receives planning assistance from the Naval Facilities Engineering Command in conducting analyses and developing an acquisition strategy for infrastructure needed to support DOD operational requirements. The Naval Facilities Engineering Command executes contracts for construction and infrastructure projects, including those funded by contributions from the government of Japan.

To accommodate the additional inpatient and outpatient requirements resulting from the expected increase in the military population on Guam, the Navy plans to expand inpatient and outpatient care in the replacement hospital and move primary outpatient and dental care to the two new branch health clinics. According to Navy officials, the development of the requirements for the clinics allowed the Navy to retain the size and footprint of the initially planned version of the replacement hospital, which was already programmed and approved by TRICARE Management Activity in 2004, prior to the announcement of the Defense Policy Review Initiative. The hospital will be funded through DOD military construction appropriations, while the two outpatient primary care clinics are to be funded through a special Department of the Treasury account established to hold funds contributed by the government of Japan as part of the agreement to realign military units from Japan to Guam.

Although the Navy’s proposed military treatment facility solution on Guam expands on the health care services currently offered on Guam, the Navy determined that patients requiring care not offered on Guam will continue to be medically evacuated to other military treatment facilities, such as Naval Hospital Okinawa, Tripler Army Medical Center in Hawaii, or Naval Medical Center San Diego.

The Navy determined that to accommodate the additional inpatient and outpatient requirements for active duty and family member populations on Guam following the military buildup, it would need to construct three military treatment facilities consisting of a replacement hospital and two branch health clinics. However, prior to the announced realignment of troops from Okinawa, Japan, to Guam, the Navy had already determined that the current hospital was outdated, that it did not meet modern facility standards such as efficient space configurations, and that the building’s structure did not meet modern seismic codes. Additionally, Navy planning documents show that from a functional perspective, the current hospital is poorly designed to provide efficient health care delivery. Navy officials said that preliminary planning efforts for replacing Naval Hospital Guam started in the 1990s, but it was not until early 2004 that planning began in earnest.
By 2005, the Navy was in the process of designing a replacement hospital. The Navy’s original plans for a replacement hospital were predicated on a beneficiary population of around 19,700 and were to include all outpatient primary care, including dental care, within the hospital, while closing the current branch medical clinic and branch dental clinic on Naval Base Guam. When the military realignment was subsequently announced in 2006, Navy officials said all design plans were put on hold in accordance with direction from TRICARE Management Activity, and the Navy reassessed its health care requirements for Guam.

As noted above, the estimated 8,600 Marines and their estimated 9,000 dependents expected to relocate from Okinawa, Japan, to Guam, together with the United States’ additional plans to expand the capabilities and presence of the Army, Navy, and Air Force on Guam over the next several years, mean that the military population on Guam is expected to grow by over 160 percent, from 15,000 to over 39,000 by 2020. When other types of Military Health System beneficiaries, such as DOD civilians and military retirees, are taken into account, the eligible beneficiary population for the naval hospital is expected to grow to about 46,000 people.

The replacement hospital will provide expanded inpatient and outpatient care, while the new branch health clinics are to provide primary outpatient and dental care. Figure 2 provides a timeline leading up to the Navy’s recommended military treatment facility solution to meet the requirements of the expected increase in military population. The replacement hospital will be located on the site of the current hospital, while a new branch health clinic will replace the medical and dental clinics currently in operation on Naval Base Guam, and a new branch health clinic will be located in North Finegayan. In addition to these facilities, the Air Force 36th Medical Group operates a medical and dental clinic on Andersen Air Force Base, and Guam Memorial Hospital is the island’s only civilian hospital. Figure 3 shows the location of medical treatment facilities on Guam following the military buildup.

The Navy determined that the branch health clinic in North Finegayan was needed to serve the Marine Corps beneficiaries that are to be housed at or near the proposed Marine Corps base. Moreover, the Navy determined that the need for expanded inpatient and outpatient capabilities at the replacement naval hospital displaced the primary care capacity to such a degree that it necessitated a new branch health clinic on Naval Base Guam. The Navy expects to begin construction on the Naval Base Guam branch health clinic before the North Finegayan branch health clinic. According to Navy officials, the development of the clinics also allowed the Navy to maintain the size and footprint of the replacement hospital, the initial version of which had already been programmed and approved by TRICARE Management Activity.

The Navy requested that, since the proposed branch health clinics were required as a result of the military buildup, the government of Japan fund the design and construction of the two facilities. The government of Japan agreed to fund the design and construction of the two clinics as part of its anticipated $6.09 billion contribution to help develop facilities and infrastructure for the Marine Corps’ relocation to Guam.
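As an arithmetic check of the population figures cited above (our calculation from the reported endpoints, not a separate estimate), the projected growth rate is

\[
\frac{39{,}000 - 15{,}000}{15{,}000} = \frac{24{,}000}{15{,}000} = 1.6,
\]

that is, an increase of roughly 160 percent, consistent with the reported "over 160 percent" figure. The difference between the roughly 46,000 eligible beneficiaries and the 39,000 military population would then represent about 7,000 other Military Health System beneficiaries, such as DOD civilians and military retirees.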
The DD Forms 1391 (Military Construction Project Data) prepared for the branch health clinics show that the total cost to construct the two clinics is currently estimated at about $226 million. The planned hospital that will replace the current hospital is primarily focused on providing inpatient and specialty care, while the branch health clinics are to provide primary outpatient and dental care. Navy officials said that the footprint of the replacement hospital was based on the Navy's original 2004 design because the Navy did not want to change the overall size of the hospital; significant changes would likely have delayed construction. As such, the amount of primary care available in the hospital is expected to fall below that needed for the expanded beneficiary population. However, the majority of such care is now intended to be provided by the proposed branch health clinic on Naval Base Guam and the proposed branch health clinic in North Finegayan. The replacement hospital's configuration includes the following: Increased number of beds: Navy planning documents show that the number of inpatient beds will increase to 42 to accommodate the expected increase in the service member and family populations. The Navy's planning documents for the initial proposal of the replacement hospital show that, prior to the announcement of the military buildup, the replacement hospital was to house 30 inpatient beds. The Navy's updated planning documents for the replacement hospital, developed in response to the buildup, show that the Navy used its initial plans for 30 beds as a minimum starting point and then developed requirements for an additional 10 beds. Navy planning documents also showed that two additional intensive care beds were added to the proposed hospital subsequent to an accident aboard the U.S.S. Frank P. Cable in December 2006, which, according to the Navy, greatly taxed the capabilities of the current hospital. This resulted in a final requirement of 42 inpatient beds in the proposed replacement hospital. Expanded services: Navy officials explained that the replacement hospital will further expand its current capabilities by providing more robust orthopedic, mental health, and obstetrics and gynecology services. In addition, the replacement hospital will add an onsite magnetic resonance imaging capability. Table 1 below shows key changes, by square footage, for the services that are to be provided at the replacement naval hospital. The Navy believes that this configuration of space and services will best meet the health care needs of the increased military population following the buildup. Navy planning documents show that the size of the replacement hospital will actually decrease from the current hospital's 306,000 square feet to 282,000 square feet. According to the Navy, the compact footprint of the replacement hospital will improve proximity between related departments and increase staff efficiency. By organizing high-traffic clinic and ancillary areas closer to main entrances, the design reduces patient travel distances and facility congestion, thereby enhancing patient care and permitting the smaller size without compromising services. In addition, clinics and inpatient activities with lower patient volume will be located on the upper floors. Updated seismic design: Navy planning documents show that unresolved life safety issues remain in the current facility related to seismic design deficiencies.
Navy plans show that the replacement hospital will comply with all applicable seismic standards and codes. Since Guam is in a region where typhoons occur, the replacement facility will also comply with all standards and codes relating to the impact of heavy winds. Flexibilities: The replacement hospital will include "flexible rooms," which allow for the conversion of medical/surgical rooms into intensive care rooms and vice versa. The replacement hospital will also have the flexibility to convert doctors' offices into exam rooms and exam rooms into offices. Thus, in times of contingency or surge operations, the replacement hospital will have the flexibility to temporarily expand to up to 60 beds. The proposed branch health clinics are to provide a variety of outpatient services, including the majority of primary care for the Navy's proposed military treatment facility solution on Guam. As shown in table 1 above, the majority of primary care has been removed from the replacement hospital: primary care space decreased by 10,491 square feet, from 12,170 square feet to 1,679 square feet, or by 86 percent. The 48,599-square-foot Naval Base Guam branch health clinic is expected to offer several outpatient services, including primary care and family practice, a pharmacy, a dental clinic, mental health services, a physical therapy clinic, preventive medicine, and acute care. The 64,078-square-foot North Finegayan branch health clinic will be larger than the Naval Base Guam branch clinic but will offer similar services, including primary care and family practice, a pharmacy, a dental clinic, mental health services, a physical therapy clinic, and preventive medicine. The Navy has completed the design of the Naval Base Guam branch health clinic and expects to begin construction on it before the North Finegayan branch health clinic, although no construction contracts have been awarded at this time for either of the two branch health clinics. The Navy's proposed military treatment facility solution on Guam expands on the health care services currently offered on Guam, but in instances when patients require care not offered on Guam, the Navy determined that it will continue to medically evacuate them to other military treatment facilities, such as Naval Hospital Okinawa, Tripler Army Medical Center in Hawaii, or Naval Medical Center San Diego. The Navy's documentation used to support its recommended facility solution does not clearly demonstrate to stakeholders, including TRICARE Management Activity, how the Navy determined the size and configuration of the proposed branch health clinics. To account for the population increase and support the conclusions regarding the size and configuration of the recommended facility solution, the Navy developed its health care requirements analysis report for Guam. Navy officials indicated that the health care requirements analysis clearly justifies the need for a replacement hospital and two outpatient clinics.
However, although the Navy's health care requirements analysis accounts for the expected increase in health care workload by multiplying the health care utilization rates observed in a base year for different types of beneficiaries and health care services by the anticipated beneficiary population, it does not show how this workload translates into the size and configuration of the Navy's proposed facilities. Specifically, the analysis omits documentation on the methods and criteria for how the Navy reached staffing decisions for its proposed facilities and does not show the workload expected to be performed at each facility. Since TRICARE Management Activity is responsible for the construction of all military health care facilities worldwide, as provided for in the Unified Facilities Criteria, it needs reasonable assurance that the Navy's plans for its military treatment facility solution on Guam, including the proposed branch health clinics, meet Military Health System goals of having appropriately sized and configured facilities to meet the health care needs of military beneficiaries in a cost-effective manner. Detailed and appropriate documentation is a key component of internal controls. In addition, documentation must be clear and readily available for examination for stakeholders to make effective decisions about programs or operations. Further, without clear documentation of key analyses, stakeholders lack reasonable assurance that the Navy's proposed military treatment facility solution on Guam will provide sufficient health care capacity to meet the expected increase in military population, and they cannot determine whether the Navy is making the most cost-effective decisions. Generally, health care workload and staffing requirements are key considerations when determining the size and configuration of military treatment facilities, according to the Navy's health care requirements analysis report. DOD space planning guidance shows that, among other things, workload and staffing are used to size and configure facilities to help ensure appropriate facility space. DOD Instruction 6015.17 describes the procedures to be used by the military departments to prepare project proposals for military treatment facilities. This instruction also identifies the types of documentation needed to support a project proposal. Navy officials provided the results of their health care requirements analysis as part of their response to DOD Instruction 6015.17 when determining the size and configuration of their military treatment facilities on Guam. However, the Navy did not clearly document all the health care and staffing analyses that would support its conclusions for the size and configuration of its proposed military treatment facility solution. We were told that many of the Navy's decisions regarding the size and configuration of its proposed military treatment facilities on Guam are justified and supported by its health care requirements analysis. The purpose of the health care requirements analysis was, in part, to develop the size and configuration of the Navy's proposed military treatment facilities. The health care requirements analysis also provides an overview of the types of health care services currently offered on Guam and estimates the overall health care workload for the services the Navy intends to offer on Guam following the realignment.
The workload is categorized by the type of health care service and includes outpatient visits, inpatient beddays, and ancillary workload (i.e., pharmacy prescriptions and laboratory and radiology procedures) required by the anticipated beneficiary population. In addition, workload estimates are organized into different beneficiary categories, including active duty members by military service, expected family members by military service, and retirees, among others. The health care requirements analysis uses the overall estimated workload to recommend the types of health care services to be provided at the replacement hospital, the number of staff needed to provide these services, and the overall bed requirements for the hospital. The Navy's health care requirements analysis report omits details that would help better document and support how the Navy determined the size and configuration of its recommended facility solution on Guam. Moreover, Navy officials could not adequately explain the reasons for the omissions or how the analysis that was documented led logically to the conclusions reached for the Guam military health facility solution. For example, the Navy's analysis did not contain a breakdown of the forecasted health care workload for each proposed facility that would clearly show the portions of the DOD beneficiary population expected to receive primary care at each clinic, or the number of outpatient visits and the ancillary workload expected to be provided at each clinic. Therefore, the health care requirements analysis does not show how the Navy determined the size of the proposed outpatient clinics, given that workload is a key component of facility space requirements. In addition, the Navy's health care requirements analysis did not include the Navy's reasoning for continuing to meet demands for certain specialty services not provided at the naval hospital, such as neonatal intensive care, by flying patients to other military treatment facilities in the region, such as those in Okinawa, Japan; Honolulu, Hawaii; or San Diego, California. Forecasting the expected health care workload for just those specific health care services expected to be offered on Guam may suffice for the purposes of sizing military treatment facilities; however, it does not show the total health care requirement for DOD beneficiaries on Guam, demonstrate how the total health care requirement will be met, or provide a business case justification for the mix of services to be offered at the proposed military treatment facilities on Guam as opposed to those offered off island. Navy officials told us that in deciding what health care services to provide on Guam, they held discussions with pertinent medical officials and considered factors such as the size of the beneficiary population, the expected workload, and the availability of staff. Nonetheless, the Navy's documentation provided to support these decisions shows that the Navy assumed no new inpatient services would be provided on Guam and that only neurology would be added to outpatient care. However, this documentation does not readily allow for examination by TRICARE Management Activity and other external stakeholders, a key aspect of internal controls, because it does not clearly show why certain health care services were assumed to be included or excluded.
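To make the structure of the utilization-rate calculation described above concrete, the sketch below illustrates how a base-year rate multiplied by a projected population yields a forecast workload by service. This is a hypothetical illustration, not the Navy's analysis: the beneficiary categories mirror those named above, and the populations sum to the 46,000-person eligible total cited earlier, but the category split, services, and rates are assumed placeholders.

```python
# Illustrative sketch of a utilization-rate workload forecast.
# All rates and the population split are hypothetical placeholders;
# only the 46,000-person total matches the figure cited in the report.

# Base-year utilization rates: annual units of service per beneficiary,
# broken out by beneficiary category and type of service.
utilization_rates = {
    ("active_duty",   "outpatient_visits"): 4.2,
    ("active_duty",   "inpatient_beddays"): 0.3,
    ("family_member", "outpatient_visits"): 3.5,
    ("family_member", "inpatient_beddays"): 0.4,
    ("retiree",       "outpatient_visits"): 5.1,
    ("retiree",       "inpatient_beddays"): 0.7,
}

# Projected post-buildup beneficiary population by category (hypothetical
# split; the three categories sum to the report's 46,000 total).
projected_population = {
    "active_duty": 14_000,
    "family_member": 21_000,
    "retiree": 11_000,
}

# Forecast workload per service = sum over categories of rate x population.
forecast = {}
for (category, service), rate in utilization_rates.items():
    forecast[service] = forecast.get(service, 0) + rate * projected_population[category]

for service, units in sorted(forecast.items()):
    print(f"{service}: {units:,.0f} units per year")
```

As the report notes, sizing each facility would require a further step that the Navy's documentation omits: allocating this overall forecast workload among the replacement hospital and the two proposed clinics.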
The Navy reported the staffing requirements for its recommended facility solution in its health care requirements analysis, but the methods and criteria for how the Navy reached its decisions are not clearly documented. DOD policy requires that manpower requirements generally (including staffing for military treatment facilities) be established at the minimum level necessary to accomplish mission and performance objectives. In the health care requirements analysis report, the Navy noted that it determined the additional staffing needed to meet health care requirements associated with the military buildup on Guam through a series of discussions with Navy headquarters, regional, and Guam medical commands. However, when we asked for additional information on how staffing requirements were determined for the proposed facilities, the Navy could not provide documentation or explain what was discussed at these meetings or the decision process leading up to its staffing requirement decisions, other than stating that the limited number of available medical specialists was a key factor that influenced staffing decisions for the proposed military treatment facilities on Guam. During the course of our review, we asked Navy officials to explain the assumptions used in the health care requirements analysis as well as how the analysis was used to determine the size of the replacement hospital and clinics. In some instances, the officials could not provide an explanation and said that they will request that future health care requirements analyses clearly illustrate all the steps and calculations used to determine facility requirements. In other instances, the Navy's explanations and additional supporting documentation did not match the results of the health care requirements analysis. For example, DOD space planning guidance notes that the annual number of births of the projected beneficiary population is used, among other things, to help determine the size and configuration of labor and delivery units. However, the Navy's health care requirements analysis used a different metric (the number of obstetrics inpatient visits). Existing documentation does not clearly demonstrate how the Navy determined the projected number of births or how the number of obstetrics visits in the health care requirements analysis report would translate into the size of the replacement hospital's labor and delivery units. Navy officials told us that the health care requirements analysis was still up-to-date, though we found that the report does not currently reflect the design plans for the proposed clinic on Naval Base Guam. For example, the design plans for the proposed clinic on Naval Base Guam indicate a projected rate of 64,271 visits per year and state that this number was derived from the health care requirements analysis. However, the health care requirements analysis does not break down the workload per facility; therefore, it is unclear how this number is supported. In addition, the design plans show that 65 staff members will be working at the Naval Base clinic, whereas the health care requirements analysis projects a need for 25 staff members.
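The kind of workload-to-staffing calculation whose documentation we found missing can be sketched in a few lines. Only the 64,271-visit figure below comes from the clinic's design plans; the provider productivity and support-staff ratios are assumed placeholders, not DOD or Navy planning factors, and are included solely to show the form such documentation could take.

```python
# Hypothetical sketch of deriving clinic staffing from a visit workload.
# The visit total is from the Naval Base Guam clinic design plans; the
# two factors below are assumptions for illustration only.

annual_visits = 64_271        # projected visits per year (design plans)
visits_per_provider = 3_600   # assumed annual visits per full-time provider
support_per_provider = 2.5    # assumed support staff per provider

providers = annual_visits / visits_per_provider      # about 17.9 FTEs
support_staff = providers * support_per_provider     # about 44.6 FTEs
total_staff = providers + support_staff              # about 62.5 FTEs

print(f"providers: {providers:.1f} FTEs")
print(f"support staff: {support_staff:.1f} FTEs")
print(f"total staff: {total_staff:.1f} FTEs")
```

Documenting the actual factors in this chain is what would allow TRICARE Management Activity and other stakeholders to reconcile figures such as the 65-staff design plan with the 25-staff projection in the health care requirements analysis.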
Since the Navy's health care requirements analysis is not sufficiently documented, specifically with regard to health care and staffing requirements, both the Navy and TRICARE Management Activity may not be sufficiently assured that the Navy's military treatment facility solution of a replacement hospital and two branch health clinics will (1) be adequate to meet the demand of the military population on Guam and (2) result in the most cost-effective facility solution for the expected increase in military population on Guam. TRICARE Management Activity is responsible for, among other things, the acquisition of all military health care facilities worldwide, including the planning, design, and construction of all military health care projects. The Unified Facilities Criteria also provide for a process for TRICARE Management Activity to approve the design of a proposed military treatment facility project. TRICARE Management Activity issued the design authorization for the Navy's replacement hospital in May 2008. However, according to TRICARE Management Activity officials, they were not responsible for issuing the design authorization for each clinic because the design and construction of the clinics is to be funded by the government of Japan, and TRICARE Management Activity stated that it is responsible only for projects that it funds. Since funding for the design and construction of the clinics is provided by the government of Japan, these officials said that the Joint Guam Program Office would lead the acquisition team and be responsible for ensuring compliance with the Unified Facilities Criteria, including issuing the design authorizations for the clinics. For their part, officials from the Joint Guam Program Office said that projects to be constructed with government of Japan funding should follow the procedures outlined in the Unified Facilities Criteria. In addition, these officials noted that the design authorizations for the clinics were provided by Naval Facilities Engineering Command headquarters, which is the design agent for military construction on Guam. However, the Unified Facilities Criteria indicate that TRICARE Management Activity is to provide design authorizations to the design agent. Moreover, the design agent is not to pursue any level of design beyond what is authorized by TRICARE Management Activity. In the case of the clinics, the design agent, Naval Facilities Engineering Command, issued its own design authorization, thereby calling into question whether the policies and procedures of the Unified Facilities Criteria were followed. Although TRICARE Management Activity did not issue the design authorizations for the clinics, the activity's officials said they reviewed the requirements for the clinics based on the results of the Navy's health care requirements analysis. However, as stated earlier, the Navy's health care requirements analysis did not fully document key analyses, such as the forecasted workload for each of the proposed clinics and the methods and criteria for how the Navy reached its staffing decisions, raising questions about the basis for TRICARE Management Activity's review.

Conclusions

The Navy determined that to accommodate the additional inpatient and outpatient requirements of the increased military population on Guam following the military buildup, it would need to construct three military treatment facilities consisting of a replacement hospital and two branch health clinics.
However, the Navy's health care requirements analysis report does not clearly document the analyses and assumptions used by the Navy to determine its military treatment facility requirements, including forecasting health care demand and determining health care workload and staffing requirements, nor could Navy officials adequately explain their analyses or assumptions. Such documentation facilitates external stakeholder examination and can lead to reasonable assurance of the adequacy of facilities to meet mission requirements. Without such documentation, the Navy cannot fully demonstrate to TRICARE Management Activity and other stakeholders that its conclusions about the size and configuration of its military treatment facility solution result in the most cost-effective solution for meeting the health care needs of the expected increase in military population on Guam. In order to ensure that the Navy's proposed branch health clinics on Guam are properly reviewed and are consistent with Military Health System goals of having appropriately sized and configured facilities to meet the health care needs of military beneficiaries in a cost-effective manner, we are recommending that the Secretary of Defense direct the Secretary of the Navy to provide clearly documented analyses to TRICARE Management Activity as part of DOD's process for issuing design authorizations for military treatment facilities. These analyses should, at a minimum, provide details of the basis for the Navy's health care workload and staffing requirements on Guam. These documented analyses should also include the specific health care requirements to be met at each of the branch health clinics and the methods and criteria for how staffing decisions for each facility were made. In written comments on a draft of this report, the Assistant Secretary of Defense (Health Affairs) agreed with our recommendation to have the Secretary of Defense direct the Secretary of the Navy to provide additional analyses to ensure that the Navy's proposed branch health clinics on Guam are properly reviewed and are consistent with the Military Health System goals of having appropriately sized and configured facilities to meet the health care needs of military beneficiaries in a cost-effective manner. DOD noted that since the draft report was issued, the Navy Bureau of Medicine and Surgery has provided additional information to the Office of the Assistant Secretary of Defense (Health Affairs) related to the planning for the two branch health clinics. In addition, the Office of the Assistant Secretary of Defense (Health Affairs) is reviewing this information and will validate the Navy's analysis within the next 30 days to ensure the branch health clinics have been appropriately sized and located to meet beneficiary health care needs. The Assistant Secretary of Defense (Health Affairs) also noted that the insights gained from this audit will be applied to future health care planning efforts for other military treatment facilities throughout DOD. DOD's comments also included input from the Navy Bureau of Medicine and Surgery to the Office of the Assistant Secretary of Defense (Health Affairs). The Bureau countered that the replacement hospital augmented by two new clinics is a highly efficient solution and that its documentation supported that conclusion.
The Bureau also noted that the Navy concept of care for Guam is clearly documented in the health care requirements analysis report dated February 2007, which provides the foundation for the Medical Facilities Master Planning Study detailing the proposed facility solutions. As stated in our report, we believe that the Navy's documentation used to support its recommended military treatment facility solution for Guam does not clearly demonstrate how the Navy determined the size and configuration of the proposed branch health clinics. The Bureau noted that its Medical Facilities Master Planning Study draws specific planning methods and data sources from the health care requirements analysis. The Medical Facilities Master Planning Study states that the health care requirements analysis provides documentation of beneficiary health care requirements and resulting facility space needs. However, as we note in our report, the health care requirements analysis does not show how these requirements translate into the size and configuration of the Navy's proposed facilities because it omits documentation on the methods and criteria for how the Navy reached staffing decisions for its proposed facilities. Further, the Navy's documentation, including the Medical Facilities Master Planning Study, did not contain a breakdown of the forecasted health care workload for each proposed facility that would clearly show the portions of the DOD beneficiary population expected to receive primary care at each clinic, or the number of outpatient visits and the ancillary workload expected to be provided at each clinic, thus the need for our recommendation. DOD also provided technical and clarifying comments, which we incorporated as appropriate into this report. DOD's comments are reprinted in their entirety in appendix II. We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director of the Office of Management and Budget. This report is also available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to (1) describe the Navy's plans for developing a military treatment facility solution to meet the expected increases in the military population on Guam, and (2) examine the extent to which the Navy is assured that its proposed military treatment facility solution on Guam will adequately meet the requirements for the expected increase in military population. To describe the Navy's plans for its proposed military treatment facility solution for Guam following the realignment, consisting of a replacement hospital and two branch health clinics, we reviewed Navy planning documents and interviewed relevant Department of Defense (DOD) officials. These planning documents included studies and analyses prepared prior to the announced realignment of Marine Corps units from Okinawa, Japan, to Guam, which the Navy used to determine the condition of existing naval military treatment facilities and to select potential sites for the new facilities.
We also reviewed the Navy's 2007 Final Report on Health Care Requirements Analysis for Guam Navy Medical and Dental Facilities, which updated and reassessed prior Navy analyses to reflect the military population increases resulting from the proposed realignment. In addition, we obtained and reviewed the DD Form 1391 (Military Construction Project Data) for the replacement hospital and each branch health clinic. We also obtained and reviewed the Navy's final design of the replacement hospital prior to construction and compared it with the replacement hospital construction contract issued by Naval Facilities Engineering Command. Further, we reviewed DOD's Draft Guam Joint Military Master Plan and compared it with the Navy's military treatment facility requirements. To corroborate the information obtained in these Navy planning documents, we interviewed relevant officials from the Navy Bureau of Medicine and Surgery, Navy Medicine West, Naval Hospital Guam, Headquarters Marine Corps, Marine Corps Forces Pacific, Naval Facilities Engineering Command Marianas, Naval Facilities Engineering Command Medical Facilities Design Office, Andersen Air Force Base 36th Medical Group, Joint Guam Program Office, and TRICARE Management Activity. To examine the extent to which the Navy is assured that its proposed military treatment facility solution on Guam will adequately meet the requirements for the expected increase in military population, we obtained and reviewed applicable legal and departmental guidance, including DOD instructions and directives, and compared them with the Navy's documented assumptions, methods, and economic cost analyses used to develop its proposed military treatment facility requirements on Guam. We reviewed DOD Instruction 1100.4, Guidance for Manpower Management, and compared this guidance with the documentation provided to us by the Navy to support its staffing decisions for the replacement hospital and proposed branch health clinics. To determine the extent to which the Navy's conclusions regarding the size and configuration of its proposed military treatment facilities on Guam were clearly documented to allow for external stakeholder examination, we reviewed internal control standards as described in the GAO report Internal Control: Standards for Internal Control in the Federal Government. We also reviewed Office of Management and Budget guidance that defines management responsibilities for internal controls in executive branch agencies. The primary Navy document we reviewed was the Navy's 2007 Final Report on Health Care Requirements Analysis for Guam Navy Medical and Dental Facilities. The health care requirements analysis was developed to support the Navy's decisions concerning its proposed military treatment facility solution, and its purpose was to determine the projected facility characteristics required to support the health care needs of Military Health System beneficiaries on Guam following the proposed military buildup. As part of this review, we attempted to replicate and reproduce key calculations presented in the documentation to verify the planning assumptions used by the Navy and substantiate the Navy's conclusions about the size and configuration of the facilities that comprise its facility solution. We also reviewed information used in the Navy's economic analyses that was submitted to TRICARE Management Activity for approval of the replacement hospital.
We did not independently assess the data DOD used for planning purposes; however, we discussed its reliability with DOD officials and determined that the data were sufficiently reliable to meet the objectives of this review. Additionally, to corroborate the information above, we interviewed relevant DOD officials from the Navy Bureau of Medicine and Surgery, Navy Medicine West, Naval Hospital Guam, Headquarters Marine Corps, Marine Corps Forces Pacific, Naval Facilities Engineering Command Marianas, Naval Facilities Engineering Command Medical Facilities Design Office, Andersen Air Force Base 36th Medical Group, Joint Guam Program Office, and TRICARE Management Activity. We conducted this performance audit from February 2010 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

GOVERNMENT ACCOUNTABILITY OFFICE DRAFT REPORT DATED GAO-11-206 (GAO CODE 351440) "DEFENSE INFRASTRUCTURE: THE NAVY NEEDS BETTER DOCUMENTATION TO SUPPORT ITS PROPOSED MILITARY FACILITIES ON GUAM"

NAVY BUREAU OF MEDICINE COMMENTS TO THE GOVERNMENT ACCOUNTABILITY OFFICE RECOMMENDATIONS

RECOMMENDATION: In order to ensure that the Navy's proposed branch health clinics on Guam are properly reviewed and are consistent with the Military Health System goals of having appropriately sized and configured facilities to meet the health care needs of military beneficiaries in a cost-effective manner, we are recommending that the Secretary of Defense direct the Secretary of the Navy to provide clearly documented analyses to TRICARE Management Activity (TMA) as part of the Department of Defense (DoD) process for issuing design authorizations for Military Treatment Facilities. These analyses should, at a minimum, provide details of the basis for its health care workload and staffing requirements on Guam. These documented analyses should also include the specific health care requirements to be met at each of the branch health clinics, and the methods and criteria for how staffing decisions for each facility were made.

DoD RESPONSE: Navy Bureau of Medicine and Surgery Input to Office of the Assistant Secretary of Defense (Health Affairs)/TMA Portfolio Planning and Management Division (PPMD)

The Navy concept of care for Guam supports an integrated health care delivery system with primary care medical/dental clinics operating at the major DoD installations on the island. These branch clinics are conveniently located near military family housing and other quality of life services, while the Naval Hospital serves as the central island-wide hub for inpatient and specialty care, advanced diagnostic imaging, emergency medicine, and hospital services. Navy planning studies and project documentation provided to TMA PPMD clearly validated the plan to construct two new primary care medical/dental clinics properly sized and staffed to deliver required primary care and dental services to beneficiaries at Naval Station Apra Harbor and the future Marine Corps Base Finegayan. Navy planning documentation submitted to TMA PPMD adheres to the high standards for health facility planning identified by Defense Health Program guidance and instructions.
The Navy concept of care for Guam is clearly documented in the Health Care Requirements Analysis (HCRA) prepared by Altarum in February 2007, which provides the foundation for the Medical Facilities Master Planning Study detailing the proposed facility solutions. The study draws together specific planning methods and data sources from the HCRA in relation to the location and facility scope of the hospital and clinics. HCRA population forecasts drive primary care clinic requirements in relation to expected population distribution and alignment with Guam installations. The requirements are expressed in space plans developed using the DoD Space and Equipment Planning System to define clinical, ancillary, and support spaces by department to create a Program for Design (PFD), which incorporates staffing. The HCRA provider staffing reflects expected primary care and dental provider empanelment ratios in relation to projected clinic beneficiaries. The Navy coordinated with TMA PPMD officials to re-verify that the submitted studies and documentation, as approved by TMA, fully addressed TMA requirements, including the final clinic PFD and DD 1391 project forms. Defense Health Program Military Construction funding of the robust replacement hospital augmented by two new clinics is a highly efficient solution that ensures convenient patient access to care, while mitigating traffic impacts on Guam. The Government of Japan (GOJ) funding of the two clinics will accrue beneficial cost avoidance by eliminating any GOJ need to build a separate hospital.

In addition to the contact named above, Harold Reich, Assistant Director; Grace Coleman; Josh Margraf; Heather May; John Van Schaik; Kyle Stetler; and Michael Willems made key contributions to this report.

Military Personnel: Enhanced Collaboration and Process Improvements Needed for Determining Military Treatment Facility Medical Personnel Requirements. GAO-10-696. Washington, D.C.: July 29, 2010.

Defense Infrastructure: Guam Needs Timely Information from DOD to Meet Challenges in Planning and Financing Off-Base Projects and Programs to Support a Larger Military Presence. GAO-10-90R. Washington, D.C.: November 13, 2009.

Defense Infrastructure: DOD Needs to Provide Updated Labor Requirements to Help Guam Adequately Develop Its Labor Force for the Military Buildup. GAO-10-72. Washington, D.C.: October 14, 2009.

Defense Infrastructure: Planning Challenges Could Increase Risks for DOD in Providing Utility Services When Needed to Support the Military Buildup on Guam. GAO-09-653. Washington, D.C.: June 30, 2009.

Defense Infrastructure: High-Level Leadership Needed to Help Guam Address Challenges Caused by DOD-Related Growth. GAO-09-500R. Washington, D.C.: April 9, 2009.

GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs. GAO-09-3SP. Washington, D.C.: March 2, 2009.

Defense Infrastructure: Opportunity to Improve the Timeliness of Future Overseas Planning Reports and Factors Affecting the Master Planning Effort for the Military Buildup on Guam. GAO-08-1005. Washington, D.C.: September 17, 2008.

Defense Infrastructure: High-Level Leadership Needed to Help Communities Address Challenges Caused by DOD-Related Growth. GAO-08-665. Washington, D.C.: June 17, 2008.

Defense Infrastructure: Planning Efforts for the Proposed Military Buildup on Guam Are in Their Initial Stages, with Many Challenges Yet to Be Addressed. GAO-08-722T. Washington, D.C.: May 1, 2008.
Defense Health Care: DOD Needs to Address the Expected Benefits, Costs, and Risks for Its Newly Approved Medical Command Structure. GAO-08-122. Washington, D.C.: October 12, 2007.

Internal Control Standards: Internal Control Management and Evaluation Tool. GAO-01-1008G. Washington, D.C.: August 1, 2001.

Internal Control: Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1, 1999.
The Navy determined that its current hospital on Guam does not meet modern facility standards. Moreover, the military population on Guam is expected to grow from 15,000 to over 39,000 due to DOD plans to move Marine Corps units from Okinawa, Japan, to Guam and expand other on-island capabilities. The Navy plans to construct a new hospital and two outpatient clinics as part of its facility solution to replace the current hospital and accommodate additional health care requirements. This report (1) describes the Navy's plans for developing its military treatment facility solution to meet the expected increases in the military population on Guam, and (2) examines the extent to which the Navy is assured that its proposed military treatment facility solution on Guam will sufficiently meet the requirements for the expected increase in military population. To address these objectives, GAO reviewed documentation, including the Navy's plans for its military treatment facility solution, and interviewed key officials within the Military Health System. To accommodate the additional inpatient and outpatient requirements resulting from the expected increase in military population on Guam, the Navy plans to expand inpatient and outpatient care in the replacement hospital and move primary outpatient and dental care to two new branch health clinics. Primary outpatient care generally includes caring for acute and chronic illnesses, disease prevention, screening, patient education, and follow-up care after hospitalization. The replacement hospital will be located on the site of the current hospital, while one of the new branch health clinics will replace medical and dental clinics currently in operation on Naval Base Guam, and the other clinic will be located in North Finegayan on the site of a proposed Marine Corps base. According to Navy officials, the development of the requirements for the clinics allowed the Navy to retain the size and footprint of an initially planned version of the replacement hospital, which was already programmed and approved prior to the announcement of the proposed military buildup on Guam. The two outpatient primary care clinics are to be funded by the government of Japan as part of the agreement to realign Marine Corps units from Okinawa, Japan, to Guam, and DOD will fund the new hospital. The Navy's proposed military treatment facility solution on Guam expands on the health care services currently offered on Guam, but in instances when patients require care not offered on Guam, the Navy determined that it will continue to medically evacuate them to other military treatment facilities, such as Naval Hospital Okinawa, Tripler Army Medical Center in Hawaii, or Naval Medical Center San Diego. GAO found that the Navy's documentation used to support its recommended military treatment facility solution for Guam does not clearly demonstrate how the Navy determined the size and configuration of the proposed branch health clinics, nor could Navy officials adequately explain their analyses or assumptions. Navy officials indicated that the Navy's health care requirements analysis report was the basis for decisions regarding the size and configuration of the proposed military treatment facilities. The Navy's health care requirements analysis report estimates the overall health care workload for the services the Navy intends to offer on Guam following the realignment, but does not show how this workload translates into the size and configuration of the Navy's proposed facilities.
Therefore, it is difficult for stakeholders to be fully assured that the facility solution will be the most cost-effective solution to meet beneficiary health care needs following the realignment. Without clear documentation of key analyses and identification of risks, the Navy cannot fully demonstrate that it is making the most cost-effective decisions with its proposed military treatment facility solution on Guam. GAO recommends that the Navy clearly document the basis for health care workload and staffing on Guam. In commenting, DOD generally concurred and said that more information on the branch health clinics' planning has been developed by the Navy and is under review.
The jointly funded federal-state Medicaid program is the primary source of financing for long-term care services. About one-third of the total $228 billion in Medicaid spending in fiscal year 2001 was for long-term care in both institutional and community-based settings. States administer this program within broad federal rules and according to a state plan approved by CMS, the federal agency that oversees and administers Medicaid. Some services, such as nursing home care and home health care, are mandatory services that must be covered in any state that participates in Medicaid. Other services, such as personal care, are optional, which a state may choose to include in its state Medicaid plan but which then must be offered to all individuals statewide who meet its Medicaid eligibility criteria. States may also apply to CMS for a section 1915(c) waiver to provide home and community-based services as an alternative to institutional care in a hospital, nursing home, or intermediate care facility for the mentally retarded (ICF/MR). If approved, HCBS waivers allow states to limit the availability of services geographically, to target services to specific populations or conditions, or to limit the number of persons served, actions not generally allowed for state plan services. States often operate multiple waivers serving different population groups, such as the elderly, persons with mental retardation or developmental disabilities, persons with physical disabilities, and children with special care needs. States determine the types of long-term care services they wish to offer under an HCBS waiver. Waivers may offer a variety of skilled services to only a few individuals with a particular condition, such as persons with traumatic brain injury, or they may offer only a few unskilled services to a large number of people, such as the aged or disabled. The wide variety of services that may be available under waivers includes home modification, such as installing a wheelchair ramp, transportation, chore services, respite care, nursing services, personal care services, and caregiver training for family members. CMS’s waiver application form for states includes a list of home and community-based services with suggested definitions. States are free to include as many or as few of these as they wish, to include additional services, or to include different definitions of services from those supplied with the form. See appendix II for a list of services provided through the HCBS waivers serving the elderly and CMS’s suggested definitions of these services. To be eligible for waiver services, an individual must meet the state’s criteria for needing the level of care provided in an institution, such as a nursing home, and be able to receive care in the community at a cost generally not exceeding the cost of institutional care. States are responsible for determining the specific financial and functional eligibility criteria used, conducting the necessary screening and assessment, and arranging for services to be provided. Factors that states use in assessing functional eligibility for nursing home care and for waiver services include the individuals’ medical condition and their degree of physical or mental impairment. 
Other factors that states generally consider, and which may affect the states' ability to provide care in the community at a cost not exceeding that of institutional care or to adequately protect beneficiaries' health and welfare, include the mix of services needed by the individual, the availability of needed services, the cost of services, the need for home modification, and the availability of family members or other caregivers. In order to receive federal funds for waiver services, a state must submit an application to the Secretary of Health and Human Services (HHS) that identifies the target population, specifies the number of persons that will be served, and lists the services to be included. In addition, states are required to provide certain assurances that necessary safeguards have been taken to assure financial accountability and to protect the health and welfare of beneficiaries under the waiver. Federal regulations specify that the state's safeguards for the health and welfare of beneficiaries must include (1) adequate standards for all providers of waiver services and (2) assurance that any state licensure or certification requirements for providers of waiver services are met. CMS requires that a state's waiver application include documentation regarding the standards applicable to each service provider. If the only requirement for a particular provider is licensure or certification, the state must provide a citation to the applicable state statute or regulation. If other requirements apply, the state must specify the applicable standards that providers must meet and explain how the provider standards will ensure beneficiaries' welfare. Finally, states must annually report on, among other things, how they implement, monitor, and enforce their health and welfare standards and the waiver's impact on the health and welfare of beneficiaries. Initial waiver applications and amendments to initial waivers are reviewed and approved by CMS headquarters. CMS's 10 regional offices have primary responsibility for reviewing and approving applications to renew waivers and amendments to renewed waivers. If CMS determines that a waiver application meets program requirements, including sufficient documentation to indicate that necessary safeguards are in place to protect the health and welfare of waiver beneficiaries, it will approve an initial waiver for a 3-year period. Subsequently, waivers may be extended for additional 5-year periods. Section 1915(c)(3) of the Social Security Act provides that, upon request of a state, HCBS waivers may be extended, unless the Secretary of HHS determines that the assurances provided during the preceding term have not been met. Among the assurances that the state makes are that necessary safeguards have been taken to protect the health and welfare of waiver participants and that the state will submit annual reports on the impact of the waiver on the type and amount of medical assistance provided under the state Medicaid plan and on the health and welfare of recipients. Regulations implementing section 1915(c) provide that an extension of a waiver will be granted unless (1) CMS's review of the prior waiver period shows that the assurances the state made were not met and (2) the state fails to provide adequate documentation and assurances to justify an extension. In its explanation of this regulation, the Health Care Financing Administration (HCFA), CMS's predecessor agency, indicated that a review of the prior period is an indispensable part of the renewal process.
Reviews of waiver programs for which a renewal has been requested are, therefore, expected to occur at some point during the initial 3-year period, and at least once during each renewal cycle. CMS guidance on the reviews calls for on-site visits that include an examination of beneficiary and provider records as well as interviews with state officials. CMS officials told us that if a state's efforts to protect the health and welfare of waiver beneficiaries are determined to be inadequate, the agency can either bar the state from enrolling any new waiver beneficiaries until corrective actions are taken or terminate the waiver. According to a recent CMS-sponsored review, oversight of waivers is often decentralized and fragmented among a variety of agencies and levels of government, and rarely does a single entity have accountability for the overall quality of care provided to waiver beneficiaries. Some waiver service providers are regulated by state licensing agencies, some are certified by private accreditation organizations, and others operate under the terms of a contract or other agreement with a state agency. While the state Medicaid agency is ultimately accountable to the federal government for compliance with the requirements of the waivers, it may delegate administration of the waivers to state units on aging, mental health departments, or other departments or agencies with jurisdiction over a specific population or service. About one-third of waivers for the elderly are administered by an agency or department other than the Medicaid agency, most often the state unit on aging. These agencies may then contract with local networks, agencies, or providers to provide or arrange for beneficiary services. Medicaid-covered home and community-based services have become a growing component of state long-term care systems, with most of the growth accounted for by substantial increases in the number of HCBS waivers and the beneficiaries served through waivers. In a few states, these waivers are beginning to replace nursing homes as the dominant means of providing long-term care to the elderly under Medicaid. Over the past 10 years, total Medicaid long-term care spending has more than doubled, from $33.8 billion in fiscal year 1991 to $75.3 billion in fiscal year 2001. However, the share of spending for institutional care declined from 86 to 71 percent, while the share spent for home and community-based care grew from 14 to 29 percent. Most of the growth in home and community-based care spending under Medicaid can be accounted for by HCBS waivers. Total Medicaid home and community-based care spending grew from $4.8 billion in fiscal year 1991 to $22.2 billion in fiscal year 2001, while spending for waiver services grew from $1.6 billion in fiscal year 1991 to $14.4 billion in fiscal year 2001. As shown in figure 1, waiver spending grew from 5 percent of all Medicaid long-term care spending in fiscal year 1991 to 19 percent in fiscal year 2001. In all but two states (California and New York) and the District of Columbia, over one-half of Medicaid home and community-based services spending in fiscal year 2001 was through waivers, with a much smaller portion going to nonwaiver mandatory home health care or state plan optional personal care services. See appendix III for a summary of Medicaid long-term care expenditures by type and state. Both the number and size of HCBS waivers have grown considerably over the past 20 years. Every state except Arizona operates at least one such waiver for the elderly.
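The spending shares cited in the paragraphs above follow directly from the reported dollar totals. The short check below, using only figures stated in this report, reproduces each percentage:

```python
# Verify the Medicaid long-term care spending shares cited above.
# All dollar figures are in billions and come from this report.
total_ltc_1991, total_ltc_2001 = 33.8, 75.3    # all long-term care spending
hcbs_1991, hcbs_2001 = 4.8, 22.2               # all home and community-based care
waiver_1991, waiver_2001 = 1.6, 14.4           # HCBS waiver services only

print(f"institutional share, 1991: {(total_ltc_1991 - hcbs_1991) / total_ltc_1991:.0%}")  # 86%
print(f"institutional share, 2001: {(total_ltc_2001 - hcbs_2001) / total_ltc_2001:.0%}")  # 71%
print(f"HCBS share, 1991: {hcbs_1991 / total_ltc_1991:.0%}")      # 14%
print(f"HCBS share, 2001: {hcbs_2001 / total_ltc_2001:.0%}")      # 29%
print(f"waiver share, 1991: {waiver_1991 / total_ltc_1991:.0%}")  # 5%
print(f"waiver share, 2001: {waiver_2001 / total_ltc_2001:.0%}")  # 19%
```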
In 1982, the first year of the waiver program, 6 states operated HCBS waivers. By 1992, 48 states operated a total of 155 HCBS waivers. As of June 2002, 49 states and the District of Columbia operated a total of 263 HCBS waivers, with 77 serving the elderly. The average waiver for the elderly served 3,305 Medicaid beneficiaries in 1992 and 5,892 beneficiaries in 1999. In 1999, 15 states served more than 10,000 persons in their waivers for the elderly, an increase from only 4 states in 1992. The total number of HCBS waiver beneficiaries, elderly and nonelderly, nationwide nearly tripled from 235,580 in 1992 to 688,152 in 1999, the most recent year for which data were available. The number of beneficiaries served in waivers for the elderly more than doubled from 155,349 in 1992 to 377,083 in 1999. Over this same period, the number of Medicaid beneficiaries who used some nursing home care during the year grew by only 2.5 percent, from 1.57 million to 1.61 million beneficiaries. By 1999, waivers for the elderly were serving 19 percent of all Medicaid beneficiaries served either in a nursing home or through an HCBS waiver for the elderly (377,083 of roughly 1.99 million such beneficiaries), an increase from 9 percent in 1992. In two states, Oregon and Washington, more elderly and disabled Medicaid beneficiaries were served in HCBS waivers in 1999 than were served in nursing homes. Appendix IV includes the number of Medicaid beneficiaries served by HCBS waivers for the elderly and in nursing homes in each state. In 1999, the average per beneficiary expenditure in HCBS waivers serving the elderly was $5,567, an increase from $3,622 in 1992. However, the average per beneficiary expenditure for such waivers varied widely across states, reflecting differences in the type, number, and amount of services provided under waivers in different states. As shown in table 1, among those states with waivers serving the elderly in 1999, per beneficiary expenditures ranged from an average of $15,065 in Hawaii to $1,208 in New York. In Hawaii, one such waiver that provided an average of 85 hours of personal assistance services per month to 91 percent of its beneficiaries had an average cost of $10,893 per beneficiary. A second Hawaii waiver that provided adult foster care, residential care, or assisted living for waiver beneficiaries had an average cost of $16,958 per beneficiary. In contrast, New York's waiver for the elderly did not include personal care or residential services; the primary benefits included social work services, personal emergency response systems, and home-delivered meals. Appendix V provides summary information on states' HCBS waivers for the elderly, including per beneficiary expenditures. No comprehensive nationwide data are available on states' quality assurance systems for HCBS waivers, including those serving the elderly, or on the quality of care these waivers provide. In the absence of detailed federal requirements for HCBS quality assurance systems, states' waiver applications and annual reports often contained little or no information on the mechanisms used to ensure quality, raising a question as to whether CMS had adequate information to approve or renew some waivers. More than half of the waivers serving the elderly for which we were able to obtain a CMS waiver oversight report, an annual state waiver report, or a state audit report identified oversight weaknesses and quality-of-care problems.
Frequently cited quality-of-care problems included (1) failure to provide authorized or necessary services, (2) inadequate assessment or documentation of beneficiaries' care needs in the plan of care, and (3) inadequate case management. We were unable to analyze over one-third of waivers serving the elderly because they lacked a recent regional office review, the annual state waiver report lacked the relevant information, or they were too new to have annual state reports. Although the state waiver applications and annual waiver reports we reviewed for waivers serving the elderly identified more than a dozen quality assurance approaches, many contained little or no information about how states ensure quality. For example, 11 applications for the 15 largest waivers serving the elderly identified three or fewer quality assurance mechanisms, and none of these 11 waivers mentioned important approaches, including complaint systems or sanctions. Eighteen of 52 state annual waiver reports that we reviewed contained no information on the mechanisms used to help ensure quality. Moreover, when waiver applications and annual waiver reports did contain some information, the information was often incomplete. Our work in South Carolina, Texas, and Washington identified additional quality assurance mechanisms that were not listed in those states' waiver applications or annual reports, suggesting that such documents may understate the nature and extent of states' oversight approaches. As a result, CMS's understanding of how these states ensure quality in the waivers may be incomplete. Information provided to CMS in state waiver applications and annual reports identified a variety of mechanisms used to protect the health and welfare of beneficiaries in waivers serving the elderly. Table 2 describes 14 quality assurance approaches that states reported using in HCBS waivers for the elderly. Some of these approaches focus on the waiver beneficiary, such as case management or beneficiary satisfaction surveys. Other approaches focus on providers, including licensure and inspections, corrective action plans, sanctions, and program manuals. States may require that certain providers be licensed or certified or meet other requirements contained in state laws or regulations. Such providers are generally subject to periodic inspections that may include a review of beneficiary records to determine whether the records meet program standards. A third set of quality assurance approaches focuses on waiver program operations, including internal or external evaluations of the waiver program, supervisory reviews of waiver beneficiary assessments and plans of care, and audits or reviews of case management agencies. Because CMS has not provided detailed guidance to states on federal requirements for HCBS quality assurance systems, the waiver applications and annual reports submitted by states to CMS for waivers serving the elderly often contained little or no information on state mechanisms for ensuring quality, raising a question as to whether CMS had adequate information to approve or renew some waivers. Waiver applications. Our review of the most current waiver applications for the 15 largest waivers serving the elderly found that many states provided CMS limited information about how they plan to protect the health and welfare of beneficiaries. Eleven of the 15 states cited three or fewer quality assurance mechanisms.
For example, New York’s application only contained information about the state licensure and certification requirements for its waiver services. None of these 11 applications included well-recognized quality assurance tools such as complaint systems, corrective action plans, sanctions, or beneficiary satisfaction surveys. The remaining 4 states each identified six to eight quality assurance approaches, including at least one of these four important tools. As shown in table 3, the two mechanisms most frequently cited by states were (1) licensure for some HCBS waiver providers, such as home health agencies and residential care providers, and (2) case management. Annual waiver reports. Compared to waiver applications, annual state waiver reports identified more quality assurance mechanisms for waivers serving the elderly. The quality assurance mechanisms states’ annual reports cited most frequently included (1) audits of case management agencies, (2) reviews of provider or direct-care staff, (3) licensure and certification of providers, (4) beneficiary satisfaction surveys or interviews, (5) case management, and (6) training and technical assistance. As shown in table 3, these six mechanisms were mentioned by at least half of the 40 states that provided such information. However, as was the case with most of the 15 waiver applications we reviewed, complaint systems, corrective action plans, and sanctions were identified less frequently. For example, only 13 of the 40 states identified complaint systems for waivers serving elderly beneficiaries as a monitoring tool in their annual waiver reports. Responding to beneficiary complaints is a key element in protecting vulnerable nursing home residents and home health beneficiaries. Moreover, 18 of the elderly waiver reports (26 percent) from 12 states did not include a description of the process for monitoring the standards and safeguards under the waiver, as required on the reporting form. State officials in South Carolina, Texas, and Washington informed us they use a wider range of quality assurance mechanisms in their waiver programs than were described in either their waiver application or their annual state waiver report. Officials in Washington informed us they use 12 of the 14 mechanisms identified in table 3, yet they included only 2 of these on their application and 3 in their most recent annual report. For example, Washington operates a complaint system for waiver providers but did not refer to this approach in its waiver application or annual report. On the other hand, only Washington included reviews or audits of case managers or case management agencies in its application or annual report, yet all three states provided information on their use of this quality assurance tool during our interviews. States’ formal reports to CMS on their quality assurance mechanisms may therefore understate the nature and extent of their oversight approaches. Although information on the quality of care provided in the 79 waiver programs serving the elderly is limited, state oversight problems were identified by CMS regional offices or states in 15 of 23 waivers and quality- of-care problems in 36 of 51waivers that we were able to examine. We were unable to analyze findings related to 28 waivers serving the elderly for various reasons: they lacked a current regional office review or a waiver review report was never finalized, the annual state waiver report lacked the relevant information, or the waivers were too new to have an annual state report. 
Because of incomplete information and the absence of current reviews for many of the active waivers, the extent of quality-of-care problems is unknown. CMS regional office reviews or state audits identified weaknesses in state oversight for waivers serving the elderly in 15 of the 23 waivers we examined. In some cases, the waiver programs did not have essential oversight systems or processes in place. For example, in the case of a Virginia assisted living waiver that had over 1,250 beneficiaries, the Philadelphia regional office found several state oversight problems, including (1) no system in place to track the completion of the required annual resident assessments, (2) insufficient monitoring to ensure that beneficiaries were cared for in settings able to meet their needs, (3) insufficient monitoring to ensure that state standards were met for basic facility safety and hygiene, and (4) failure to inspect medication administration records sufficiently to ensure that medication was being dispensed safely and by qualified staff. The regional office identified serious lapses in Virginia's oversight of the waiver and the protection of beneficiaries, resulting in both medical and physical neglect of waiver beneficiaries. On the basis of the regional office review findings, HCFA allowed the waiver to expire in March 2000. In other cases, states may have had an oversight system or process in place, but those systems or processes were determined to be inadequate. Five state audit agency reports we reviewed identified inadequate monitoring systems in state waiver programs. For example, Connecticut had a policy in place for monitoring and evaluating its HCBS waiver program, but, from January 2000 through March 2001, it conducted no quality assurance reviews of the agencies it contracted with to coordinate and manage services for waiver beneficiaries. CMS regional office reviews and states' annual waiver reports identified quality-of-care related problems in 36 of 51 HCBS waiver programs for the elderly that we were able to examine. Specifically, they found weaknesses in the delivery of key elements of home and community-based services that could affect waiver beneficiaries' health and welfare (see table 4). Typically, the reports did not provide sufficient detail to demonstrate the impact of these weaknesses on waiver beneficiaries. Consequently, few, if any, specific cases of beneficiary harm were identified. The most frequently identified quality-of-care problems in waivers serving the elderly involved failure to provide authorized or necessary services, inadequate assessment or documentation of beneficiaries' care needs in the plan of care, and inadequate case management. Provision of authorized or necessary services. Identified problems included (1) services identified in plans of care not rendered, (2) inadequate nutrition provided to waiver beneficiaries, and (3) discontinuation of services without adequate notice to beneficiaries. For example, CMS's Dallas regional office found that significant numbers of Oklahoma waiver beneficiaries did not receive personal care services from their direct-care provider—4,303 beneficiaries (27 percent) received none of their authorized personal care services and 7,773 beneficiaries (49 percent) received only half of their authorized services. While the consequences for beneficiaries were not identified in this review, failure to provide authorized needed services may result in harm and could affect the continued ability of beneficiaries to be cared for at home. Plan of care.
Issues included plans of care that (1) insufficiently addressed the needs of waiver beneficiaries, (2) were not completed or updated appropriately, and (3) were missing from beneficiaries' files. In the review of one of the Florida waivers, CMS's Atlanta regional office staff found several instances where needs identified through individual assessments, including significant changes in waiver beneficiaries' conditions, were not addressed in the plan of care, a situation that could lead to beneficiaries not receiving the necessary services. Without an appropriate plan of care to direct the type and amount of services to be delivered, the waiver beneficiary may not receive an adequate level of care. Case management. Examples of case management problems included case managers who (1) were unaware of lapses in the delivery of care to beneficiaries, (2) were not always aware of procedures or protocols for reporting abuse, neglect, or exploitation, (3) failed to complete resident assessments, left service plans incomplete or inappropriate, or updated plans of care late, or (4) did not always appear to have a clear understanding of service definitions or requirements of the waiver or Medicaid program. CMS has not developed detailed guidance for states on appropriate quality assurance approaches as part of the initial waiver approval process. Moreover, although CMS oversight has identified some quality problems, it does not adequately monitor HCBS waiver programs or the quality of care provided to waiver beneficiaries, whether in waivers serving the elderly or in those serving other target populations. CMS does not hold its regional offices accountable for conducting and documenting periodic waiver reviews, nor does CMS hold states accountable for submitting annual reports on the status of quality in their waivers. As of June 2002, about one-fifth of the 228 waivers in place for 3 years or more had either never been reviewed or were renewed without a review. We found that the reviews varied considerably in the number of beneficiary records examined and the method of determining the sample, potentially limiting the generalizability of findings. According to CMS regional office staff, the allocation of staff resources and travel funding levels have at times impeded the scope and timing of their reviews. In addition, some regional office staff told us that limited travel funds have resulted in the substitution of more limited desk reviews for on-site visits and in the conduct of reviews with one staff member when two would have been preferable. CMS has a number of initiatives under way to generate information and dialogue on quality assurance approaches, but the agency's initiatives stop short of (1) requiring states to submit detailed information on their quality assurance approaches when applying for a waiver or (2) stipulating the necessary components for an acceptable quality assurance system. CMS recognizes that insufficient attention has been given to the various mechanisms that states could and should use to monitor quality in their waiver programs. As described in appendix VI, the initiatives CMS has under way include identification of strategies that states are currently using to monitor and improve quality in home and community-based care, distribution of a guide on quality improvement and assessment mechanisms for states and regional offices, and provision of a variety of technical assistance and resources to states.
The agency also has implemented a new HCBS waiver quality review protocol for use by regional offices in assessing whether state waivers should be renewed. Regional office staff told us that some states have begun to modify their approaches to quality assurance in HCBS waivers based on the use of the new waiver review protocol. For example, Washington officials established a new quality assurance unit within the agency that oversees its waiver for the elderly. In May 2002, CMS also introduced a voluntary application template for its new consumer-directed HCBS waiver that asks for a detailed description of states' quality assurance and improvement programs, including (1) the frequency of quality assurance activities, (2) the dimensions monitored, (3) the qualifications of quality assurance staff, (4) the process for identifying problems, including sampling methodologies, (5) provisions for addressing problems in a timely manner, and (6) the system for handling critical incidents or events. While these CMS activities are intended to facilitate the development of HCBS-related quality assurance approaches, they do not constitute a consistent set of minimum requirements and guidance that states could use to obtain approval for their HCBS programs. In addition to the lack of detailed guidance for states, CMS is not holding its own regional offices or states accountable for oversight of the quality of care provided to individuals served under HCBS waivers. CMS regional offices are expected to conduct periodic waiver reviews to determine whether states are protecting the health and welfare of waiver beneficiaries. Annual state reports are required by statute, and CMS regulations indicate that they are intended to play a key role in determining whether a waiver should be renewed. We found that regional offices are neither conducting waiver reviews prior to renewal nor obtaining complete annual state reports in a timely manner. As a result, CMS has not fully complied with the statutory and regulatory requirements that condition the renewal of HCBS waivers on states fulfilling their assurances that necessary safeguards are in place to protect the health and welfare of waiver beneficiaries. Most CMS regional offices have not conducted timely reviews of the state agencies administering waivers serving the elderly and other target populations or completed reports to document the results of their reviews. Periodic on-site reviews are used to determine, among other things, whether a state is ensuring the health and welfare of waiver beneficiaries. Guidance from CMS headquarters instructs the regional offices to conduct reviews before the first renewal of a waiver at the end of 3 years and within 5 years for subsequent waiver renewals. Eighteen percent of all HCBS waivers (42 of 228) that had been in place for 3 years or more as of June 2002 either had never been reviewed by the regional offices or had not been reviewed prior to their last waiver renewal. Approximately 132,000 beneficiaries were served by these 42 waivers in 1999. Fourteen of the 42 waivers—serving approximately 37,000 waiver beneficiaries in 1999—have had 10 or more years elapse without a regional office review (see table 5). CMS's Dallas regional office was responsible for 9 of these 14 waivers. Over a 10-year period, a regional office should have conducted at least two reviews for each waiver. The New Mexico AIDS Waiver, initially approved in June 1987, has been in place the longest without ever being reviewed—15 years.
CMS officials were aware that regional offices had not reviewed some waivers but were unaware of the extent of the problem. As of June 2002, based on an analysis of the most recent regional office review that occurred prior to October 2001 for each of the waivers, we found that 23 percent of the review reports (36 of 158) in over half of the regional offices had not been finalized. CMS requires its regional offices to prepare a final report on each HCBS review to document their findings, recommendations, and the state response. Without such a final report, there is no formal document to indicate whether a state has fulfilled the required assurances, including those related to the health and welfare of waiver beneficiaries. The New York regional office did not finalize 11 of its 12 reviews, dating back to 1998, and the San Francisco regional office did not finalize 7 of its 13 reviews, 1 of which was for a review that occurred in 1990. Without a final report documenting the review results, CMS cannot be assured that, if problems were identified, they were appropriately addressed. Many state annual waiver reports submitted to CMS regional offices are neither timely nor complete. During the interval between regional office reviews, the required annual state waiver reports provide key information on how states monitor beneficiaries' quality of care and on any quality-of-care related problems. According to regional office officials, states routinely fail to submit these annual reports within the required time frame—within 6 months after the period covered. In August 2000, officials in CMS's Philadelphia regional office reported that they had current annual state reports for less than half (11 of 28) of the waiver programs in their region. Our review of the most recent annual state reports for 70 of 79 HCBS waivers serving the elderly confirmed that producing these reports remains a problem: (1) reports for more than a third of the waivers were at least 1 year late—the most recent report from one of Louisiana's HCBS waivers was for calendar year 1997, (2) reports for approximately one-fourth of the waivers provided no information on whether deficiencies had been identified through the monitoring processes, and (3) five reports indicated that deficiencies had been identified but provided no additional information about the nature of or response to the problems. CMS headquarters has no central repository for annual state reports but is in the process of establishing a centralized database for state report information sometime in 2003, a development that could facilitate ongoing monitoring of the timeliness and completeness of these reports. Our analysis of CMS's oversight activities for the 15 largest HCBS waivers serving the elderly demonstrates the extent of oversight weaknesses. Overall, 8 of the 10 CMS regional offices provided inadequate oversight for 13 of these 15 largest state waivers for the elderly, which, in 1999, served about 215,000 beneficiaries—over half (57 percent) of the total elderly waiver beneficiary population at that time (see table 6). We found the following: Four of the 15 HCBS waivers were not reviewed in a timely manner by the CMS regional office—none of the 4 had been reviewed for 8 or more years, yet all were renewed. Four of the 15 waivers had no waiver review final report completed by the regional office. Two of the reviews occurred in 1999, and for the remaining 2 waivers the regional office could not tell us the date of the reviews or whether a final report was available.
Four of the 15 waivers lacked a timely annual state report to the regional office. As of April 2002, the most recent annual report for these 4 waivers was either for the waiver period ending August 1999 (1 waiver) or September 2000 (3 waivers). Seven of the 15 waivers had annual state reports that were incomplete because they lacked information either on their quality assurance mechanisms or on whether deficiencies had been identified. The limited scope and duration of periodic regional office waiver reviews raise a question about the confidence that can be placed in findings about the health and welfare of waiver beneficiaries. CMS regional offices conduct reviews using guidance provided by headquarters. The guidance instructs regional office staff to review beneficiary records; interview waiver beneficiaries, primary direct-care staff of waiver providers, and case managers; and observe waiver beneficiaries and the interaction between the beneficiary and direct-care staff. This guidance was updated in January 2001 when use of the new HCBS waiver quality review protocol became mandatory. However, the new protocol does not address important operational issues such as an adequate sample size or sampling methodology for the beneficiary record reviews and interviews to provide a basis for generalizing the review findings; whether the sample should be stratified according to the different groups served under the waiver (i.e., for a waiver serving both the elderly and the disabled, selecting a stratified sample based on the proportion of persons aged 65 and over and those aged 18 to 64 with disabilities); and the appropriate duration of an on-site review, taking into consideration the number of sites and beneficiaries covered in the waiver. Our analysis of regional office review reports for 21 HCBS waivers serving the elderly found that the reviews varied considerably in the number of beneficiary records evaluated and their method of determining the sample, potentially limiting their ability to generalize findings from the sample to the universe of waiver beneficiaries. Specifically, we found a wide range of sample sizes in 15 of the 21 regional office reviews that included such information. The sample sizes for record reviews ranged from 14 beneficiaries (of 73 served) in the Boston regional office review of the Vermont waiver to 100 beneficiaries (of 24,000 served) in the Seattle regional office review of the Washington waiver. (See app. VII for a summary of the sample sizes in the regional office reviews.) Eleven of the 15 CMS waiver review reports included information on the specific number of beneficiaries interviewed or observed during the review; however, we could not determine whether beneficiary interviews or observations had been conducted in other waiver reviews. The method by which the beneficiary record review samples were selected varied, with some regional offices using randomized sampling methods, some basing their sample on geographic location, and others reporting no method of sample selection. For most of these same 15 waivers serving the elderly, we found that the regional staff typically spent 5 days conducting the waiver review—regardless of the number of waiver beneficiary records sampled or the overall size of the waiver. However, the Seattle regional office staff conducted only three reviews in the past 4 years, targeting its largest HCBS waivers.
For example, the regional office has spent 3 to 4 weeks per waiver for the on-site portion of the review and another week for state agency interviews and review of documents. Generally, the number of beneficiary records reviewed and beneficiaries interviewed is dependent on (1) the number of days allocated to the waiver review by a regional office and (2) the number of regional office staff members available. The limited number of assigned staff and available clinical specialists, coupled with insufficient travel funds allocated to regional office oversight of HCBS waivers, have contributed to the timeliness and scope problems we identified. According to regional offices, the level of attention given to HCBS waiver oversight, including periodic reviews when waivers come up for renewal, is at the discretion of regional office management and competes with other workload priorities. In August 2000, some regional office officials formally communicated to HCFA headquarters their concern that the agency was not devoting sufficient resources to properly monitor the quality of HCBS waiver programs. Regional office officials responsible for waiver oversight told us that the number of staff available for waiver oversight has not kept pace with the growth in the number of waivers and beneficiaries served and that resource issues remain a key challenge for waiver oversight. We found that CMS regional offices differed substantially in the number of staff assigned to waiver oversight and the extent to which staff with clinical or program expertise were assigned to waiver oversight. According to Dallas, Denver, and Philadelphia regional office staff, the level of resources allocated by the regional offices for such reviews dictated the number of waiver beneficiary records reviewed or beneficiary interviews conducted. Six of the 10 regional offices had two or fewer full-time-equivalent (FTE) staff assigned to monitoring HCBS waivers (see table 7). Moreover, we found that the number of regional office staff assigned to monitoring HCBS waivers bore little relationship to the waiver workload. For example, the Chicago regional office had six FTE staff to monitor 34 HCBS waivers with 131,902 waiver beneficiaries, while the Dallas regional office had one-and-a-half FTE staff for 28 HCBS waivers with 63,614 waiver beneficiaries. Until a few years ago, one person in the Philadelphia regional office was assigned to oversee HCBS waivers—despite growth in the number and size of the region's HCBS waivers over the past decade. As shown in table 7, 3 of the 10 regional offices had specialists assigned to waiver oversight, such as registered nurses or qualified mental retardation professionals. When asked to identify one of the greatest improvements that could be made in federal waiver oversight, 3 of the 10 regional offices identified the direct assignment of specialist staff. CMS's waiver review protocol specifies that the participation of clinical and other specialist staff is important to assessing issues related to beneficiaries' health and welfare. However, many regional offices indicated that they had to "borrow" specialist staff from other departments within the region in order to conduct their waiver reviews. The Seattle and Boston regional offices provide contrasting examples of the role played by regional office management in obtaining clinical staff to conduct reviews. According to Seattle regional office staff, it has been a challenge to obtain specialist staff on the waiver review teams.
For 4 to 5 years, the region did not conduct any HCBS waiver reviews. In the past 4 years, it has conducted only three reviews—regardless of the number of waivers due for review. The region has four waivers that have never been reviewed, two dating back to 1989. According to the staff, the prior regional administrator did not target resources for HCBS waiver reviews, and it was difficult to obtain clinical and other specialist staff from other departments to assist in conducting reviews. Although the Boston regional office has no specialist staff assigned to waivers, its officials informed us that conducting HCBS waiver reviews has been a management priority, as evidenced by the fact that the region always includes a registered nurse or other relevant specialist on the review team. We noted that the Boston regional office has conducted timely reviews of all of its waivers. When asked to identify the greatest challenges related to HCBS waiver oversight, 4 of the 10 CMS regional offices identified insufficient travel funding. Regional office staff indicated that there appears to be no correlation between the amount of travel dollars made available by the regional offices for the reviews and the review schedule set forth by CMS headquarters. Moreover, they told us that they had to compete for limited travel resources with the regional office staff responsible for overseeing nursing homes. Regional office responses to inadequate travel funds have included (1) conducting a "desk review" without visiting state agency officials, providers, and waiver beneficiaries, (2) limiting the number of days allotted for the review, (3) reducing the number of staff assigned to conduct the review, or (4) not reviewing a particular waiver at all. In the New York regional office, a lack of travel funds led to desk reviews for 9 of 15 waivers. According to the Philadelphia regional office's final report for a Virginia HCBS waiver, some cases that should have been pursued were not reviewed because only 1 week had been allotted for fieldwork, and 2 of the 18 cases selected for field review were dropped because there was insufficient time to conduct the review. In 2001, the Chicago regional office conducted a limited on-site review of a Michigan HCBS waiver serving over 6,000 beneficiaries. During the review, three case files were examined and one beneficiary was interviewed. According to Denver regional office officials, travel budget problems have meant that the reviews are conducted by one staff member when two would be preferable.
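The stakes of these sampling choices can be made concrete. Assuming a simple random sample—an assumption, since some offices sampled by geography or reported no method at all—the 95 percent margin of error for an estimated proportion can be computed with the standard finite population correction. The following minimal Python sketch is illustrative only and is not part of CMS's review protocol; it applies the formula to the smallest and largest record-review samples cited above and to the three case files examined in the Michigan review.

```python
# Illustrative only: approximate 95% margin of error for a proportion
# estimated from a simple random sample of n records drawn from N
# beneficiaries, using the normal approximation with a finite population
# correction. p = 0.5 gives the worst (widest) case.
import math

def margin_of_error(n: int, N: int, p: float = 0.5, z: float = 1.96) -> float:
    fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
    return z * math.sqrt(p * (1 - p) / n) * fpc

print(f"Vermont (14 of 73):         +/- {margin_of_error(14, 73):.1%}")        # ~23.7%
print(f"Washington (100 of 24,000): +/- {margin_of_error(100, 24_000):.1%}")   # ~9.8%
print(f"Michigan (3 of 6,000):      +/- {margin_of_error(3, 6_000):.1%}")      # ~56.6%
```

Even under the most favorable assumption of random selection, a three-record review cannot support any generalization about a waiver serving over 6,000 people, which is the substance of the generalizability concern raised above.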
The current size and likely future growth in HCBS waiver programs that serve a vulnerable population—particularly elderly individuals eligible for nursing home placement—make it even more essential for states to have appropriate mechanisms in place to monitor the quality of care. While CMS requires periodic reviews of state waiver programs to help ensure that beneficiaries' health and welfare are adequately protected, many waivers have been renewed without such a review. In addition, guidance on how these waiver reviews should be conducted does not address important operational issues such as sample size and sampling methodology. Consequently, there is little relationship among the amount of time spent on-site conducting waiver reviews, the number of beneficiary records reviewed, and the number of beneficiaries served. CMS expects its regional offices to interview and observe waiver beneficiaries to obtain a first-hand perspective on care delivery and the adequacy of case management, but beneficiary interviews are not a component of all regional office reviews. Moreover, staff resources and travel funds currently allocated to conduct waiver reviews are insufficient. Without necessary attention from CMS, these guidance and resource issues will only be exacerbated by the expected future growth in the number of persons served through HCBS waiver programs. CMS has a number of initiatives directed towards improving quality and quality assurance for home and community-based waiver programs. They do not, however, address the specific oversight weaknesses we have identified in this report, such as the lack of detailed criteria or guidance for states regarding the necessary components of a quality assurance system to help ensure the health and welfare of waiver beneficiaries. To ensure that state quality assurance efforts are adequate to protect the health and welfare of HCBS waiver beneficiaries, we recommend that the Administrator of CMS develop and provide states with more detailed criteria regarding the necessary components of an HCBS waiver quality assurance system, require states to submit more specific information about their quality assurance approaches prior to waiver approval, and ensure that states provide sufficient and timely information in their annual waiver reports on their efforts to monitor quality. To strengthen federal oversight of the growing HCBS waiver programs and to ensure the health and welfare of HCBS waiver beneficiaries, we recommend that the Administrator ensure allocation of sufficient resources and hold regional offices accountable for conducting thorough and timely reviews of the status of quality in HCBS waiver programs, and develop guidance on the scope and methodology for federal reviews of state waiver programs, including a sampling methodology that provides confidence in the generalizability of the review results. We provided a draft of this report to CMS and to South Carolina, Texas, and Washington, the three states in which we obtained a more in-depth perspective on states' quality assurance approaches. (CMS's comments are reproduced in app. VIII.) CMS affirmed its commitment to its ongoing responsibility, in partnership with the states, to ensure and improve quality in HCBS waivers.
The agency stated that the federal focus should be on assisting states in the design of HCBS programs, respecting the assurances made by states, improving the ability of states to remedy identified problems, providing assistance to states to improve the quality of services, and thereby assisting people to live in their own homes in communities of their choice. CMS generally concurred with our recommendations to improve state and federal accountability for quality assurance in HCBS waivers but raised concerns about our definition of quality, how best to ensure quality in state waiver programs, the appropriate state and federal oversight roles, and the resources and guidance required to carry out federal quality oversight. CMS stated that the draft report's definition of quality in waivers was too narrow because it ignored a wide variety of activities used to promote quality. Furthermore, CMS cited the availability of a broad array of waiver services with choice over how, where, and by whom services are delivered as important to beneficiaries' quality of life. According to CMS, growth in the number of persons served by HCBS waivers was evidence of beneficiary satisfaction. (See CMS's "General Comments," 2 and 3.) Rather than defining quality ourselves, we reported the approaches states used to assure quality in their waiver programs. By analyzing state applications for waivers serving the elderly and state annual waiver reports, we identified a broad array of state quality assurance activities, including licensing and certification of providers and beneficiary satisfaction surveys (see tables 2 and 3). We disagree with CMS's assertion that beneficiaries' preference for services that allow them to remain in the community can be equated with satisfaction with the services delivered. Even assuming that beneficiary satisfaction alone is a reliable indicator of quality, CMS offered no empirical evidence to support its position. Only about half of the state annual waiver reports we reviewed indicated that states measured beneficiary satisfaction with services. Moreover, our review of quality-of-care problems identified in waiver programs serving the elderly demonstrated that failure to provide needed or authorized services was a frequently cited problem. For example, as we noted in the draft report, a CMS review found that 27 percent of beneficiaries served by one state's HCBS waiver for the elderly did not receive any of their authorized personal care services, and 49 percent received only half. CMS commented that the draft report failed to recognize that HCBS programs require a different approach to quality than their institutional alternatives and "leaves the distinct impression that the most effective way to assure and improve quality is through the process of inspection and monitoring." CMS asserted that design of an HCBS waiver, as opposed to monitoring its implementation, is the most important contributor to quality, and the agency's recent efforts have focused on working with states to improve design decisions and design options. (See CMS's "General Comments," 4 and 7.) We disagree with CMS's characterization of our findings. Our report recognizes the importance of maintaining states' considerable flexibility in ensuring quality in HCBS waivers but concludes that insufficient emphasis has been placed on balancing this flexibility with measures to ensure the accountability called for by both statute and regulations.
Contrary to CMS’s comments, we did not recommend an additional or increased federal oversight role or the adoption of oversight systems such as those used for institutional providers. Our analysis and conclusions were based on the criteria established in both statute and regulations that entail federal oversight of waivers and that condition federal approval and renewal of waivers on states’ demonstrating to CMS that they have established and are fulfilling assurances to protect the health and welfare of waiver beneficiaries. We found that CMS currently receives too little information from states about their quality assurance approaches to hold them accountable, raising a question as to whether the agency has adequate information to approve or renew some waivers. While we agree that waiver design is important to ensuring quality, a state’s implementation of its quality assurance approaches is equally, if not more, important. In its protocol for reviewing states’ HCBS waivers, CMS gives equal emphasis to both the design and implementation of quality assurance mechanisms. Despite its concerns, CMS generally concurred with our recommendation to develop and provide states with more detailed criteria regarding the necessary components of an HCBS waiver quality assurance system. CMS cited its current effort to provide such guidance and indicated that it would work to more clearly define its criteria and expectations for quality. CMS commented that “the report lends itself to the conclusion that the federal government ought to be the primary source of quality monitoring and improvement, and fails to recognize that the federal statutes convey respect for state authority and competence in the administration of HCBS programs.” (See CMS’s “General Comments,” 6.) We agree that the states and the federal government have distinct quality monitoring roles but believe that CMS has mischaracterized our description of those roles as defined in statute and regulations. In addition, we believe that CMS has understated the importance of federal oversight. The report describes states’ statutory and regulatory responsibility to (1) include information in their waiver applications on their approaches for protecting the health and welfare of HCBS beneficiaries and (2) report annually on state quality assurance approaches and deficiencies identified through state monitoring. We reported that waiver applications contained limited information on state quality assurance approaches and that many state annual waiver reports were neither timely nor complete. Eleven of the 15 applications for the largest waivers serving the elderly included none of the following well-recognized quality assurance tools: complaint systems, corrective action plans, sanctions, or beneficiary satisfaction surveys. Annual reports for more than a third of 70 waivers serving the elderly were at least 1 year late, and one-quarter of such reports did not indicate whether deficiencies had been identified, as required. CMS acknowledged the need for more comprehensive information from states at the time of application and at subsequent renewals. Consistent with our recommendation, CMS agreed to revise and improve the application process and annual state waiver report to include more information on states’ quality approaches and activities. 
The report also describes CMS's statutory responsibility for ensuring that states adequately implement their quality assurance approaches—a responsibility operationalized in policy guidance to the agency's regional offices. Waiver reviews are expected to occur at least once during the initial 3-year waiver period and during each 5-year renewal cycle. We did not propose an expanded federal quality assurance role. We reported that, in some cases, CMS had an insufficient basis for determining that states had met the required assurances for protecting beneficiaries' health and welfare. As of June 2002, almost one-fifth of all HCBS waivers in place for 3 years or more had either never been reviewed or were renewed without a review; 14 of these waivers had 10 or more years elapse without a regional office review. Some CMS waiver reviews have uncovered serious state oversight weaknesses as well as quality-of-care problems. For example, the review of one state's waiver found both medical and physical neglect of beneficiaries because of serious lapses in state oversight, resulting in a decision to let the waiver expire. The full extent of such problems is unknown because many state waivers lacked a recent CMS review. CMS did not comment directly on our conclusion that the agency is not fully complying with statutory and regulatory requirements when it renews waivers. The agency suggested it would be far more efficient and equally effective for federal waiver reviews to focus on only one waiver in cases where there are multiple waivers in a state serving subsets of the same target group and using the same quality assurance system; however, CMS's own guidance to its regional offices calls for at least one full review during a given waiver cycle, with each waiver receiving at least some level of review. CMS commented that the draft report's recommendations to hold regional offices accountable for conducting thorough and timely reviews of quality in HCBS waiver programs, including a sampling methodology that provides confidence in the generalizability of the review results, would require a huge new investment or redirection of federal resources. Specifically, CMS commented that the report "does not address the significant resources that would need to be found or redirected to implement its recommendations" and "fails to acknowledge the lack of appropriated funds for HCBS quality." The agency stated that such funds would have to come from CMS's operating budget. CMS also pointed out that it had already taken steps organizationally to ensure that enough resources are devoted to quality and that they are appropriately positioned within CMS. (See CMS's "General Comments," 5, 8, and 9.) CMS's existing waiver review protocol directs regional offices to select a sample of waiver beneficiaries for activities such as interviews and observations, but it does not adequately address sampling methodology. We found that sample selection methods varied, with some regional offices selecting random samples, some basing their sample on geographic location, and others reporting no methodology for sample selection. Given that the regional offices are already generalizing their findings to the waiver program as a whole, we believe explicit and uniform sample selection guidance is imperative.
At the same time, we believe that, as CMS suggested, samples may appropriately be targeted to certain types of participants or services so that, over time, greater assurances are provided about the quality of care. In response to our recommendation to develop guidance on the scope and methodology for federal reviews of state waiver programs, CMS said it is committed to developing additional policy guidance. We did not recommend significant increases in appropriated funds for conducting waiver reviews. Rather, our draft report recommended that CMS ensure allocation of sufficient resources and hold regional offices accountable for conducting thorough and timely reviews of the status of quality in HCBS waiver programs. The CMS Administrator is responsible for assessing whether existing funding levels are adequate to satisfy statutory and regulatory requirements, including periodic regional office review of the states’ assurances. The Administrator may indeed conclude that, to carry out these oversight responsibilities for the growing numbers of frail beneficiaries who prefer and rely on these services, there may be a need to reallocate existing funds or to request additional funds. CMS also noted that it had recently redeployed and reorganized headquarters staff to incorporate the quality function into each program area, including the operational unit that oversees HCBS waivers. Despite CMS’s concerns about the need for significant funding increases, the agency noted the importance of further investments to advance both state and federal capability to assure quality in waiver programs. CMS commented that the draft report had numerous technical inaccuracies, but cited only one and provided no additional examples or technical comments to accompany its written response (CMS’s “General Comments,” 1). Although CMS stated that our characterization of federal requirements concerning waiver renewals was inaccurate, its suggested changes and our report language were essentially the same. To avoid any confusion, however, we have added the statute’s specific language to the background section of the report. CMS further commented that our report should recognize that the Congress created an enforcement mechanism that places great reliance on a system of assurances. Our draft report made that point while also describing CMS’s responsibility, as specified in its implementing regulations, to determine that each state has met all the assurances set forth in its waiver application before renewing a waiver. CMS stated that the draft report failed to acknowledge the steps it has already taken to ensure quality. (CMS’s “General Comments,” 10.) To the contrary, the draft report described each of the efforts CMS referred to as under way to monitor and improve HCBS quality and addressed each activity: the waiver review protocol, the HCBS quality framework, the development of tools to assist states, development of the Independence Plus template, and the national technical assistance contractor. However, we found that CMS’s waiver review protocol does not address key issues relating to the scope and methodology of federal oversight reviews. Moreover, the use of the Independence Plus template, which requires more specific information on states’ quality assurance approaches, is voluntary rather than mandatory. 
In its written comments, Texas stated that it supports proper federal oversight of HCBS waivers but stressed the need to maintain flexibility in designing waivers to meet the unique needs of residents requiring community care. The state believes that such flexibility should not be lost in establishing more specific quality assurance criteria. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7118 or Walter Ochinko at (202) 512-7157 if you have questions about this report. Other contributors to this report included Eric Anderson, Connie Peebles Barrow, and Kevin Milne. This appendix describes our scope and methodology, following the order that our results are presented in the report. Data on HCBS Waivers. To identify the universe of state HCBS waivers as of June 2002, we asked the CMS regional offices to identify each waiver, including the target population and the waiver start date. The regional offices identified a total of 263 waivers. Using this information and other data, we identified 77 waivers serving the elderly. To identify trends in Medicaid long-term care and Medicaid waiver spending, we analyzed data covering fiscal years 1991 through 2001 from HCFA reports (HCFA Form 64) compiled by The MEDSTAT Group. To identify trends in the overall number of Medicaid waiver beneficiaries, number of elderly waiver beneficiaries, average waiver size, and average per beneficiary expenditures for waivers serving the elderly, we analyzed data from state annual waiver reports (HCFA Form 372) covering fiscal years 1992 through 1999 in a database compiled by researchers at the University of California, San Francisco. State Quality Assurance Mechanisms. In the absence of comprehensive, readily available information on the HCBS quality assurance mechanisms that states use, we analyzed the information available in a subset of state waiver applications and annual state waiver reports for waivers serving the elderly. Specifically, we analyzed (1) initial and/or renewal applications for the 15 largest waivers serving the elderly as of 1999 and (2) annual state waiver reports from 70 of the 79 waivers serving the elderly. The waiver applications are used by CMS, in part, to assess whether the quality assurance mechanisms in place warrant waiver approval. The annual waiver reports are required to provide a description of the process for monitoring the standards and safeguards under the waiver and the results of state monitoring. Of the 70 state annual waiver reports that we analyzed, 52 contained some information about states' monitoring processes. Of the remaining 9 waivers, 8 were new waivers for which the state had not yet submitted an annual report, and for 1 waiver the regional office did not provide a copy of the annual state report. State Oversight and Quality of Care. To assess state oversight issues in waivers serving the elderly, we examined regional office waiver review reports for 21 waivers and state audit reports related to 5 waivers, the only reports we were able to analyze, for a total of 23 discrete waivers (3 waivers were covered by both types of reports).
To assess quality-of-care problems in waivers serving the elderly, we reviewed 51 waivers for which we were able to analyze regional office final reports and annual state reports. Regional office waiver review reports identified problems in 19 waivers, and annual state reports identified problems in 22 waivers, for a total of 36 discrete waivers (5 waivers were identified in both types of reports). These reports identified no quality-of-care problems in the remaining 15 waivers. We were unable to analyze findings from 28 additional waivers because (1) they lacked a recent regional office waiver review completed during the period of October 1998 through May 2002 or an annual state waiver report, (2) the annual state waiver report did not address whether deficiencies had been identified or provided no information on the deficiencies found, or (3) the waivers were too new to have had a regional office review or to provide an annual state report. CMS Oversight. To determine the adequacy of CMS regional office oversight of states' waiver programs, we asked all 10 CMS regional offices to provide the following information for each of the waivers for which they were responsible, including both waivers for the elderly as well as those serving other target populations: (1) the waiver start date, (2) the current waiver time period, (3) the fiscal year the waiver was last reviewed, and (4) whether or not the waiver review report was finalized. Of the 263 waivers, 228 had been in place for 3 years or more and therefore should have had a regional office review. The other 35 waivers were less than 3 years old and would not yet have qualified for a review as of June 2002. For information on sample sizes and duration of the reviews, we analyzed CMS's HCBS waiver review final reports for waivers serving the elderly that were issued during the period of October 1998 through May 2002. Fifteen of the 21 waiver review reports that we received included information on the number of waiver beneficiary records reviewed and on the duration of the reviews. Some review reports also provided the number of beneficiaries that were interviewed or observed. We also discussed regional office oversight activities with CMS headquarters' staff. Table 8 contains a list of services provided through the HCBS waivers serving the elderly and the suggested CMS definitions. However, states may provide alternative definitions in their waiver applications. [Appendix tables: percent and number of Medicaid beneficiaries served by waivers for the elderly, by state. Table notes: Arizona does not have any HCBS waivers for the elderly, as it operates its Medicaid program as a demonstration project under a section 1115 waiver; in 1999, the District of Columbia did not have any HCBS waivers for the elderly in operation; with the exception of the number of waivers for the elderly, the data for one state are based on the author's estimates (see Harrington, Aug. 2001).] CMS has undertaken a series of initiatives to generate information and dialogue on existing systems of quality assurance in HCBS waivers and to provide a range of assistance to states in this area. Approximately $1 million was budgeted for these HCBS quality initiatives in fiscal year 2001 and $3.4 million in fiscal year 2002.
Through its HCBS quality initiatives, CMS intends to more closely assess the status of quality assurance efforts currently in place and to provide direct assistance to states in this area. CMS’s initiatives include (1) developing a conceptual framework for defining and measuring quality, (2) creating tools for states to adapt and use in assessing quality, such as model consumer experience surveys, and (3) providing technical assistance and resources for quality assurance and improvement. These initiatives, while important, do not address the lack of detailed requirements for states on the necessary components of an acceptable quality assurance system or the weaknesses in regional office oversight of state HCBS waivers that we identified elsewhere in this report. Quality Framework and Expectations. CMS sponsored the development of a framework for quality in home and community-based services that focuses on outcomes in several key areas including beneficiary access to care, safety, satisfaction, and meeting beneficiary needs and preferences. The next phase involves identifying strategies that states are currently using to monitor and improve quality within these key areas. While the expectations contained in the quality framework have not been specified in CMS regulations, they are reflected in the application template for CMS’s new consumer-directed HCBS waiver, Independence Plus. States’ use of the template for the Independence Plus waiver is voluntary. The template asks states for a detailed description of their quality assurance and improvement programs—something not currently required as part of the general HCBS waiver application. Guidance for using the template notes that the description should include (1) information on the frequency of quality assurance activities, (2) the dimensions that will be monitored, (3) the qualifications of persons conducting quality assurance activities, (4) the process for identifying problems, including sampling methodologies, (5) provisions for assuring that problems are addressed in a timely manner, and (6) the system to receive, review, and act on critical incidents or events. Quality Assurance Mechanisms. CMS is also developing quality assessment and improvement mechanisms for states. For example, to develop a guide for states and CMS regional offices, a contractor reviewed the literature on quality measurement and improvement in home and community-based care, convened an expert panel, and conducted interviews with state officials. As of April 2003, the guide was undergoing final clearance within CMS. It is expected to include (1) benchmarks for effective quality assurance programs in home and community-based care, (2) a discussion of the knowledge and mechanisms needed to design, implement, and assess quality activities in home and community-based care, and (3) suggestions for addressing limitations and problems in assuring quality in home and community-based care. Another contractor has developed and field-tested consumer experience surveys for use in waiver programs for the elderly and for persons with developmental disabilities. This contractor is also developing a set of performance indicators for states to use in guiding development and assessing quality in new self-directed HCBS waivers. Technical Assistance and Resources. Other CMS efforts focus on providing technical assistance and resources to states. 
One contractor has assembled a team of professionals with expertise in home and community-based services that can serve as a resource for both states and the CMS regional offices. Services available from these teams are expected to include conducting targeted reviews of waiver programs; providing suggestions to states regarding their quality assurance activities; consulting with CMS staff regarding quality aspects of specific waivers; and providing resource materials on quality assurance monitoring and improvement tools. This contractor is also assessing the types of data currently gathered by a sample of states that are, or could be, used for quality measurement and improvement; compiling information on selected data-driven state quality efforts; and providing technical assistance to the states. Finally, CMS sponsored a national conference on HCBS quality measurement and improvement in May 2002. This day-and-a-half-long conference—attended by state officials, CMS staff, and others—offered training and information on strategies and techniques for quality assurance and improvement in home and community-based care. [Appendix VII table: number of beneficiary records sampled and duration of each regional office review, in days. Table notes: in some cases the regional office review contained no information on beneficiary interviews or observations; one waiver review was conducted at the regional office rather than on-site at the relevant state agencies.]
Home and community-based settings have become a growing part of states' Medicaid long-term care programs, serving as an alternative to care in institutional settings, such as nursing homes. To cover such services, however, states often obtain waivers from certain federal statutory requirements. GAO was asked to review (1) trends in states' use of Medicaid home and community-based service (HCBS) waivers, particularly for the elderly, (2) state quality assurance approaches, including available data on the quality of care provided to elderly individuals through waivers, and (3) the adequacy of federal oversight of state waivers. GAO is recommending that the Administrator of CMS take steps to (1) better ensure that state quality assurance efforts are adequate to protect the health and welfare of HCBS waiver beneficiaries and (2) strengthen federal oversight of the growing HCBS waiver programs. Although CMS raised certain concerns about aspects of the report, such as the respective state and federal roles in quality assurance and the potential need for additional federal oversight resources, CMS generally concurred with the recommendations. From 1991 through 2001, Medicaid long-term care spending more than doubled to over $75 billion, while the proportion spent on institutional care declined. Over a similar time period, HCBS waivers grew from 5 percent to 19 percent of such expenditures—from $1.6 billion to $14.4 billion—and the number of waivers, participants, and average state per capita spending also grew significantly. Since 1992, the number of waivers increased by almost 70 percent to 263 in June 2002, and the number of beneficiaries, as of 1999, had nearly tripled to almost 700,000, of which 55 percent were elderly. In the absence of specific federal requirements for HCBS quality assurance systems, states provide limited information to the Centers for Medicare & Medicaid Services (CMS), the federal agency that administers the Medicaid program, on how they assure quality of care in their waiver programs for the elderly. States' waiver applications and annual reports for waivers for the elderly often contained little or no information on state mechanisms for assuring quality in waivers, thus limiting the information available to CMS for consideration before approving or renewing waivers. GAO's analysis of available CMS and state waiver oversight reports for waivers serving the elderly identified oversight weaknesses and quality-of-care problems. More than 70 percent of the waivers for the elderly that GAO reviewed documented one or more quality-of-care problems. The most common problems included failure to provide necessary services, weaknesses in plans of care, and inadequate case management. The full extent of such problems is unknown because many state waivers lacked a recent CMS review, as required, or the annual state waiver report lacked the relevant information. CMS has not developed detailed state guidance on appropriate quality assurance approaches as part of initial waiver approval. Although CMS oversight has identified some quality problems in waivers, CMS does not adequately monitor state waivers and the quality of beneficiary care. The 10 CMS regional offices are responsible for ongoing monitoring of HCBS waivers. However, CMS does not hold these offices accountable for completing periodic waiver reviews, nor does it hold states accountable for submitting annual reports on the status of waiver quality.
Consequently, CMS is not fully complying with statutory and regulatory requirements when it renews waivers. As of June 2002, almost one-fifth of waivers in place for 3 years or more had either never been reviewed or were renewed without a review; for an additional 16 percent of waivers, reports detailing the review results were never finalized. Regional office personnel explained that limited staff resources and travel funds often impede the timing and scope of reviews. While regional office reviews include record reviews for a sample of waiver beneficiaries, they do not always include beneficiary interviews. The reviews also varied considerably in the number of beneficiary records reviewed and their method of determining the sample.
Enactment of the TANF block grant in 1996 significantly changed federal welfare policy, as it both limited HHS's authority to regulate welfare programs and gave states more flexibility in designing and funding related programs. The TANF block grant is a $16.5 billion per year fixed federal funding stream to states, which is coupled with a maintenance-of-effort (MOE) provision that requires states to maintain a significant portion of their historic financial commitment to their welfare programs. TANF gave states flexibility in setting various welfare program aspects, such as cash assistance benefit levels and eligibility requirements, as well as in spending TANF funds. For example, when the number of families receiving cash assistance benefits declined after welfare reform, states were able to use available funds to enhance spending for noncash services, such as child care, work supports, and a range of other supports for low-income families. Due to these flexibilities, TANF programs differ substantially by state. Further, because of differences in state administration of the program, some state TANF programs also differ by local jurisdiction. In creating the TANF block grant, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) set out to increase the flexibility of states in operating a program designed for the following four purposes:

1. providing assistance so that children could be cared for in their own homes or in the homes of relatives;
2. ending families' dependence on government benefits by promoting job preparation, work, and marriage;
3. preventing and reducing the incidence of out-of-wedlock pregnancies; and
4. encouraging the formation and maintenance of two-parent families.

In line with the second purpose, PRWORA (1) established work participation rates as a requirement for states, which HHS uses to measure performance; (2) named 12 categories of work activities to be counted for the purpose of the measure; and (3) defined the average number of weekly hours that each family receiving TANF cash assistance must be engaged in an activity to count as participating. If TANF recipients engage in other activities provided or permitted under the state's TANF program, those activities do not count toward meeting the federal work participation requirements. In addition, TANF recipients who engage in work activities for less than the minimum required number of hours each week generally do not count as being engaged in work for purposes of the requirements. PRWORA also excluded some families from these work requirements, such as those in which children alone receive the cash assistance benefits. PRWORA established separate annual work participation rates for all families and all two-parent families receiving TANF cash assistance in each state. Although the required rates increased in the immediate years following TANF implementation, when they reached their maximums, the rates were set at 50 percent for all TANF families and 90 percent for two-parent TANF families. In short, these rate requirements mean that states are held accountable for ensuring that generally at least 50 percent of all families receiving TANF cash assistance participate in one or more of the 12 work activities for an average of 30 hours per week. However, the act also allowed states to annually apply for a reduction to the required work participation rates through the caseload reduction credit.
This credit was calculated annually by determining the change in caseload—that is, the average number of families receiving cash assistance—in the state between fiscal year 1995 and the fiscal year preceding the current one. If a state's caseload had decreased, the credit allowed the state to decrease its required work participation rate by the equivalent percentage. For example, if a state's caseload decreased by approximately 20 percent between fiscal year 1995 and the fiscal year preceding the current one, the state would receive a caseload reduction credit equal to 20 percent, which would result in the state having an adjusted work participation rate requirement of 30 percent for the current year. Because TANF caseloads significantly declined following TANF implementation, this credit enabled many states with fewer than 50 percent of their TANF families sufficiently engaged in countable work activities to still meet the federal work participation rates. (See fig. 1.) In addition, states could modify the calculation of their work participation rates through funding decisions. Specifically, because PRWORA's work participation requirements only applied to families receiving cash assistance funded with TANF block grant dollars, states could opt to use their MOE dollars to fund cash assistance for families less likely to meet their individual work participation requirements. By creating these MOE-funded separate state programs (SSP), states were able to remove selected families from the work participation rate calculation. (See fig. 2.) PRWORA established penalties for states that did not meet their required work participation rates and gave HHS the authority to make determinations regarding these penalties. When a state does not meet its required level of work participation, HHS will send the state a penalty notice. The state can accept the penalty, which reduces its annual block grant, or it can try to avoid the penalty. To do so, a state can opt to provide reasonable cause as to why it did not meet the work participation rate or submit a corrective compliance plan that will correct the violation and demonstrate how the state will comply with work participation requirements. In addition, if the state's failure to meet the work participation rate is due to circumstances that caused the state to become a "needy state" or extraordinary circumstances such as a natural disaster, HHS has the discretion to reduce a state's penalty. In 2006, DRA reauthorized the TANF block grant through fiscal year 2010 and made several modifications that were generally expected to strengthen TANF work requirements, help more families attain self-sufficiency, and improve the reliability of work participation data and program integrity. Specifically, DRA directed HHS to issue regulations by June 30, 2006, defining the 12 work activities, methods for reporting and verifying hours of work participation, and the circumstances under which a parent who resides with a child receiving assistance should be included in work participation rates. DRA also required (1) states to establish and maintain procedures consistent with the new regulations and (2) HHS to review these procedures to ensure they will provide an accurate measure of work participation. Further, DRA mandated that families receiving cash assistance through SSPs be included in the calculation of work participation rates, and it changed the caseload reduction credit by moving the base year for measuring caseload declines from 1995 to 2005.
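To make the credit arithmetic concrete, the short sketch below applies the percentage-decline rule described above under both the pre-DRA (fiscal year 1995) and post-DRA (fiscal year 2005) base years. It is an illustration only: the caseload figures are hypothetical, and HHS's actual calculation includes adjustments (for example, for changes in eligibility rules) that are not modeled here.

    # Illustrative sketch of the caseload reduction credit described above.
    # Caseload figures are hypothetical; HHS's actual calculation includes
    # adjustments (e.g., for eligibility changes) not modeled here.

    REQUIRED_RATE = 50.0  # all-families work participation rate, in percent

    def caseload_reduction_credit(base_caseload, comparison_caseload):
        """Percentage decline in the average number of families receiving
        cash assistance between the base year and the comparison year."""
        decline = base_caseload - comparison_caseload
        return max(0.0, 100.0 * decline / base_caseload)

    def adjusted_required_rate(base_caseload, comparison_caseload):
        credit = caseload_reduction_credit(base_caseload, comparison_caseload)
        return max(0.0, REQUIRED_RATE - credit)

    # Pre-DRA (base year 1995): a state whose caseload fell from 100,000
    # families to 40,000 earns a 60 percent credit, driving its adjusted
    # requirement to 0 percent.
    print(adjusted_required_rate(100_000, 40_000))  # 0.0

    # Post-DRA (base year 2005): the same state's caseload fell only from
    # 42,000 to 40,000 since 2005, so the credit is about 4.8 percent and
    # the adjusted requirement is roughly 45.2 percent.
    print(adjusted_required_rate(42_000, 40_000))   # ~45.2

Under the 1995 base year, the dramatic post-welfare-reform caseload declines routinely drove adjusted requirements toward zero; under the 2005 base year, as the sketch shows, the same state faces nearly the full 50 percent requirement.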
In addition to the work requirement changes, DRA also added a provision allowing states to count a broader range of their own expenditures toward the TANF MOE requirement. Previously, states could claim as MOE their expenditures related to the four purposes of TANF that provided benefits or services only to financially needy families with children. However, DRA expanded states' ability to count as MOE other expenditures on TANF purposes 3 and 4—the prevention and reduction of out-of-wedlock pregnancies and the formation and maintenance of two-parent families. Specifically, the act allowed states to count their total expenditures toward these purposes, regardless of the composition and financial need of the families benefiting from these expenditures. HHS issued interim final regulations in response to DRA on June 29, 2006, which were generally applicable beginning in fiscal year 2007. These regulations addressed the changes related to work rules required by DRA, such as issuing federal definitions of the 12 work activities, and also required specific state actions. For example, in response to the DRA requirement that states establish procedures for counting, verifying, and reporting work participation, HHS required states to submit an interim Work Verification Plan to the agency by September 30, 2006, and have a final approved version in place by September 30, 2007. The interim regulations also addressed the DRA change to MOE spending on pro-family activities by clarifying that states could claim as MOE all spending reasonably calculated to address TANF purposes 3 and 4. HHS issued the final DRA-related regulations on February 5, 2008, which were effective beginning in fiscal year 2009. Although the final regulations made some modifications to the work rules included in the interim final regulations, HHS officials reported that these modifications were generally minor. For example, HHS clarified that some activities not directly addressed in the interim final regulations fit within specific work activity definitions. In contrast, the final regulations made a significant change to the interim regulations related to allowable MOE expenditures on pro-family activities. Specifically, under the final regulations, states can count toward MOE their total spending on specific pro-family activities listed in the healthy marriage promotion and responsible fatherhood section of DRA, rather than their total spending on all pro-family activities under TANF purposes 3 and 4. For the specified activities alone, a state can count all of its expenditures toward MOE regardless of the family composition and financial need of the people benefiting from these activities. In response to the economic recession that began in 2007, the Recovery Act made several additional changes to TANF, which generally did not affect the federal work rule changes required by DRA. Specifically, the Recovery Act created the $5 billion Emergency Contingency Fund for state TANF programs, which states can qualify for based on increases in the number of families receiving cash assistance or in TANF and state MOE expenditures for short-term, nonrecurrent benefits and subsidized employment. States can apply for funds each quarter through the end of fiscal year 2010, and they are eligible to have 80 percent of their expenditure increases reimbursed from the fund. In total, each state is eligible for a portion of the fund equal to up to half of its annual basic TANF block grant, as long as dollars remain.
Because these funds are afforded the same flexibilities as the TANF block grant, Emergency Contingency funds can be spent on any TANF-related purpose for TANF-eligible families. Before the creation of the Emergency Contingency Fund, PRWORA had originally created a TANF Contingency Fund of up to $2 billion that states could access in times of economic distress. States have to meet criteria to qualify for the TANF Contingency Fund that differ from those for the Emergency Contingency Fund, and only a portion of the TANF Contingency Fund had been drawn down by the states when the recent economic recession began in 2007. The Recovery Act also made two additional funding modifications to TANF, as well as a temporary modification to the caseload reduction credit. First, the Recovery Act extended TANF supplemental grants, which amounted to $319 million, to qualified states through fiscal year 2010. Beginning with PRWORA, annual supplemental grants had been awarded to states that had historically low welfare spending per person and high population growth, but these grants were due to expire at the end of fiscal year 2009. In addition, the Recovery Act increased states' flexibility by permitting them to spend prior year TANF block grant funds on all TANF-allowable benefits and services. Prior to this modification, states had been permitted to spend prior year TANF block grant funds only on assistance—a category that includes cash benefits and supportive services for families receiving these benefits. Finally, the Recovery Act also modified the caseload reduction credit calculation for fiscal years 2009-2011 by allowing states the option to use the lower of their total number of cash assistance recipients in fiscal year 2007 or fiscal year 2008 as the comparison caseload for calculating the credit. For example, if a state had 20,000 families receiving TANF cash assistance in fiscal year 2007, and 21,000 such families in fiscal year 2009, it could opt to use 20,000 for the purposes of calculating its fiscal year 2010 caseload reduction credit, resulting in a greater credit and a lower required work participation rate. Since DRA, national TANF work participation rates have changed little, although the rates reflect both recipients' work participation and state policies that affected the work participation rate calculation. Specifically, the factors that influenced the calculation of a state's work participation rate included the number of families receiving TANF cash assistance who participated in work activities, changes in the number of families receiving TANF cash assistance, state spending on TANF-related programs in excess of what is required, state policies that keep working families in the rate calculation, and state policies that keep nonworking families out of the rate calculation. In addition, in order to comply with DRA, states made other changes to their TANF programs, which may also have affected their work participation rates. Although HHS provided guidance to states after DRA, states reported differing opinions about the usefulness of this assistance, as well as continued challenges implementing certain aspects of DRA's changes to the TANF work requirements. Nationally, the proportion of families receiving TANF cash assistance who met their individual work requirements by participating in one of 12 work activities for a minimum number of hours each week changed little after DRA, as did the types of work activities in which they most frequently participated.
In fiscal year 2007 and fiscal year 2008—the two years following DRA for which national data are available—between 29 and 30 percent of families receiving TANF cash assistance met their work requirements. Similarly, between 31 and 34 percent of families receiving TANF cash assistance met their work requirements in each year from fiscal year 2001 to fiscal year 2006. In other words, approximately 295,000 of the 875,000 families receiving TANF cash assistance who had work requirements in fiscal year 2005 met those requirements, and 243,000 of 816,000 families met their work requirements in fiscal year 2008. The small decrease in the proportion of families that met their requirements after DRA may be related, in part, to the federal work activity definitions and tightened work hour reporting and verification procedures states had to comply with after the act, as well as states' ability to make the required changes. The types of work activities in which families receiving TANF cash assistance most frequently participated were also similar before and after DRA. For example, among families that met their work requirements, the majority participated in unsubsidized employment in the years both before and after DRA. In all of the years analyzed, the next most frequent work activities were job search and job readiness assistance, vocational educational training, and work experience. While the national proportion of TANF families who were sufficiently engaged in countable work activities did not significantly change after DRA, fewer states met the required work participation rates for all TANF families and for two-parent TANF families. This is in part because other factors, including states' policy and funding decisions, affected states' ability to meet the required rates after DRA. Specifically, after DRA, in fiscal years 2007 and 2008, 13 and 10 states, respectively, did not meet at least one of the required rates, compared with a maximum of 4 states that did not meet at least one of the rates in each year between fiscal years 2001 and 2006, according to HHS data (see table 1). States that do not meet the rates may receive a penalty reducing their annual block grants; however, HHS has not yet finalized state penalties for the two years following DRA. Fewer states met the federally required work participation rates after DRA in part because of a modification that DRA made to the caseload reduction credit. Specifically, DRA changed the calculation of this credit, which adjusts the required work participation rates, so that it now measures the change in the number of families receiving cash assistance in each state between the fiscal year 2005 base year and the comparison year. Before DRA, the credit's base year was fiscal year 1995, and states had larger caseload reduction credits because of the dramatic declines in the number of families receiving cash assistance after TANF implementation. For example, in fiscal year 2006, states' caseload reductions ranged from 11 to 91 percent, and 18 states had reductions that were at least 50 percent, which reduced their required work participation rates to 0 percent. However, in part because of the base year change, caseload reductions had less of an effect on states' ability to meet the required work participation rates after DRA. Specifically, after DRA in fiscal year 2007, 3 states could not claim a credit related to caseload reduction, and other states had much smaller caseload reductions than they had before DRA.
For example, 25 states had caseload reductions ranging from 1 to 5 percent, and the remaining 23 states had caseload reductions from 6 to 26 percent. As a result, only 8 states met the all-families work participation rate in fiscal year 2007 through the combination of their caseload reductions and the number of families who were sufficiently engaged in countable work activities, while 9 additional states met the rate solely because 50 percent or more of their families were sufficiently engaged in countable work activities. Although caseload reductions were significantly smaller after DRA, some states increased their caseload reduction credits and their ability to meet the federally required work participation rates by claiming excess MOE expenditures. Specifically, states are required to spend a certain amount of state MOE funds every year in order to receive their federal TANF block grants. However, if states spend in excess of the required amount, they are allowed to reduce the number of families included in the calculation of their work participation rates through the caseload reduction credit calculation (see fig. 3). HHS officials told us that, prior to DRA, Delaware alone had claimed these expenditures toward its caseload reduction credit. In contrast, in fiscal year 2007, 32 states claimed excess MOE expenditures toward their caseload reduction credits. Further, of the 39 states that met the all-families work participation rate in fiscal year 2007, 28 claimed excess MOE expenditures toward their caseload reduction credits, and 22 would not have met their rates without claiming these expenditures (see fig. 4). Among the 22 states that needed to rely on excess MOE expenditures to meet their work participation rates, most relied on excess MOE expenditures to add between 1 and 20 percent to their caseload reduction credits, but 4 states relied on excess MOE expenditures to add between 25 and 35 percent to their credits. (See fig. 5.) In fiscal year 2008, 30 of the 44 states that met the all-families work participation rate claimed excess MOE expenditures toward their caseload reduction credits, and 14 would not have met their rates without claiming these expenditures. Although the majority of states reported excess MOE expenditures after DRA, which helped some states to meet work participation rates, we did not determine whether these increases reflect new state spending or spending that had been occurring before DRA but was not reported as MOE. Specifically, we did not examine the totality of state expenditures on TANF-related programs and services in the years before and after DRA, which would have provided this information. However, we did examine states' TANF and MOE expenditures reported to HHS before and after DRA to further understand these increases. Total state MOE expenditures increased by almost $2 billion between fiscal years 2006 and 2008, from $12.0 billion to $13.7 billion. In addition, this increase appears to be related to state spending on programs and services referred to as pro-family by DRA—the prevention and reduction of out-of-wedlock pregnancies and the formation and maintenance of two-parent families (see table 2). Although federal regulations have allowed states to count spending on these types of programs and services as MOE since TANF was implemented, interim DRA regulations allowed states to count additional expenditures in this area as MOE for fiscal years 2007 and 2008, including those that were not directed at low-income families with children.
For example, according to the National Conference of State Legislatures, some states counted a broad range of spending under these categories, including afterschool and pre-kindergarten programs and juvenile justice services. Although final DRA regulations modified states' ability to report all of these expenditures as MOE beginning in fiscal year 2009, state MOE expenditures on pro-family activities did not significantly decrease in that year. Some states made other policy changes to their TANF programs after DRA that may have affected their work participation rates. For example, many states use several types of policies to ensure that families complying with their individual work requirements are included in the calculation of the state's work participation rate, such as worker supplement and earned income disregard policies. Because these families are meeting their TANF work requirements, including them in the rate calculation can improve the state's rate. For instance, worker supplement programs are used by some states to provide monthly cash assistance to low-income working families who were previously on TANF or about to lose TANF eligibility because their incomes were too high. When states fund these programs with TANF or MOE dollars to help meet families' ongoing basic needs, families receiving these benefits are included in the calculation of the state's work participation rate. Through our survey, 23 states reported that they provide worker supplement cash assistance programs, and 18 of these states have implemented these programs since fiscal year 2006. In the majority of states with these programs (15), the average cash assistance benefit provided to each family in the worker supplement program is less than the average TANF cash assistance benefit. Further, states with these programs allow families to receive these benefits for a maximum of 1 to 60 months, with a median of 7.5 months. Like worker supplement programs, earned income disregards encourage families receiving TANF cash assistance to work. However, instead of providing additional cash benefits to working families, these policies disregard part of a family's earned income when the state determines the amount of monthly TANF cash assistance the family receives. Forty-nine states reported through our survey that they have earned income disregards, and 10 of these states have made changes to these policies since fiscal year 2006. Specifically, 9 states increased the amount of income disregarded, and 1 began indexing the amount disregarded on an annual basis. No states reported that they had decreased or eliminated their earned income disregards since fiscal year 2006. In contrast, states also made policy changes to their TANF programs after DRA that removed certain families from the calculation of states' work participation rates. Specifically, some states opted to fund cash assistance for low-income families with state dollars not reported as MOE, known as solely state funds (SSF). While DRA required that the calculation of a state's work participation rates include families receiving cash assistance funded with MOE dollars—a group that had previously been excluded—states are able to still exclude certain families from their rate calculations by using SSFs to serve them. (See fig. 6.)
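The interplay between these funding choices and the rate calculation can be sketched in a few lines of Python. All figures below are hypothetical, and the pro rata excess-MOE adjustment is a simplification of the regulatory computation; the sketch is meant only to show the direction in which each lever moves a state's numbers.

    # Hypothetical sketch of the funding levers described above; the actual
    # regulatory calculation has more moving parts than are modeled here.

    def participation_rate(engaged_families, tanf_moe_caseload):
        """Post-DRA denominator: families receiving TANF- or MOE-funded cash
        assistance. Families served with solely state funds (SSFs) are simply
        absent from this caseload, so they never enter the calculation."""
        return 100.0 * engaged_families / tanf_moe_caseload

    def credit_with_excess_moe(base_caseload, comparison_caseload,
                               excess_moe, avg_cost_per_case):
        """Excess MOE spending lets a state treat some comparison-year cases
        as though they were absent, enlarging its caseload reduction credit
        (simplified here as a pro rata case-count adjustment)."""
        excludable_cases = excess_moe / avg_cost_per_case
        adjusted_caseload = comparison_caseload - excludable_cases
        return max(0.0, 100.0 * (base_caseload - adjusted_caseload) / base_caseload)

    # A state with 20,000 TANF/MOE families, 8,000 of them sufficiently
    # engaged, has a 40 percent rate -- short of the 50 percent requirement.
    rate = participation_rate(8_000, 20_000)

    # With no caseload decline since 2005, its credit would be 0. Claiming
    # $30 million in excess MOE at an average cost of $5,000 per case
    # excludes the equivalent of 6,000 cases, yielding a 30 percent credit
    # and an adjusted requirement of 20 percent, which the state now meets.
    credit = credit_with_excess_moe(20_000, 20_000, 30_000_000, 5_000)
    print(rate, credit, 50.0 - credit)  # 40.0 30.0 20.0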
According to several state TANF administrators who responded to our survey and officials we interviewed during our Oregon site visit, families for whom states use SSFs to provide cash assistance are those that typically have the most difficulty meeting the TANF work requirements. For instance, Oregon used SSFs to provide cash assistance to families applying for TANF that included a parent with disabilities. Oregon officials said that parents with disabilities are often unable to meet their TANF work requirements, and, with this program, the state instead provides case management and assistance with applying for Supplemental Security Income. Similarly, one state TANF administrator responding to our survey reported that the state uses SSFs to provide cash assistance to several types of low-income families, which is necessary both for the state to remain in compliance with TANF work participation rates and to maintain or try new policies that might otherwise negatively affect the state's rates. Further, another state TANF administrator responding to our survey reported that individual counties decide whether to use SSFs to provide cash assistance to families receiving such assistance in that state, and county staff take into account both families' needs and their ability to meet TANF work requirements when making that decision. In total, 29 states reported through our survey that they fund cash assistance for certain low-income families with SSFs, and almost all of these states first began using SSFs for this purpose after DRA. Almost all of those states (28) use SSFs to provide cash assistance to low-income, two-parent families, and almost half (14) use SSFs to provide cash assistance to low-income families with significant barriers to employment, such as families with a disabled member or recent immigrants and refugees. Some states also use SSFs to provide cash assistance to families enrolled in postsecondary education and other types of families, such as those who have received 60 months of TANF-funded cash assistance and those with children under age 1 or 2. (See fig. 7.) Overall, states reported using SSFs to serve a range of less than 1 percent to 50 percent of their total number of families receiving cash assistance. Because SSFs are not connected to the funds for states' TANF programs, states can develop their own work participation rules for families served with SSFs. In addition, if families served through SSFs do not meet the work requirements established by the state, they do not affect the state's TANF work participation rates. In all states that use SSFs to provide cash assistance to two-parent families, and in the majority of states that use SSFs to provide cash assistance to families enrolled in postsecondary education, work participation rules for families served through SSFs are generally the same as for families served through the state's TANF program. In contrast, in 9 of the 14 states that provide cash assistance with SSFs to recipients with significant barriers to work, work participation rules are generally not the same for these families as for families in the state's TANF program. Through other policy choices, states can similarly exclude certain families from their work participation rates. For example, some states have diversion programs that can reduce the number of families included in the calculation of their rates.
Because diversion programs provide eligible low-income families with short-term, nonrecurrent cash benefits and support services in lieu of TANF cash assistance, families participating in these programs are not included in states' work participation rates. Thirty-one states reported through our survey that they have a statewide diversion program, and 14 states had made at least one change to these programs since fiscal year 2006. Of these 14 states, 11 made at least one change that may have expanded the use of diversion in their states since DRA, including implementing a program, significantly increasing the number of families receiving support, increasing the types of support provided through the program, or increasing the maximum amount of the cash benefit. Conversely, 6 made at least one change that may have reduced the use of diversion in their states. These changes included eliminating the program, significantly decreasing the number of families receiving support, and decreasing the maximum amount of the cash benefit. Some states also made changes to their TANF sanction policies after DRA, which, like diversion programs, may reduce the number of families included in the calculation of states' work participation rates. Such policies reduce or remove a family's TANF cash assistance benefits when the family is not complying with its individual work requirements. At the time of our survey, 27 states reported that they remove a family's entire cash assistance benefit the first time that the family does not comply with work requirements, and 4 of those states had changed to a full family sanction policy from one that sanctioned fewer family members since fiscal year 2006. While a total of 13 states reported that they had made at least one change to their sanction policies since fiscal year 2006, roughly as many of these states reported moving toward stricter sanction policies as reported moving toward less strict ones. It is likely that many factors, including DRA and other state TANF program characteristics, influenced state changes to these policies after DRA. As a result of the various factors that affect the calculation of states' work participation rates, the work participation rate does not allow for clear comparisons of state TANF programs. In short, each state's ability to meet the required work participation rates reflects not only the number of its TANF families sufficiently engaged in countable work activities but also changes in the number of families receiving TANF cash assistance in the state and the state's policy choices that (1) lower their required work participation rates, (2) keep working families in the calculation of their rates, and (3) remove certain families from the calculation of these rates. In addition, these factors make it difficult to evaluate individual states' performance, or the influence of these individual factors, both before and after DRA. After caseload reduction credits (including adjustments related to excess MOE expenditures) were subtracted from the federally required work participation rate of 50 percent for all families receiving TANF cash assistance, some states had to have a much greater proportion of families sufficiently engaged in countable work activities in order to meet their rates after DRA than before, while other states had the opposite outcome.
Specifically, when comparing fiscal years 2006 and 2008, 28 states had higher adjusted work participation rate requirements after DRA than before, 15 had lower requirements, and 8 had 0 percent adjusted requirements in both years. For example, according to HHS data, Michigan needed to have 0 percent of its families receiving TANF cash assistance meeting their individual work requirements to meet its all-families work participation rate in fiscal year 2006, and 50 percent of its families meeting the work requirements to meet its rate in fiscal year 2008. This state was directly affected by DRA's change to the caseload reduction credit base year, as it had over a 50 percent decline in its TANF caseload before DRA but no decline since. In contrast, according to HHS data, Kansas needed to have 39 percent of its families receiving TANF cash assistance meeting their individual work requirements to meet its work participation rate in fiscal year 2006, and 0 percent of its families meeting work requirements to meet its rate in fiscal year 2008. While Kansas had a caseload reduction of 11 percent before DRA, after DRA, the state's caseload reduction credit was based on a 16 percent reduction in its TANF caseload after fiscal year 2005 and a significant amount of excess MOE expenditures. While some states were able to comply with DRA by making only minimal changes to their TANF programs' work policies and procedures, many had to make more extensive changes. Several aspects of state TANF programs' work-related policies and procedures were potentially affected by DRA because it required states to take certain steps to improve the reliability of work participation data and required HHS to issue definitions of the 12 work activities. The extent to which each state had to make changes to its TANF program's work rules and related procedures to comply with DRA was therefore directly related to the procedures the state had in place before DRA was passed, when all states had significant flexibility over their work definitions, policies, and procedures. Through our site visits and survey, many states reported making changes to their programs to comply with DRA and consequent HHS regulations, and they identified several of the changes as particularly challenging. Specifically, 41 states reported through our survey that they made moderate, great, or very great changes to their processes for reporting and verifying TANF families' reported hours of work participation to comply with DRA, and 40 reported that they made such changes to their internal controls over work participation data. (See fig. 8.) For example, officials in all three states we visited told us that, to comply with DRA, they needed to develop new processes to track and verify TANF families' hours of work participation. In addition, through our survey, one state reported that it created a monitoring process to track both internal staff and contractor activities to ensure the state accurately reported and verified work participation hours after DRA. Although still a majority, fewer states reported making moderate, great, or very great changes to their definitions of work activities after DRA. For example, two of the states we visited changed their definitions of the job search and job readiness work activity after DRA, as the definition in HHS regulations now requires these activities to be supervised.
In a local office within one of these states, officials discussed how they no longer offer this activity to TANF families because staff are unable to provide the required supervision. The extent to which states had to make changes to comply with DRA work requirements may have affected whether some states met their work participation rates in the years immediately following DRA. For example, during our site visits, officials in Ohio and Oregon both discussed having to make extensive changes to their work rules and procedures after DRA to comply with the federal requirements, while Florida officials generally reported having to make few policy changes to comply. In fiscal years 2007 and 2008, neither Ohio nor Oregon met its work participation rate for all families receiving TANF cash assistance, while Florida did meet the rate. As required by DRA, HHS issued regulations and guidance that defined work activities and internal control requirements to standardize work participation measurement, but states reported divergent opinions on the extent to which they found HHS assistance useful in implementing the DRA changes. For example, 15 states reported that such assistance was of great or very great use, 20 states reported that it was of moderate use, and 13 states reported that it was of some or no use. Through both our survey and site visits, state officials provided additional information on areas in which guidance was helpful. For example, a few states noted that they appreciated HHS's assistance after DRA with clarifying the procedures states needed to have in place to comply. During our three site visits, the effect of such assistance was evident, as state and local officials we met with all had a clear understanding of the work-related policies and procedures required by DRA. In contrast, other states expressed frustrations with several aspects of HHS assistance since DRA, including the time frames allowed for initially completing their Work Verification Plans, changes the agency made between the interim and final regulations that affected MOE expenditures and work participation reporting, and the timeliness of HHS assistance when questions arose. Although the states and localities we visited seem to understand the work-related policies and procedures required since DRA, through our survey, states reported continued challenges implementing these requirements. (See fig. 9.) However, some of these continued challenges are not surprising, as some states had significantly different work definitions, policies, and procedures in place, and lacked internal controls over work participation data, prior to DRA. For example, 38 states reported that they continued to experience a moderate, great, or very great degree of challenge implementing changes to computer systems or databases related to DRA. Some states reported that they continue to lack data systems that efficiently track and verify recipients' work hours. In all of our site visits, officials discussed related challenges. In Oregon, the state needed to make various changes to its TANF work activity definitions in order to comply with the definitions in HHS regulations, and these changes required significant data system programming. After programming was complete, officials reported that the state used considerable resources to train staff to correctly code TANF families' work participation, in order to ensure accurate application of these changes.
Similarly, in Florida, officials reported that they had to make significant changes to the workforce data system after DRA in order to capture additional information required by the state's Work Verification Plan approved by HHS. In Ohio, local staff discussed how the state's TANF data system is antiquated, slow, and unable to provide useful case management information at the local level. Further, the state is continually updating the system, but it often does not have all of the functions needed for local officials to effectively document information required by DRA within the system. In addition, 36 states reported they continue to experience a moderate, great, or very great degree of challenge verifying participants' actual work hours, and 32 states reported that they continue to experience a similar degree of challenge implementing daily supervision of work activities. For example, local officials in almost all of the offices we visited told us that verification of TANF families' work participation requires significant time and collaboration between TANF staff and employers and other staff at work activity sites. Because of this, some noted that they have had to designate or hire specific staff to manage the tracking and verification of families' work participation, and yet these activities also remain a routine part of all local TANF staff's responsibilities. Further, some discussed how verification of TANF families' hours spent in certain work activities is particularly difficult to obtain, such as community college classes for which professors and instructors need to verify attendance and substance abuse treatment for which multiple providers are frequently involved. In addition, one local office discussed how verifying work hours for job search is particularly difficult, such as confirming whether a recipient interviewed for a job. Although the process of verifying work participation was consistently noted as a challenge by those we visited, federal data suggest that a significant group of families receiving TANF cash assistance are not spending any time participating in work activities, which limits the number of families for which staff must fulfill this role. Concerning supervision, as previously mentioned, some local officials we met with discussed how the requirement to supervise job search activities is challenging because of the staff resources needed. Over half of the states also reported that they continue to experience a moderate, great, or very great degree of challenge with the classification of core and noncore work activities. In short, federal law limits the weekly hours that a TANF family can participate in 3 of the 12 work activities, which are commonly referred to as noncore activities. In the states we visited, local officials discussed how this distinction makes it more challenging to prepare TANF families for employment and help move them toward self-sufficiency. For example, a local official discussed how TANF adult recipients who lack a high school diploma or certificate of general equivalency face a significant barrier to work. However, the official noted that addressing this barrier is difficult, given the limit on the weekly amount of time recipients may spend in classes preparing them to obtain such a certificate and still count toward their work requirements.
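The core/noncore constraint that local officials described can be reduced to a small sketch. It assumes the common case of a 30-hour weekly requirement in which at least 20 hours must come from core activities; the thresholds differ for other family types, and those variations are not modeled here.

    # Minimal sketch of the core/noncore hour-counting rule, assuming the
    # common 30-hour requirement with a 20-hour core minimum.

    CORE_MINIMUM = 20        # weekly hours that must come from core activities
    WEEKLY_REQUIREMENT = 30  # total weekly hours required

    def counts_as_participating(core_hours, noncore_hours):
        """Noncore hours (e.g., GED preparation classes) count toward the
        requirement only once the core minimum has been met."""
        if core_hours < CORE_MINIMUM:
            return False
        return core_hours + noncore_hours >= WEEKLY_REQUIREMENT

    # 25 core hours plus 5 hours of GED classes counts as participating...
    print(counts_as_participating(25, 5))   # True

    # ...but 15 core hours plus 15 hours of classes does not, however
    # valuable the education may be to the family's longer-term employability.
    print(counts_as_participating(15, 15))  # False

This asymmetry is what the local official quoted above was describing: hours spent addressing an educational barrier cannot substitute for core hours, no matter how many of them a family logs.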
Similar to the limits on families' participation in noncore activities, federal law imposes time limits on families' participation in two of the core work activities—vocational educational training and job search and job readiness assistance—which states report are a challenge. Specifically, 38 states reported that they experience a moderate, great, or very great degree of challenge implementing the time limits placed on certain work activities. Through our survey and site visits, officials reported that the 12-month lifetime limit on vocational educational training and the 6-week general limit on job search and job readiness assistance (with no more than 4 weeks consecutively) are challenging to implement. Although the limits on the amount of time that a state can count these activities as work participation for each family have been in federal law since TANF was created, several state and local officials reported that the time limit on job search and job readiness assistance is particularly challenging now. Specifically, one local official we met with noted that TANF families who have been out of the workforce for an extended period of time often need more than 6 weeks of job search and job readiness assistance to remove their barriers to work. Further, another local official noted that the 12-month lifetime limit on vocational educational training can be problematic because any length of class taken during a month counts as a full month against the TANF family's eligibility for vocational educational training. For example, according to the official, if a TANF recipient took a 1-day class and no other vocational educational training activities in that month, the recipient would be counted as having 11 months of vocational educational training left for work participation purposes. Officials in two of the states we visited also discussed how, since DRA, local staff place certain families with significant barriers to work in other types of work activities that do not count toward the state's work participation rate. They indicated that participation in these activities is sometimes necessary to ensure that families successfully overcome their barriers, in part because of limits on related activities included in the federal work activity definitions. Subsequent to DRA, the economy weakened in 2007 and 2008, which affected the number of families receiving TANF cash assistance, as well as many state budgets. Specifically, the number of families receiving TANF cash assistance increased between December 2007 and September 2009, particularly those with two parents. In addition, state and local officials report that the economic recession has decreased TANF resources and challenged TANF service delivery. Since the beginning of the economic recession in December 2007, 37 states had increases in the number of families receiving TANF- and MOE-funded cash assistance benefits, and 13 states had decreases, as of September 2009. Nationwide, the total number of families receiving TANF cash assistance increased by 6 percent between December 2007 and September 2009. (See fig. 10.) Among states with changes in the number of families receiving TANF cash assistance, the degree of change varied, likely due to differences in states' TANF program characteristics, unemployment rates, and fiscal conditions.
For instance, while Kentucky reported a 1 percent increase in families receiving TANF cash assistance between December 2007 and September 2009, Utah reported a 35 percent increase, and Oregon reported a 48 percent increase in such families. In contrast, while four states reported a 1 percent decrease in families receiving TANF cash assistance during this time period, Texas reported a 16 percent decrease, and Vermont reported a 28 percent decrease. As previously discussed, the number of families receiving TANF cash assistance does not include all families receiving welfare cash assistance in every state, as some states provide such assistance through SSFs, and these families are not included in the federal data. States reported through our survey that approximately 82,000 families received cash assistance through SSFs in September 2009, in addition to the 1.8 million families that received TANF cash assistance. However, we did not collect data on changes in the numbers of families receiving cash assistance funded by SSFs, so we do not know the extent to which the total number of families receiving welfare cash assistance has changed during the economic recession. Although the total number of families receiving TANF cash assistance has increased slightly during the current economic recession, the number of two-parent families receiving these benefits has increased at a faster rate. For example, the number of two-parent families receiving TANF cash assistance nationwide increased by 57 percent between December 2007 and September 2009. In comparison, the number of one-parent and child-only families receiving TANF cash assistance nationwide increased by 8 percent and decreased by 1 percent, respectively, during the same time period. (See fig. 11.) All three of our site visit states also experienced their most significant increases among two-parent families receiving TANF cash assistance during the current economic recession. For example, Oregon officials reported that the number of two-parent families receiving TANF cash assistance had risen from 906 families in July 2007 to 2,703 families in September 2009, an increase of almost 200 percent. Similarly, the number of two-parent families receiving TANF cash assistance in Florida increased by approximately 200 percent between December 2007 and December 2009. Local officials in Florida also noted an increase in two-parent TANF families that previously consisted of a stay-at-home mother and a working father, where the father had been laid off or lost his business during the current economic recession. Local officials in all three states we visited also reported an increase in the number of TANF applicants who had never before applied for TANF cash assistance—many of whom have higher educational attainment and more job experience than families who applied before the current economic recession. Some of these officials noted that applicants with higher educational attainment and more job experience have been surprised to learn about the extent of the TANF program's work requirements. For example, officials in one locality reported that because these new TANF recipients are hoping to quickly find new employment, some have resisted the idea of participating in certain available work activities when they did not view those activities as a means to that end.
This situation may occur more frequently now, as states and localities cut programs and services, including those related to the 12 work activities, in response to budget constraints. Local officials in two of the three states we visited also reported that some new TANF applicants were former small business owners, who were applying for TANF cash assistance in part because they did not qualify for Unemployment Insurance. Officials in two of the three states we visited said that they expect to see an increase in applicants for TANF cash assistance after the Unemployment Insurance extensions end. Due to the economic recession, many states have faced large budget deficits in 2009 and 2010 that have required states to make difficult budget decisions about the use of state resources for TANF programs. According to the National Governors Association and the National Association of State Budget Officers' "Fiscal Survey of States," state revenues decreased in fiscal year 2009, with state revenue collections below expectations in 41 states in 2009, compared with 20 states in 2008. As a further indication of declining state fiscal conditions, the "Fiscal Survey of States" reported that in fiscal year 2009, state general fund expenditures declined for the first time since 1983. In our recent report, we found that when the number of families receiving TANF cash assistance rose during the current economic recession, some states decreased TANF spending on family and work supports, while others increased such spending. States that increased this spending did so in part because they were able to draw from other funding streams, but they expressed concern about their ability to continue this as resources dwindle. This is consistent with our previous work, in which we found that when TANF spending for families receiving cash assistance increased, there was an associated contraction in TANF spending for other forms of aid and services in the states we reviewed. Through their comments in our national survey and during our site visits, state officials discussed how TANF programs and budgets are being affected by state budget constraints related to the economic recession. For instance, Oregon's state budget constraints have decreased the amount of cash payments available to families participating in the state's Post-TANF welfare transition program. This program provides a small amount of monthly cash payments, as well as access to TANF program resources, to TANF clients whose earned income has recently made them ineligible for TANF cash assistance. While the program originally provided recipients with $150 per month in 2007, the payment was decreased to $100 in July 2009 and will be reduced to $50 in October 2010. In Florida, the state budget situation has reduced the TANF funds available to support workforce development services for TANF recipients at the same time that the number of such recipients has increased. In one locality that we visited, the budget for these services was approximately $452 per TANF recipient per month in 2007-2008, and it was expected to decrease to $157 per recipient per month in 2010-2011, if recipient growth continues at the current rate. Under federal law, states are permitted to retain unspent federal TANF block grant funds for use in future years, giving states the flexibility to draw upon these funds as needed. HHS data show that 33 states utilized unspent funds, as well as their annual TANF block grant allocations, to cover their TANF-related expenditures in fiscal year 2009.
In contrast, 15 states increased their total amounts of unspent TANF funds in fiscal year 2009. While, in every year, an average of 22 states utilize their unspent TANF funds to cover current year expenditures, the number of states utilizing these funds seems to increase during and after economic recessions. For example, in each of the 3 years following the 2001 recession, 25 to 32 states used unspent TANF funds. Economic recessions also seem to affect the national unspent TANF fund balance. For instance, between fiscal years 2001 and 2004, the national total of unspent TANF funds decreased by 41 percent. Between fiscal years 2007 and 2009, the national total of unspent TANF funds decreased by 16 percent, though the total increased by 4 percent between fiscal years 2008 and 2009. A total of $3.3 billion in unspent TANF dollars remained at the end of fiscal year 2009. In addition, while some states have had significant reductions in their unspent TANF funds during the current economic recession, others have had significant increases. For example, while Ohio's unspent TANF funds decreased by $541 million between fiscal years 2007 and 2009, New York's unspent funds increased by $395 million during the same time period. Through our national survey, state officials expressed concern about federal TANF resources, particularly the long-term viability of the TANF Contingency Fund and the decreasing value of TANF block grant dollars. Specifically, state officials indicated their concerns that the Contingency Fund would be depleted before state economic situations improve, which has since occurred. Although a total of 3 states accessed Contingency Fund dollars between fiscal years 1998 and 2005, 19 states accessed these dollars in one or more years between fiscal years 2008 and 2010. (See fig. 12.) By December 2009, the Contingency Fund was depleted without additional appropriations having been made to the fund. While the President has proposed additional money for the Contingency Fund in the fiscal year 2011 budget, as of March 18, 2010, it is unknown if the Congress will approve the additional funds. States also expressed concern through our national survey about the fixed amount of the TANF block grant. The annual TANF block grant appropriation has remained constant since it was created in 1996, which states report has been particularly challenging in times of state budget deficits and increasing numbers of families applying for and receiving TANF cash assistance. In Oregon, state officials noted that it would require an additional estimated $100 million to continue providing TANF services at current levels, assuming that the number of families applying for TANF cash assistance in the state continues to rise at the current rate. In addition to its effects on state budgets and funds for TANF programs, the economic recession has also caused changes to local TANF service delivery in some states. A majority of state TANF officials nationwide, as well as TANF officials from all eight localities we visited, reported that they made changes in local offices' TANF service delivery because of the economic recession. Specifically, of the 31 states reporting such changes through our survey, 22 had reduced the number of TANF staff, 11 had reduced work hours at offices, and 7 had reduced the number of offices. In contrast, 5 states reported that they had increased the number of TANF staff, 4 had increased work hours at offices, and 1 had increased the number of offices.
During our site visits, officials discussed how TANF staff had been reduced through employee attrition without replacement hires, or due to staff transfers from TANF to SNAP. For instance, in one local office in an urban area, 40 staff vacancies remained unfilled, which, combined with increased numbers of TANF applicants, meant that applications took longer to process and were often delayed. In Oregon, although both TANF and SNAP caseloads have increased during the current economic recession, because SNAP increases have been greater, some local TANF staff were temporarily moved to process SNAP applications. Officials in all three states we visited also reported that local TANF caseworkers are now managing an increased number of TANF cash assistance families per person. For instance, in one local office in Florida, officials explained that they hoped to restructure their TANF service delivery model soon, as the increasing number of TANF cash assistance recipients has made their one-on-one caseworker to recipient model difficult to sustain. Under this model, a TANF family is served by the same caseworker for all TANF-related support service needs and self-sufficiency planning. According to the local officials, the one-on-one model was possible when the caseload averaged 58 recipients per caseworker, but it was not designed for the current caseload average of 160 recipients per caseworker. In addition, local officials in one Ohio county reported that their caseworkers' overall workload has increased because increases in TANF and other public assistance applications have occurred at the same time that staff have left and not been replaced. At present, the county is serving 422 TANF families with a staff of 16 caseworkers. Ten of these caseworkers determine eligibility for TANF, SNAP, and Medicaid, and the remaining 6 are responsible for supporting TANF families' efforts to meet their work requirements and tracking families' participation in work activities. In light of the increased number of families receiving TANF cash assistance, state budget deficits, and staff reductions, peer collaboration may help localities address current TANF challenges. Local officials in two of the three states we visited cited their participation in the HHS Rural Communities and Urban Partnerships Initiatives as examples of effective peer collaboration. Through these initiatives, the officials participated in facilitated collaboration and idea-sharing sessions, online and in person, among TANF officials operating their programs in similar local areas nationwide, and also received technical assistance from HHS. These officials reported that the sessions were very useful, and one noted that additional sessions would be particularly useful now to exchange ideas and strategies for delivering TANF services in the current economic environment. For instance, one local official is currently working with a new FedEx branch in her district to coordinate subsidized employment positions for TANF clients, based on an idea gleaned from another Urban Partnership participant. As a result, local officials in all three of the states we visited expressed their concerns that, as state and local resources tighten and caseloads continue to rise, staff are less able to provide services to meet TANF cash assistance families' needs and move them toward self-sufficiency. According to local officials in Oregon, caseload increases and staff reductions sometimes result in prioritization of TANF services. For example, one district diverted caseworkers to process new applications, leaving fewer staff available to work directly with TANF recipients.
Before the recession, all families receiving TANF cash assistance worked with a caseworker to develop and implement a self-sufficiency plan. However, due to budget constraints, the district prioritized the TANF families that receive direct caseworker support, focusing on new TANF families, families who are actively participating in the program, and families in crisis situations. Local officials in all three states we visited also reported that caseworkers' abilities to provide families with the supports they need to move toward self-sufficiency have been further challenged by reductions to TANF support services, such as domestic violence programs and transportation assistance. Officials noted that such cuts to services have particularly challenged their abilities to serve clients with significant barriers to work. While officials in one Oregon locality noted that they have been able to maintain some of their support services through local partnerships, officials from another locality in that state have had to reduce mental health and substance abuse support services. These officials noted that this was a difficult cut to make, as reductions in these services can lead to challenging and potentially deadly outcomes in the current economic environment, as unemployed families may be more likely to leave mental health and substance abuse issues untreated. Additionally, some TANF officials stated that certain characteristics of the TANF work activity definitions and work participation verification requirements limit their flexibility to help TANF recipients reach self-sufficiency in the current economy. During our three site visits, local officials indicated that, in their experience, the current time limits on vocational educational training and job search and job readiness assistance are too short to prepare workers for new industries and careers, which may be necessary in the current economy. With national unemployment at 9.7 percent as of January 2010, officials commented through our site visits and survey that TANF recipients are encountering increased competition for all jobs, including low-wage, low-skill positions previously held by some TANF recipients. This increased job competition poses a particular challenge as states try to meet their work participation rates in the current economy, as unsubsidized employment has consistently been the most frequently reported work activity for TANF recipients. In addition, state and local officials reported that the work participation verification procedures required by DRA have been particularly challenging recently, due to the increased workloads of TANF staff. In response to the economic recession, the Recovery Act authorized additional federal funding for state TANF programs, which most states had applied for as of March 2010. States reported primarily using these funds to cover increased cash assistance costs and to maintain their TANF programs. However, states reported some challenges applying for Recovery Act TANF funds, as well as concern about their TANF programs after the funds run out. In response to the recent economic recession, the Recovery Act's $5 billion Emergency Contingency Fund for state TANF programs has provided additional federal funding to qualifying state TANF programs that have had increases in the number of families receiving cash assistance or in two specific types of expenditures.
As of March 12, 2010, 46 states, including the District of Columbia, had applied for the Recovery Act's Emergency Contingency Fund since it was created in February 2009. In addition, almost all states reported through our survey that they plan to apply for the fund in the future. As of March 18, 2010, HHS had awarded $1.8 billion of this fund to 42 of the states that applied, with almost half of this amount awarded to 36 states because of increases in families receiving cash assistance. States also have been applying for and receiving funds related to the two types of expenditure increases that qualify for the fund. Specifically, 40 percent of the total funds awarded to date were provided to 21 states because of their increases in short-term, nonrecurrent benefit expenditures, and 13 percent of all awarded funds were provided to 27 states because of their increases in subsidized employment expenditures. (See fig. 13.) Further, 11 states had received Recovery Act TANF funds related to expenditure increases in all three areas. Almost half of the Recovery Act TANF funds already awarded have been expended by states. States report that they have used Recovery Act TANF funds primarily to maintain their programs and cover the costs of increased numbers of cash assistance recipients, in part because many states' budgets have been stretched during the recent economic recession. For example, of the states that applied for these funds, 24 reported through our survey that they are using the funds to cover increased cash assistance costs, and 18 reported using them to fill TANF budget gaps caused by the recent economic recession, such as those for noncash services. Seventeen of these states reported using them for both purposes. In addition, other states reported that they were considering using the funds for these purposes at the time of our survey. (See table 3.) During each of our three site visits, state officials discussed how Recovery Act TANF funds were allowing them to pay for increased cash assistance costs and maintain their TANF programs. For example, in Florida, these funds allowed the state to avoid certain TANF program budget cuts to services other than cash assistance that had been under consideration before the Recovery Act was enacted. The state had been considering such cuts because of the need to direct more of its TANF funds to pay for the increasing number of families receiving cash assistance benefits—a number that increased by 28 percent between December 2007 and December 2009. Similarly, Oregon officials discussed how these funds had allowed their state to avoid additional TANF program cuts that had been under consideration. These proposed cuts were to several supports aimed at helping TANF families move toward self-sufficiency, including a $10 million decrease in the state's workforce development services for TANF recipients and elimination of the state's case management program for TANF families at risk of entering the child welfare system. Some states have also used Recovery Act TANF funds to expand existing or create new programs or services for low-income families, including short-term, nonrecurrent benefits and subsidized employment positions. Specifically, 10 states reported through our survey that they are using these funds to expand existing programs, and 10 states also reported using the funds to create new programs. Additional states reported that they were considering using the funds for these purposes at the time of our survey. (See table 3.)
Two of the three states we visited were considering expanding or creating new programs or services for low-income families at the time of our visits. Although Recovery Act TANF funds can be used for any TANF-eligible program or service, these two states were focusing on one of the areas specifically targeted by Recovery Act TANF funds—subsidized employment. For example, Florida officials were in the process of working with the state's regional workforce boards to create new subsidized employment opportunities for low-income families across the state. We visited one such work site in Marion County, at which low-income parents were processing SNAP applications at a call center. This center was established in direct response to the economic recession, both in its location and its type of employment. Specifically, Marion County has one of Florida's highest unemployment rates, and the center was created shortly after the closure of a mortgage-processing firm that employed call agents in the area. Further, the center provided needed assistance with processing new applications for SNAP, a program that has seen a 183 percent increase in the number of households receiving these benefits in Florida during the recent economic recession. In addition to Recovery Act TANF funds, local officials we met with during our three site visits reported that Recovery Act funds directed to certain other federal programs have also benefited families applying for and receiving TANF cash assistance. Specifically, the Recovery Act allocated almost $300 million to states to help cover administrative costs associated with the increased numbers of SNAP applicants and recipients. In localities where determinations of a family's eligibility for SNAP and TANF are handled by the same caseworkers, as they are in Florida, these funds have helped localities manage the increased numbers of applicants and recipients for both programs through the employment of temporary staff, overtime pay, and other staffing options. The Recovery Act also allocated $1.2 billion for Workforce Investment Act of 1998 (WIA) youth activities, including summer employment. These funds are directed toward providing work experience opportunities to low-income youth age 24 and under, and they can also be used by localities for activities such as tutoring and study skills training, occupational skills training, and support services. In two of the states we visited, local officials discussed how the Recovery Act WIA funds used for summer employment had benefited some of their TANF recipients by providing opportunities for these recipients to gain work experience and fulfill their TANF work requirements. In addition to creating the Emergency Contingency Fund, the Recovery Act also extended TANF supplemental grants to states through fiscal year 2010 and increased states' flexibility to spend prior year TANF block grant funds. However, state officials we surveyed and interviewed did not mention modifying their programs in response to these changes. Further, although the Recovery Act also modified the caseload reduction credit calculation for fiscal years 2009-2011, because those credits have yet to be determined by HHS, the effect of that change is currently unknown. Although HHS has provided ongoing guidance since April 2009 to help states access and utilize Recovery Act TANF funds, some states reported being challenged by a lack of guidance in certain areas.
HHS issued initial implementation guidance shortly following the creation of the Emergency Contingency Fund and then continued to issue multiple program instructions and other types of guidance, such as a new data collection form, throughout 2009 and into 2010. Further, HHS officials provided related technical assistance directly to states through conference presentations, teleconferences, and one-on-one phone calls. While most states reported that HHS assistance with applying for and utilizing Recovery Act TANF funds had been useful, some expressed frustration with the amount of time it had taken to receive guidance and responses to questions. For example, in early 2009, HHS had provided states with limited guidance on allowable short-term, nonrecurrent benefit and subsidized employment expenditures. A senior HHS official explained that the department had not anticipated the range of questions states would have about qualifying subsidized employment and short-term, nonrecurrent benefit expenditures, and therefore it took several months to work with the department's lawyers to ensure an accurate and consistent response was provided to all states. When completing our survey, two states mentioned that examples of allowable expenditures would be helpful, and Florida officials we met with during our site visit discussed how the lack of early guidance on subsidized employment was a challenge. Specifically, Florida state officials participated in HHS-initiated conference calls about subsidized employment expenditures and submitted questions directly to HHS, but the department took longer than anticipated to respond. As a result, Florida moved forward with its Marion County subsidized employment project in October 2009, though the state was not assured that the project qualified for Recovery Act TANF funds until December. However, in November and December 2009, HHS issued examples of allowable short-term, nonrecurrent benefit expenditures and additional guidance on allowable subsidized employment expenditures, and during our site visit, Florida officials indicated that the new subsidized employment guidance had been particularly helpful. States have also been challenged by certain requirements related to accessing the Emergency Contingency Fund. For example, a few states reported through our survey that the requirements for qualifying for the fund should be more flexible. For instance, some states may be challenged by the requirement that they can qualify for the fund only after increases in the number of families receiving cash assistance or in expenditures on short-term, nonrecurrent benefits or subsidized employment. While over two-thirds of states have experienced increases in the number of families receiving TANF cash assistance during the economic recession, some of these increases have been relatively small, due to various factors, and other states have experienced no increase. In addition, states had limited experience with short-term, nonrecurrent benefits and subsidized employment prior to 2009, which helps explain why they had questions for HHS about allowable expenditures in these areas. Specifically, 1 to 2 percent of all TANF expenditures were directed to short-term, nonrecurrent benefits, and less than 1 percent to work subsidies, in the fiscal years we analyzed between 2001 and 2008.
Further, less than 1 percent of all work-eligible TANF cash assistance recipients participated in subsidized employment in fiscal years 2007 and 2008—the two most recent years for which data are available. Some states also report that the Emergency Contingency Fund’s reimbursement level is a challenge in the current economic environment. Specifically, states are reimbursed for 80 percent of allowable expenditure increases from the fund. Two of the states we visited, and a few states through our survey, reported that this reimbursement level is challenging because of current state budget constraints caused by the economic recession. For example, officials from two of the states we visited reported that, while they would like to access the Recovery Act TANF funds to provide subsidized employment opportunities and believe those would benefit low-income families in their states, their current state budgets are so tight that the funds for 20 percent of these expenditure increases are unavailable. As previously discussed, while some states continue to have unspent federal TANF funds that they could potentially use to fund 20 percent of the expenditure increases in these areas, other states have had significant decreases in their unspent fund balances. At the time of our visits, officials in these two states were pursuing outside funding sources, such as local governments and private entities, to help fund subsidized employment positions. According to HHS officials, the department has been working with states to improve their understanding of the various potential sources of funding they can use to qualify for Recovery Act TANF funds. Finally, states also reported concerns about the expiration date for the Emergency Contingency Fund, which is currently September 30, 2010. For example, some officials expressed concerns about the start-up time associated with creating new short-term, nonrecurrent benefit programs and subsidized employment positions and questioned whether there would be time left to draw down Recovery Act TANF funds for those supports once they were created. In addition, states that have been relying on these funds to maintain their TANF programs likely have concerns about the effect on their TANF programs when the Recovery Act TANF funds are no longer available. As previously noted, all three of the states we visited, as well as many states nationwide, have used these funds to avoid cuts and related policy changes to their programs. For example, according to state officials, when the Oregon state legislature passed its current biennial budget in the summer of 2009, it assumed that the state would be able to access most of the Recovery Act TANF funds available to the state, to avoid making further cuts to the state’s TANF program. Because these funds are set to expire, however, they are a temporary solution, and states will likely still face these budget deficits in future years. During our site visits and through our survey, several TANF officials expressed their hopes that the federal government will modify the expiration date for the Emergency Contingency Fund and allow states to access any remaining funds through fiscal year 2011. Related to this, the President’s fiscal year 2011 budget request recommends extending the fund’s expiration date to September 30, 2011, and the House of Representatives approved a bill in March 2010 that included this extension. 
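To make the matching arithmetic above concrete, here is a minimal sketch of the 80 percent reimbursement calculation; the $5 million expenditure increase is a hypothetical figure chosen for illustration, not data from any state we reviewed.

```python
# Minimal sketch of the Emergency Contingency Fund matching arithmetic
# described above. The expenditure increase below is hypothetical.

REIMBURSEMENT_RATE = 0.80  # the fund reimburses 80 percent of allowable increases

expenditure_increase = 5_000_000  # hypothetical qualifying increase, in dollars

federal_share = REIMBURSEMENT_RATE * expenditure_increase  # $4,000,000
state_share = expenditure_increase - federal_share         # $1,000,000

print(f"Federal reimbursement: ${federal_share:,.0f}")
print(f"State must fund:       ${state_share:,.0f}")
```

Under the 100 percent reimbursement for subsidized employment proposed in the President's fiscal year 2011 budget request, discussed next, the state share in this example would fall to zero.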
The budget request also addressed several other state concerns by proposing to add $2.5 billion to the fund, count new types of expenditure increases toward qualifying for the fund, and allow states to be reimbursed for 100 percent of their subsidized employment expenditure increases. States have taken advantage of the various policy and funding options available to adjust their TANF work participation rates since DRA. As a result, while measuring work participation of TANF recipients is key to understanding the success of state programs in meeting one of the federal purposes of TANF, whether states met federal work participation rates after DRA provides only a partial picture of state TANF programs' effort and success in engaging recipients in work activities. Although the DRA changes to TANF work requirements were expected to strengthen the work participation rate as a performance measure and move more families toward self-sufficiency, states' use of the modifications currently allowed in federal law and regulations, as well as states' policy choices, have diminished the rate's usefulness as the national performance measure for TANF. In addition, state and local officials have found the work participation rate measure particularly challenging during the recent economic recession, as opportunities for employment have become less available, and more families seek assistance from TANF. As many state and local officials face resource constraints during the economic recession, some are making choices to fund basic cash assistance instead of services that may help address families' movement toward work and long-term self-sufficiency. Given the block grant structure of TANF, its design has not supported significant program expansion during the recent recession; however, Recovery Act TANF funds appear to be helping many states maintain their programs and avoid further funding cuts. Nonetheless, the original TANF Contingency Fund was recently depleted, and states will likely face even more difficult decisions about the future of their TANF programs after the Recovery Act TANF funds expire or run out. It remains to be seen what decisions states will make and how those will affect their programs, as well as how federally defined goals for TANF will be affected, if at all, by the next reauthorization of the TANF block grant. We provided a draft of this report to HHS for review and comment, and a copy of the agency's written response is in appendix IV. In its comments, HHS did not disagree with our findings and said that the department appreciated our analysis of developments in state TANF programs following DRA and the Recovery Act, as well as the economic context in which states are now operating their TANF programs. However, HHS also suggested that it is incomplete to say that states' work participation rates after DRA reflect both recipients' work participation and states' policy choices, without acknowledging that federal law changed in a number of ways after DRA. We agree, and we believe that the report appropriately acknowledges the DRA changes to TANF, the extent to which states reported having to make changes to their programs to comply with DRA, and the extent to which states reported continuing to be challenged by the DRA changes. Further, the report also directly acknowledges that the extent to which states had to make changes to comply with DRA may have affected whether some met their work participation rates in the years immediately following DRA.
HHS also indicated that more inquiry is needed to discern whether states believe that the DRA requirements enhanced their ability to run more effective programs. While we did not directly ask states this question through our state survey, we agree that this would be interesting to know. Finally, HHS indicated that it has undertaken a major technical assistance effort to help states understand how to access and use the Recovery Act TANF funds. In our interactions with the department during this study, we saw the extent of those efforts, and we agree. As such, while the relevant section of our report is focused more on state TANF programs' responses to the Recovery Act, it also acknowledges related HHS assistance to states and notes that most states reported finding this assistance useful. However, our findings in this section also address areas in which states continue to be challenged in utilizing the Recovery Act TANF funds, which may help HHS target its assistance efforts moving forward. HHS also provided technical comments on the draft report, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees and to the Secretary of Health and Human Services. The report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To obtain information about changes to state Temporary Assistance for Needy Families (TANF) programs after the Deficit Reduction Act of 2005 (DRA), the economic recession, and the American Recovery and Reinvestment Act of 2009 (Recovery Act), we reviewed available TANF data from the U.S. Department of Health and Human Services (HHS), including the number of families receiving TANF cash assistance, work participation rates, federal and state expenditures, and states' applications for the Emergency Contingency Fund for state TANF programs; conducted a nationwide survey of state TANF administrators; visited three states and selected localities within each state and interviewed officials administering TANF; interviewed officials from HHS and reviewed pertinent federal laws, regulations, and agency guidance; and interviewed researchers knowledgeable about TANF from a range of organizations. We conducted our work from August 2009 to May 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Because HHS is responsible for collecting state TANF data and reporting on state TANF programs nationally, we reviewed relevant TANF data compiled by that agency.
Specifically, we reviewed published data on (1) the number and types of families receiving TANF cash assistance between fiscal years 1997 and 2009, (2) work participation in fiscal years 2001 and 2005-2008, (3) states that did not meet the work participation rates between fiscal years 2001 and 2008, (4) states' TANF block grant and maintenance-of-effort (MOE) expenditures in fiscal years 2001 and 2005-2009, and (5) states' unspent TANF funds in fiscal years 1997-2009. Because the scope of our work extended to the 50 states and Washington, D.C., we excluded data for the U.S. territories from our analysis. The years of work participation and expenditure data analyzed were selected for two reasons. First, we chose to analyze work participation and expenditure data from fiscal year 2001 because it falls roughly midway between initial TANF implementation and the enactment of DRA. In addition, we chose to analyze the years immediately preceding and following DRA implementation. In all cases, we analyzed the most recent data available, including preliminary fiscal year 2009 expenditures data HHS provided to us before its public release. While we interviewed HHS officials to gather information on the processes they use to ensure the completeness and accuracy of the TANF caseload, work participation, and expenditures data, we did not independently verify these data with states. In addition, although HHS does not perform on-site reviews of states' TANF data, auditors in each state periodically review state TANF programs, including administrative data, to comply with the Single Audit Act of 1984. Because of these reviews, as well as the steps taken by HHS officials to ensure the completeness and accuracy of these data, we determined that they were sufficiently reliable for the purposes of this report. We also reviewed selected documents submitted by states to HHS, which the agency does not publish. These included states' (1) caseload reduction credit reports for fiscal years 2007 and 2008 that had been approved by HHS and (2) applications for the Emergency Contingency Fund for state TANF programs through March 12, 2010. Specifically, we reviewed caseload reduction credit reports to analyze states' application of excess MOE expenditures toward their credits after DRA. To better understand recent changes in state TANF programs, we conducted a Web-based survey of state TANF administrators in all 50 states and the District of Columbia. The survey was conducted between November 2009 and January 2010, with administrators from every state and the District of Columbia responding. The survey included questions about changes made to TANF programs and policies since DRA; challenges related to complying with DRA; cash assistance programs funded solely with state funds; use of the Emergency Contingency Fund for state TANF programs; changes to TANF service delivery related to the economic recession; and HHS assistance to states after DRA and the Recovery Act. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pretesting draft instruments and using a Web-based administration system.
Specifically, during survey development, we pretested draft instruments with TANF administrators from four states (Connecticut, Maryland, Minnesota, and Ohio) in October 2009. We selected the pretest states to provide variation in selected state TANF program characteristics and geographic location. In the pretests, we were generally interested in the clarity, precision, and objectivity of the questions, as well as the flow and layout of the survey. For example, we wanted to ensure definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate. We revised the final survey based on pretest results. Another step we took to minimize nonsampling errors was using a Web-based survey. Allowing respondents to enter their responses directly into an electronic instrument created a record for each respondent in a data file and eliminated the need for and the errors associated with a manual data entry process. To further minimize errors, programs used to analyze the survey data and make estimations were independently verified to ensure the accuracy of this work. While we did not fully validate specific information that states reported through our survey, we took several steps to ensure that the information was sufficiently reliable for the purposes of this report. For example, we reviewed the responses and identified those that required further clarification and, subsequently, conducted follow-up interviews with those respondents to ensure the information they provided was reasonable and reliable. In our review of the data, we also identified and logically fixed skip pattern errors for questions that respondents should have skipped but did not. In addition, we compared our findings on recent policy changes with information contained in the Urban Institute’s Welfare Rules Database and found that our results were consistent. On the basis of these checks, we believe our survey data are sufficiently reliable for the purposes of our work. To gather additional information about changes to state TANF programs after DRA, the economic recession, and the Recovery Act, we conducted site visits to Florida, Ohio, and Oregon, and selected localities in those states, between September 2009 and December 2009. Specifically, we met with state officials in each state and visited Hillsborough, Marion, and Leon counties in Florida; Franklin and Pike counties in Ohio; and Districts 2, 3, and 4 in Oregon. These three Oregon districts are responsible for TANF administration in the Portland metropolitan area, as well as Benton, Lincoln, Linn, Marion, Polk, and Yamhill counties. We selected these three states because they made varied modifications to their TANF programs after DRA and the Recovery Act, and the number of families receiving TANF cash assistance in these states had increased since the economic recession began. In addition, these states were selected because they varied in geographic location and selected TANF program characteristics, including the maximum amount of TANF cash assistance provided to each recipient family. We worked with the states to select localities that were located in both urban and rural areas to ensure that we captured any related differences in TANF program implementation and work participation. During the site visits, we interviewed state and local administering agency officials. 
Through these interviews, we collected information on changes to TANF programs, policies, and procedures since DRA and the economic recession, strategies employed to comply with DRA, use of funds received from the Emergency Contingency Fund for state TANF programs, challenges related to implementing the TANF program since DRA, and HHS assistance since DRA and the Recovery Act. We cannot generalize our findings beyond the states and localities we visited. As discussed in this report, each state's ability to meet the required work participation rates reflects not only the number of its TANF families sufficiently engaged in countable work activities, but also changes in the number of families receiving TANF cash assistance in the state, and the state's policy choices that (1) lower its required work participation rates, (2) keep working families in the calculation of its rates, and (3) remove certain families from the calculation of these rates. See tables 4 and 5 for information on factors that may have affected each state's ability to meet the all-families work participation rate in fiscal year 2007. Heather McCallum Hahn, Assistant Director, and Rachel Frisk, Analyst-in-Charge, managed this assignment and made significant contributions to all aspects of this report. Karen Febey, Maria Gaona, Jean McSween, and Betty Ward-Zukerman also made important contributions to this report. Susan Aschoff, James Bennett, and Jessica Orr provided writing and graphics assistance, and Alex Galuten provided legal assistance.
Temporary Assistance for Needy Families: Implications of Changes in Participation Rates. GAO-10-495T. Washington, D.C.: March 11, 2010.
Temporary Assistance for Needy Families: Fewer Eligible Families Have Received Cash Assistance Since the 1990s, and the Recession's Impact on Caseloads Varies by State. GAO-10-164. Washington, D.C.: February 23, 2010.
Support for Low-Income Individuals and Families: A Review of Recent GAO Work. GAO-10-342R. Washington, D.C.: February 22, 2010.
Healthy Marriage and Responsible Fatherhood Initiative: Further Progress Is Needed in Developing a Risk-Based Monitoring Approach to Help HHS Improve Program Oversight. GAO-08-1002. Washington, D.C.: September 26, 2008.
Welfare Reform: Better Information Needed to Understand Trends in States' Uses of the TANF Block Grant. GAO-06-414. Washington, D.C.: March 3, 2006.
Welfare Reform: More Information Needed to Assess Promising Strategies to Increase Parents' Incomes. GAO-06-108. Washington, D.C.: December 2, 2005.
Welfare Reform: HHS Should Exercise Oversight to Help Ensure TANF Work Participation Is Measured Consistently across States. GAO-05-821. Washington, D.C.: August 19, 2005.
TANF and SSI: Opportunities Exist to Help People with Impairments Become More Self-Sufficient. GAO-04-878. Washington, D.C.: September 15, 2004.
Welfare Reform: Information on TANF Balances. GAO-03-1094. Washington, D.C.: September 8, 2003.
Welfare Reform: Information on Changing Labor Market and State Fiscal Conditions. GAO-03-977. Washington, D.C.: July 15, 2003.
Welfare Reform: Outcomes for TANF Recipients with Impairments. GAO-02-884. Washington, D.C.: July 8, 2002.
Welfare Reform: With TANF Flexibility, States Vary in How They Implement Work Requirements and Time Limits. GAO-02-770. Washington, D.C.: July 5, 2002.
The Deficit Reduction Act of 2005 (DRA) reauthorized the Temporary Assistance for Needy Families (TANF) block grant and made modifications expected to strengthen work requirements for families receiving cash assistance through state TANF programs. Both the U.S. Department of Health and Human Services (HHS) and states were required to take steps to implement these changes. Work participation rates, or the proportion of families receiving TANF cash assistance that participated in work activities, are the key performance measure HHS uses to assess state TANF programs. In response to the economic recession that began in 2007, the American Recovery and Reinvestment Act of 2009 (Recovery Act) provided additional TANF funding to eligible states and made additional modifications to TANF. GAO examined (1) How did DRA affect state TANF programs, including work participation rates? (2) How has the recent economic recession affected state TANF programs? (3) How did the Recovery Act affect state TANF programs? To address these questions, GAO analyzed federal TANF data, as well as relevant federal laws, regulations, and guidance; interviewed HHS officials; surveyed all state TANF administrators; and conducted site visits to meet with state and local officials in Florida, Ohio, and Oregon. GAO is not making recommendations in this report. Nationally, TANF work participation rates changed little after DRA was enacted, though states' rates reflect both recipients' work participation and states' policy choices. Although federal law generally requires that a minimum of 50 percent of families receiving TANF cash assistance in each state participate in work activities, both before and after DRA, about one-third of TANF families nationwide met their work requirements. However, after DRA, many states were able to meet federally required work participation rates because of additional factors. For example, 29 states used state dollars unconnected to the TANF program to fund cash assistance for certain families that may be less likely to meet the work requirements, as this removed these families from the rate calculation. Further, DRA required other changes to state TANF programs, and states reported challenges with some of DRA's changes to the TANF work rules, such as verifying participants' actual work hours. From the beginning of the economic recession, in December 2007, to September 2009, the number of families receiving TANF cash assistance, particularly two-parent families, increased in the majority of states but went down in others. At the same time, many states have faced budget deficits and difficult decisions about the use of state resources for TANF programs. Thirty-one states reported that budget constraints led to changes in local TANF service delivery, such as reductions in available services and the number of staff. Forty-six states have applied for the Recovery Act's Emergency Contingency Fund for state TANF programs since it was made available in 2009. More states reported using these funds to maintain their TANF programs than to expand or create programs and services. Some states reported challenges accessing the funds. For example, some expressed frustration with the amount of time it has taken to receive guidance and responses to questions from HHS, particularly related to qualifying subsidized employment and short-term, nonrecurrent benefit expenditures. State officials also expressed concern about the September 30, 2010, expiration date for the Recovery Act TANF funds.
The Immigration Reform and Control Act of 1986 created the VWP as a pilot program, and the Visa Waiver Permanent Program Act permanently established the program in October 2000. The program's purpose is to facilitate the legitimate travel of visitors for business or tourism. By providing visa-free travel to the United States, the program is intended to boost international business and tourism, as well as airline revenues, and create substantial economic benefits to the United States. Moreover, the program allows State to allocate more resources to visa-issuing posts in countries with higher risk applicant pools. In November 2002, Congress passed the Homeland Security Act of 2002, which established DHS and gave it responsibility for establishing visa policy, including policy for the VWP. Previously, Justice had overall responsibility for managing the program. In July 2004, DHS created the Visa Waiver Program Oversight Unit within the Office of International Enforcement and directed that unit to oversee VWP activities and monitor participating VWP countries' adherence to the program's statutory and policy requirements. In September 2007, the office was renamed the Visa Waiver Program Office. To help fulfill its responsibilities, DHS established an interagency working group comprising representatives from State, Justice, and several DHS component agencies and offices, including U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement. Since the attacks on the United States on September 11, 2001, Congress has passed several other laws to strengthen border security policies and procedures. For example, the Enhanced Border Security and Visa Entry Reform Act of 2002 increased the frequency—from once every 5 years to at least once every 2 years—of mandated assessments of the effect of each country's continued participation in the VWP on U.S. security, law enforcement, and immigration interests. The 9/11 Act also added security requirements for all VWP countries, such as the requirement that countries enter into an agreement with the United States to share information on whether citizens and nationals of that country traveling to the United States represent a threat to the security or welfare of the United States or U.S. citizens. When the Visa Waiver Pilot Program was established in 1986, participation was limited to eight countries. Since then, the VWP has expanded to 36 countries. Figure 1 shows the locations of the current member countries. To qualify for the VWP, a country must offer reciprocal visa-free travel privileges to U.S. citizens; have had a refusal rate of less than 3 percent for the previous fiscal year for its nationals who apply for business and tourism visas; issue machine-readable passports to its citizens; enter into an agreement with the United States to report or make available through Interpol or other means as designated by the Secretary of Homeland Security information about the theft or loss of passports; accept the repatriation of any citizen, former citizen, or national against whom a final order of removal is issued no later than 3 weeks after the order is issued; enter into an agreement with the United States to share information regarding whether citizens and nationals of that country traveling to the United States represent a threat to U.S. security or welfare; and be determined not to compromise the law enforcement (including immigration enforcement) or security interests of the United States by its inclusion in the program.
In addition, all passports issued after October 26, 2005, must contain a digital photograph in the document for travel to the United States under the program, and passports issued after October 26, 2006, must be e-passports that are tamper-resistant and incorporate a biometric identifier. Nationals from countries that have joined the VWP since 2008 must use e-passports in order to travel under the VWP. Effective July 1, 2009, all emergency or temporary passports must be e-passports as well for use under the VWP. To be eligible to travel without a visa under the program, nationals of VWP countries must have received an authorization to travel under the VWP through ESTA; have a valid passport issued by the participating country and be a national of that country; seek entry for 90 days or less as a temporary visitor for business or pleasure; have been determined by CBP at the U.S. port of entry to represent no threat to the welfare, health, safety, or security of the United States; have complied with conditions of any previous admission under the program (for example, individuals must not have overstayed the 90-day limit during prior visits under the VWP); if entering by air or sea, possess a return trip ticket to any foreign destination issued by a carrier that has signed an agreement with the U.S. government to participate in the program, and must have arrived in the United States aboard such a carrier; and if entering by land, have proof of financial solvency and a domicile abroad to which they intend to return. Travelers who do not meet these requirements are required to obtain a visa from a U.S. embassy or consulate overseas before traveling to the United States. Unlike visa holders, VWP travelers generally may not apply for a change in status or an extension of the allowed period of stay. Individuals who have been refused admission to the United States previously must also apply for a visa. VWP travelers waive their right to review or appeal a CBP officer's decision regarding their admissibility at the port of entry or to contest any action for removal, other than on the basis of an application for asylum. DHS has implemented ESTA to meet the 9/11 Act requirement intended to enhance program security and has taken steps to minimize the burden the new requirement adds for travelers to the United States, but it has not fully analyzed the risks of carrier and passenger noncompliance with the requirement. DHS developed ESTA to collect passenger data and complete security checks on the data before passengers board a U.S.-bound carrier. In developing and implementing ESTA, DHS took several steps to minimize the burden associated with ESTA use. For example, ESTA reduced the frequency with which passengers must provide biographical information to DHS officials from every trip to once every 2 years. In addition, because of ESTA, DHS can inform passengers who do not qualify for VWP travel that they need to apply for a visa before they travel to the United States. Moreover, most travel industry officials we interviewed in six VWP countries praised DHS's widespread ESTA outreach efforts, reasonable implementation time frames, and responsiveness to feedback but expressed dissatisfaction over ESTA fees.
Also, although carriers complied with the ESTA requirement to verify ESTA approval for almost 98 percent of VWP passengers before boarding them in 2010, DHS does not have a target completion date for a review to identify potential security risks associated with the small percentage of cases of traveler and carrier noncompliance with the ESTA requirement. Pursuant to the 9/11 Act, DHS implemented ESTA, an automated, Web-based system, to assist in assessing passengers' eligibility to travel to the United States under the VWP by air or sea before they board a U.S.-bound carrier. DHS announced ESTA as a new requirement for travelers entering the United States under the VWP on June 9, 2008, and began accepting ESTA applications on a voluntary basis in August 2008. Beginning January 12, 2009, DHS required all VWP travelers to apply for ESTA approval prior to travel to the United States. DHS began enforcing compliance with ESTA requirements in March 2010, exercising the right to fine a carrier or rescind its VWP signatory status for failure to comply with the ESTA requirement. Although passengers may apply for ESTA approval anytime before they board a plane or ship bound for the United States, DHS recommends that travelers apply when they begin preparing travel plans. Prior to ESTA's implementation, all travelers from VWP countries manually completed a form—the I-94W—en route to the United States, supplying biographical information and answering questions to determine eligibility for the VWP. DHS officials collected the forms from VWP passengers at U.S. ports of entry and used the information on the forms to qualify or disqualify the passengers for entry into the United States without a visa. DHS uses ESTA to electronically collect VWP applicants' biographical information and responses to eligibility questions. The ESTA application requires the same information collected through the I-94W forms. When an applicant submits an ESTA application, DHS systems evaluate the applicant's biographical information and responses to VWP eligibility questions. (See table 1.) If the DHS evaluation results in a denial of the application, the applicant is directed to apply for a U.S. visa. For all other applications, if this review process locates no information requiring further analysis, DHS notifies the applicant that the application is approved; if the process locates such information, DHS notifies the applicant that the application is pending, and DHS performs a manual check on the information. For example, if an applicant reports that a previous U.S. visa application was denied, DHS deems the ESTA application pending and performs additional review. If on further review of any pending application DHS determines that information disqualifies the applicant from VWP travel, the application is denied, and the individual is directed to apply for a visa; otherwise the applicant is approved. Figure 2 illustrates the ESTA application review process. (See app. II for information on how to apply for ESTA.) According to DHS data, the number of individuals submitting ESTA applications increased from about 180,000 per month in 2008, when applying was voluntary, to more than 1.15 million per month in 2009 and 2010 after DHS made ESTA mandatory. DHS approved over 99 percent of the almost 28.6 million ESTA applications submitted from August 2008 through December 2010, but it also denied the applications of thousands of individuals it deemed ineligible to travel to the United States under the VWP.
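To summarize the review logic described above (and depicted in fig. 2), the following minimal sketch models the approve, pending, and deny paths in Python. The field names and the single manual-review flag are hypothetical stand-ins; DHS's actual screening systems and checks are not public.

```python
# Illustrative model of the ESTA triage flow described above. Field names
# are hypothetical; this is a sketch of the process, not DHS's implementation.

def adjudicate_esta(app: dict) -> str:
    # A disqualifying answer to the eligibility questions (the items
    # formerly collected on the I-94W) results in denial.
    if app.get("disqualifying_answer", False):
        return "denied: apply for a U.S. visa"

    # Information requiring further analysis -- for example, a self-reported
    # prior U.S. visa denial -- places the application in pending status.
    needs_manual_review = app.get("prior_visa_denial", False) or app.get(
        "other_flagged_information", False)
    if not needs_manual_review:
        return "approved"

    # Manual review of a pending application either clears the flag
    # or disqualifies the applicant from VWP travel.
    if app.get("manual_review_disqualifies", False):
        return "denied: apply for a U.S. visa"
    return "approved"

# Example: an applicant who reported a prior visa denial that manual
# review ultimately cleared.
print(adjudicate_esta({"prior_visa_denial": True,
                       "manual_review_disqualifies": False}))  # approved
```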
The denial rate has decreased slightly from 0.42 percent in 2008 to 0.24 percent in 2010. (See fig. 3.) DHS data show that DHS denied 77,132 of the almost 28.6 million applications for VWP travel submitted through ESTA from 2008 through 2010. Reasons for denials included applicants' responses to the eligibility questions, as well as DHS's discovery of other information that disqualified applicants from travel under the VWP. Examples are as follows: DHS denied 19,871 applications because of applicant responses to the eligibility questions. DHS denied 36,744 pending applications because of the results of manual reviews of passenger data. DHS denied 15,078 applications because the applicants had unresolved cases of a lost or stolen passport that DHS decided warranted an in-person visa interview with a State consular officer. In addition, ESTA applications are regularly reevaluated as new information becomes available to DHS, potentially changing applicants' ESTA status. In developing and implementing ESTA, DHS has taken steps to minimize the burden associated with ESTA's use. Less frequent applications. ESTA approval for program participants generally remains valid for 2 years. Prior to ESTA implementation, passengers traveling under the program were required to complete the I-94W form to determine their program eligibility each time they boarded a carrier to the United States. When DHS implemented ESTA, the burden on passengers increased because DHS also required ESTA applicants to complete an I-94W form. However, on June 29, 2010, DHS eliminated the I-94W requirement for most air and sea travelers who had been approved by ESTA. According to travel industry officials in the six VWP countries we visited, this change has simplified travel for many travelers, especially business travelers who travel several times each year. DHS officials said the change also eliminated the problems of deciphering sometimes illegible handwriting on the I-94W forms. Earlier notice of ineligibility. ESTA notifies passengers of program ineligibility, and therefore of the need to apply for a visa, before they embark for the United States. Prior to ESTA implementation, passengers from VWP countries did not learn until reaching the U.S. port of entry whether they were eligible to enter under the VWP or would be required to obtain a visa. Because DHS received passengers' completed I-94W forms at the port of entry, DHS officials did not recommend that carriers prevent passengers from VWP countries from boarding a U.S.-bound carrier without a visa unless they were deemed ineligible based on other limited preboarding information provided by carriers. Widespread U.S. government outreach. VWP country government and travel industry officials praised widespread U.S. government efforts to provide information about the ESTA requirements. After announcing ESTA, DHS began an outreach campaign in VWP countries and for foreign government embassy staff in the United States, with the assistance of other U.S. agencies, to publicize the requirement. DHS officials said they spent $4.5 million on ESTA outreach efforts. Although none of the six embassies we visited tracked the costs associated with outreach, each embassy provided documentation of its use of many types of outreach efforts listed in table 2.
VWP country government officials and travel industry officials we met said that although they were initially concerned that ESTA implementation would be difficult and negatively affect airlines and many VWP passengers, implementation went more smoothly than expected. Reasonable implementation time frames. Most of the VWP country airline officials with whom we met said that the ESTA implementation time frames set by DHS were reasonable. In 2008, DHS introduced ESTA and made compliance voluntary. The following year, DHS made ESTA mandatory but did not levy fines if airlines did not verify passengers' ESTA approval before boarding them. This allowed the U.S. government more time to publicize the requirement, according to DHS officials. Enforcement began in March 2010. According to most of the officials we interviewed from 17 airlines in the six VWP countries we visited, the phased-in compliance generally allowed passengers sufficient time to learn about the ESTA requirement and allowed most airlines sufficient time to update their systems to meet the requirement. ESTA officials said that the phased-in compliance also provided time to fix problems with the system before enforcing airline and passenger compliance. DHS responsiveness to travel industry feedback. VWP travel industry officials said that DHS officials' efforts to adapt ESTA in response to feedback have clarified the application process. Since initial implementation of ESTA in 2008, DHS has issued updates to the system on 21 occasions. According to DHS officials, many of these changes addressed parts of the application that were unclear to applicants. For example, DHS learned from some travel industry officials that many applicants did not know how to answer a question on the application about whether they had committed a crime of moral turpitude because they did not know the definition of "moral turpitude." In September 2010, DHS released an updated ESTA application that included a definition of the term directly under the question. Further, updates have made the ESTA application available in 22 languages instead of only English. DHS also made it possible for denied applicants to reapply and be approved if they mistakenly answered "yes" to certain eligibility questions. Although travel industry officials we met with in six VWP countries said there are still ways ESTA should be improved, they said that DHS's responsiveness in amending the ESTA application had made the system more user friendly. Shorter reported passenger processing times. According to a study commissioned by DHS and conducted at three U.S. ports of entry, ESTA has reduced the average time DHS takes to process a VWP passenger before deciding whether to admit the passenger into the United States by between 17.8 and 54 percent. The study attributed this time savings to factors such as the reduction in the number of documents DHS officers needed to handle and evaluate and the reduction in data entry needed at the port of entry. Although DHS took steps to minimize the burden imposed by ESTA implementation, almost all government and travel industry officials we met in six VWP countries expressed dissatisfaction over the Travel Promotion Act of 2009 (TPA) fee collected as part of the ESTA application. In September 2010, the U.S. government began to charge ESTA applicants a $14 fee when they applied for ESTA approval, including $10 for the creation of a corporation to promote travel to the United States and $4 to fund ESTA operations.
According to many of the VWP country government and travel industry officials with whom we met, the TPA fee is unfair because it burdens those traveling to the United States with an added fee to encourage others to travel to the United States. Some of the officials pointed out that it was unrelated to VWP travel and that it runs counter to the program objective of simplifying travel for VWP participants. DHS officials said that many government and travel industry officials from VWP countries view the fee as a step away from visa-free travel and consider ESTA with the fee “visa-lite.” By comparison, a nonimmigrant visitor visa costs over $100 but is generally valid for five times as long as ESTA approval. Several foreign officials said they expected that the fee amount would continue to rise over time. DHS officials stated that they cannot control the TPA portion of the ESTA fee because it was mandated by law. In addition, some airline officials expressed concern that the ESTA requirement was one of many requirements imposed by DHS that required the carriers to bear the cost of system updates. DHS officials said that the ESTA requirement did impose a new cost to carriers, but that it was necessary to strengthen the security of the VWP. According to DHS, air and sea carriers are required to verify that each passenger they board has ESTA approval before boarding them. Carriers’ compliance with the requirement has increased since DHS made ESTA mandatory and has exceeded 99 percent in recent months. DHS data show the following: 2008. In 2008, when VWP passenger and carrier compliance was voluntary, airlines and sea carriers verified ESTA approval for about 5.4 percent of passengers boarded under the VWP. According to DHS officials, carriers needed time to update their systems to receive passengers’ ESTA status, and DHS needed time to publicize the new travel requirement. 2009. ESTA became mandatory in January 2009, and carriers verified ESTA approval for about 88 percent of passengers boarded under the VWP that year. 2010. In March 2010, DHS began enforcing carrier compliance. In that year, carriers verified ESTA approval for almost 98 percent of VWP passengers. As of January 2011, DHS had imposed fines on VWP carriers for 5 of the passengers who had been allowed to board without ESTA approval. Figure 4 shows the percentage of VWP passengers boarded by carriers who had verified the passengers’ ESTA approval. In addition, from September 2010 through January 2011, carrier compliance each month exceeded 99 percent. Although carriers verified ESTA approval for almost 98 percent of VWP passengers before boarding them for VWP travel in 2010, DHS has not fully analyzed the potential risks posed by cases where carriers boarded passengers for VWP travel without verifying that they had ESTA approval. In 2010, about 2 percent—364,086 VWP passengers—were boarded without verified ESTA approval. For most of these passengers—363,438, or about 99.8 percent—no ESTA application had been recorded. The remainder without ESTA approval—648, or about 0.2 percent—were passengers whose ESTA applications had been denied. DHS officials told us that, although there is no official agency plan for monitoring and oversight of ESTA, the ESTA office is undertaking a review of each case of a carrier’s boarding a VWP traveler without an approved ESTA application; however, DHS has not established a target date for completing this review. 
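As a quick consistency check on the 2010 figures above, the short script below reproduces the reported percentages and infers an approximate total for VWP boardings; the implied total is our rough extrapolation from the "about 2 percent" figure, not a number DHS reported.

```python
# Consistency check of the 2010 ESTA noncompliance figures cited above.

no_esta_total = 364_086   # passengers boarded without verified ESTA approval
no_application = 363_438  # of those, no ESTA application on record
denied = 648              # of those, ESTA application had been denied

assert no_application + denied == no_esta_total

print(f"No application: {no_application / no_esta_total:.1%}")  # ~99.8%
print(f"Denied:         {denied / no_esta_total:.1%}")          # ~0.2%

# "About 2 percent" of all VWP boardings implies, very roughly:
implied_total = no_esta_total / 0.02  # ~18.2 million VWP boardings in 2010
print(f"Implied 2010 VWP boardings: ~{implied_total / 1e6:.1f} million")
```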
In its review of these cases, DHS officials said they expect to determine why the carrier boarded the passengers, whether and why DHS admitted these individuals into the United States, and whether the airline or sea carrier should be fined for noncompliance. DHS tracks some data on passengers who travel under the VWP without verified ESTA approval but does not track other data that would help officials know the extent to which noncompliance poses a risk to the program. For example, although DHS officials said that about 180 VWP travelers who arrive at a U.S. port of entry without ESTA approval are admitted to the United States each day, they have not tracked how many, if any, of those passengers had been denied ESTA approval. DHS also reported that 6,486 VWP passengers were refused entry into the United States at the port of entry in 2010, but that number includes VWP passengers for whom carriers had verified ESTA approval. Officials did not track how many of those refused entry had been boarded without verified ESTA approval. DHS also did not know how many passengers without verified ESTA approval were boarded with DHS approval after system outages precluded timely verification of ESTA approval. Without a completed analysis of noncompliance with ESTA requirements, DHS is unable to determine the level of risk that noncompliance poses to VWP security and to identify improvements needed to minimize noncompliance. In addition, without analysis of data on travelers who were admitted to the United States without a visa after being denied by ESTA, DHS cannot determine the extent to which ESTA is accurately identifying individuals who should be denied travel under the program. Although DHS and partners at State and Justice have made progress in negotiating the information-sharing agreements with VWP countries required by the 9/11 Act, only half of the countries have entered into all required agreements. In addition, many of the agreements entered into have not been implemented. The 9/11 Act does not establish an explicit deadline for compliance, but DHS, with support from State and Justice, has produced a completion schedule that requires agreements to be entered into by the end of each country's current or next biennial review cycle, the last of which will be completed by June 2012. In coordination with State and Justice, DHS also outlined measures short of termination that may be applied to VWP countries not meeting their compliance date. The 9/11 Act specifies that each VWP country must enter into agreements with the United States to share information regarding whether citizens and nationals of that country traveling to the United States represent a threat to the security or welfare of the United States and to report lost or stolen passports. DHS, in consultation with other agencies, has determined that VWP countries can satisfy the requirement by entering into the following three bilateral agreements: Homeland Security Presidential Directive 6 (HSPD-6), Preventing and Combating Serious Crime (PCSC), and Lost and Stolen Passports (LASP). According to DHS officials, countries joining the VWP after the 9/11 Act entered into force are required to enter into HSPD-6 and PCSC agreements with the United States as a condition of admission into the program. In addition, prior to joining the VWP, such countries are required to enter into agreements containing specific arrangements for information sharing on lost and stolen passports.
As illustrated in table 3 below, DHS, State, and Justice have made some progress with VWP countries in entering into the agreements. All VWP countries share some information with the United States, but the existence of a formal agreement improves information sharing, according to DHS officials. As opposed to informal case-by-case information sharing, formal agreements expand the pool of information to which the United States has systematic access. They can draw attention to and provide information on individuals of whom the United States would not otherwise be aware. According to officials, formal agreements generally expedite the sharing of information by laying out specific terms that can be easily referred to when requesting data. DHS officials observed that timely access to information is especially important for CBP officials at ports of entry. HSPD-6 agreements establish a procedure between the United States and partner countries to share watchlist information about known or suspected terrorists. As of January 2011, 19 of the 36 VWP countries had signed HSPD-6 agreements, and 13 had begun sharing information according to the signed agreements. (See table 3.) Justice's Terrorist Screening Center (TSC) and State have the primary responsibility to negotiate and conclude these information-sharing agreements. An interagency working group, co-led by TSC and State and including representatives from U.S. law enforcement, intelligence, and policy communities, addresses issues with the exchange of information and coordinates efforts to enhance information exchange. While the agreements are based on a template that officials use as a starting point for negotiations, according to TSC officials, the terms of each HSPD-6 agreement are unique, prescribing levels of information sharing that reflect the laws, political will, and domestic policies of each partner country. TSC officials said most HSPD-6 agreements are legally nonbinding. Officials said that this allows more flexibility in information-sharing procedures and simplifies negotiations with officials from partner countries. The TSC officials noted that the nonbinding nature of the agreements may allow some VWP countries to avoid bureaucratic and political hurdles. State and TSC continue to negotiate HSPD-6 agreements with VWP countries; officials cited concerns regarding privacy and data protection expressed by many VWP countries as reasons for the delayed progress. According to these officials, in some cases, domestic laws of VWP countries limit their ability to commit to sharing some information, thereby complicating and slowing the negotiation process. The terms of HSPD-6 agreements are also extremely sensitive, TSC officials noted, and therefore many HSPD-6 agreements are classified. Officials expressed concern that disclosure of the agreements themselves might either (1) cause countries that had already signed agreements to become less cooperative in sharing data on known or suspected terrorists and reduce the exchange of information or (2) cause countries in negotiation to become less willing to sign agreements or to insist on terms prescribing less information sharing. The value and quality of information received through HSPD-6 agreements vary, and some partnerships are more useful than others, according to TSC officials. The officials stated that some partner countries were more willing than others to share data on known or suspected terrorists.
For example, according to TSC officials, some countries do not share data on individuals suspected of terrorist activity but only on those already convicted. In other cases, TSC officials stated that some partner countries did not have the technical capacity to provide all information typically obtained through HSPD-6 agreements. For example, terrorist watchlist data include at least the name and date of birth of the suspect and may also include biometric information such as fingerprints or photographs. According to DHS officials, some member countries do not have the legal or technical ability to store such information. TSC has evidence that information is being shared as a result of HSPD-6 agreements. TSC officials provided the number of encounters with known or suspected terrorists generated through sharing watchlist information with foreign governments. TSC officials noted that they viewed these data as one measure of the relevance of the program, but not as comprehensive performance indicators. Although TSC records the number of encounters, HSPD-6 agreements do not contain terms requiring partner countries to reveal the results of these encounters, and there is no case management system to track and close them out, according to TSC officials. The PCSC agreements establish the framework for law enforcement cooperation by providing each party automated access to the other's criminal databases that contain biographical, biometric, and criminal history data. (See table 3.) As of January 2011, 18 of the 36 VWP countries had met the PCSC information-sharing agreement requirement, but the networking modifications and system upgrades required to enable this information sharing to take place have not been completed for any VWP country. The language of the PCSC agreements varies slightly because, according to agency officials, partner countries have different legal definitions of what constitutes a serious crime or felony, as well as varying demands regarding data protection provisions. Achieving greater progress in negotiating PCSC agreements has been difficult, according to DHS officials, because the agreements require lengthy and intensive face-to-face discussions with foreign governments. Justice and DHS, with assistance from State, negotiate the agreements with officials from partner countries that can include representatives from their law enforcement and justice ministries, as well as their diplomatic corps. Further, sharing sensitive personal information with the United States is publicly unpopular in many VWP countries, even where the countries' law enforcement agencies have no reluctance to share information. Officials in some VWP countries told us that efforts to overcome political barriers have caused further delays. Though officials expect to complete the networking modifications necessary to allow queries of Spain's and Germany's criminal databases in 2011, the process is legally and technically complex and has not yet been completed for any of the VWP countries. According to officials, DHS is frequently not in a position to influence the speed of PCSC implementation for a number of reasons. For example, according to DHS officials, some VWP countries require parliamentary ratification before implementation can begin. Also, U.S. and partner country officials must develop a common information technology architecture to allow queries between databases. In a 2006 GAO report, we found that not all VWP countries were consistently reporting data on lost and stolen passports.
We recommended that DHS develop clear standard operating procedures for such reporting, including a definition of timely reporting. As of January 2011, all VWP countries were sharing lost and stolen passport information with the United States, and 34 of the 36 VWP countries had entered into LASP agreements. (See table 3.) The 9/11 Act requires VWP countries to enter into an agreement with the United States to report, or make available to the United States through Interpol or other means designated by the Secretary of Homeland Security, information about the theft or loss of passports. According to DHS officials, other international mandates have helped the United States to obtain LASP information. Since 2005, all European Union countries have been required to send data on lost and stolen passports to Interpol for its Stolen and Lost Travel Documents database. In addition, Australia and New Zealand have agreements to share lost and stolen passport information through the Regional Movement Alert System. According to officials, in fiscal year 2004, more than 700 fraudulent passports from VWP countries were intercepted at U.S. ports of entry; however, by fiscal year 2010, this number had decreased to 64. DHS officials attributed the decrease in the use of fraudulent passports in part to better LASP reporting to Interpol. More complete data have allowed DHS to identify, before they begin travel, more individuals attempting VWP travel with a passport that has been reported lost or stolen. Although the 9/11 Act does not establish an explicit deadline, DHS, with the support of partners at State and Justice, has produced a compliance schedule that requires agreements to be entered into by the end of each country's current or next biennial review cycle, the last of which will be completed by June 2012. In March 2010, State sent a cable to posts in all VWP countries instructing the appropriate posts to communicate the particular compliance date to the government of each noncompliant VWP country. However, DHS officials expressed concern that some VWP countries may not enter into all required agreements by the specified compliance dates. According to DHS officials, termination from the VWP is one potential consequence for VWP countries that do not enter into information-sharing agreements. However, U.S. officials described termination as undesirable, saying that it would significantly harm diplomatic relations and would weaken any informal exchange of information. Further, termination would require all citizens from the country to obtain visas before traveling to the United States. According to officials, particularly in the larger VWP countries, this step would overwhelm consular offices and discourage travel to the United States, thereby damaging trade and tourism. U.S. embassy officials in France told us that when the United States required only a small portion of the French traveling population—those without machine-readable passports—to obtain visas, embassy staff logged many overtime hours while long lines of applicants extended into the embassy courtyard. DHS helped write a classified strategy document that outlines a contingency plan listing possible measures short of termination from the VWP that may be taken if a VWP country does not meet its specified compliance date for entering into information-sharing agreements. The strategy document provides the steps that would need to be taken prior to selecting and implementing one of these measures.
According to officials, DHS plans to decide which measures to apply on a case-by-case basis. DHS conducts reviews to determine whether issues of security, law enforcement, or immigration affect VWP country participation in the program; however, the agency has not completed half of the mandated biennial reports resulting from these reviews in a timely manner. In 2002, Congress mandated that, at least once every 2 years, DHS evaluate the effect of each country's continued participation in the program on the security, law enforcement, and immigration interests of the United States. The mandate also directed DHS to determine, based on the evaluation, whether each VWP country's designation should continue or be terminated and to submit a written report on that determination to select congressional committees. To fulfill this requirement, DHS conducts reviews of VWP countries that examine and document, among other things, counterterrorism and law enforcement capabilities, border control and immigration programs and policies, and security procedures. To document its findings, DHS prepares a report on each VWP country reviewed and a brief summary of the report to submit to congressional committees. In conjunction with DHS's reviews, the Director of National Intelligence (DNI) produces intelligence assessments that DHS reviews prior to finalizing its VWP country biennial reports. According to VWP officials, they visited 12 program countries in fiscal year 2009 and 10 countries in fiscal year 2010 to gather the data needed to complete these reports. As of February 2011, the Visa Waiver Program Office had completed 3 country visits and anticipated conducting 10 more in fiscal year 2011. If issues of concern are identified during the VWP country review process, DHS drafts an engagement strategy documenting the issues of concern and suggesting recommendations for addressing them. According to VWP officials, they also regularly monitor VWP country efforts to stay informed about any emerging issues that may affect the countries' VWP status. In 2006, we found that DHS had not completed the required biennial reviews in a timely fashion, and we recommended that DHS establish protocols, including deadlines, for biennial report completion. DHS established protocols in 2007 that include timely completion of biennial reports as a goal. Our current review shows that DHS has not completed the latest biennial reports for 18 of the 36 VWP countries (50 percent) in a timely manner. Also, over half of those reports are more than 1 year overdue. In the case of two countries, DHS was unable to demonstrate that it had completed reports in over 4 years. Further, according to the evidence supplied by DHS, of the 17 reports completed since the beginning of 2009, over 25 percent were transmitted to Congress 3 or more months after report completion, and 2 of those after more than 6 months. DHS cited a number of reasons for the reporting delays, including a lack of resources needed to complete timely reports. In addition, DHS officials said that they sometimes intentionally delayed report completion for two reasons: (1) because they frequently did not receive DNI intelligence assessments in a timely manner and needed to review these before completing VWP country biennial reports or (2) in order to incorporate anticipated developments in the status of information-sharing agreement negotiations with a VWP country.
Further, DHS officials cited lengthy internal review as the primary reason for delays in submitting the formal summary reports to Congress. Without timely reports, it is not clear to Congress whether vulnerabilities exist that jeopardize countries' continued participation in the VWP. The VWP facilitates travel for nationals from qualifying countries, removing the requirement that they apply in person at a U.S. embassy for a nonimmigrant visa for business or pleasure travel of 90 days or less. In an attempt to facilitate visa-free travel without sacrificing travel security, Congress has mandated security measures such as ESTA, information-sharing requirements, and VWP country biennial reviews. While ESTA has added a fee and a new pretravel requirement that place additional burdens on the VWP traveler, it has reduced the burden on VWP travelers in several other ways. DHS does not fully know the extent to which ESTA has mitigated VWP risks, however, because its review of cases of passengers being permitted to travel without verified ESTA approval is not yet complete. Although the percentage of VWP travelers without verified ESTA approval is very small, DHS oversight of noncompliant travelers may reduce the risk that an individual who poses a security risk to the United States could board a plane or ship traveling to the United States. Even though DHS has authority to deny individuals entry to the United States in such cases, ESTA was designed to screen such individuals before they embark on travel to the United States. Moreover, with only half of the countries participating in the VWP in full compliance with the requirement to enter into information-sharing agreements with the United States, DHS may not have sufficient information to deny participation in the VWP to individuals who pose a security risk to the United States. In addition, the congressional mandate requiring VWP country biennial reports provides important information to Congress not only on security measures in place in VWP countries but also on potential vulnerabilities that could affect the countries' future participation in the program. Because DHS has not consistently submitted the reports in a timely manner since the legal requirement was imposed in 2002, Congress does not have the assurance that DHS efforts to require program countries to minimize vulnerabilities and its recommendations for continued status in the VWP are based on up-to-date assessments. To ensure that DHS can identify and mitigate potential security risks associated with the VWP, we recommend that the Secretary of Homeland Security take the following two actions: (1) establish time frames for the regular review and documentation of cases of VWP passengers traveling to a U.S. port of entry without verified ESTA approval, and (2) take steps to address delays in the biennial country review process so that the mandated country reports can be completed on time. DHS provided written comments on a draft of this report. These comments are reprinted in appendix III. DHS, State, and Justice provided technical comments that we have incorporated into this report, as appropriate. In commenting on the draft, DHS stated that it concurred with GAO's recommendations and expects to be able to implement them. DHS provided additional information on its efforts to ensure that VWP countries remain compliant with program requirements and to monitor and assess issues that may pose a risk to U.S. interests. DHS also provided information on actions it is taking to resolve the issues identified in the audit.
For example, DHS stated that, by the end of May 2011, it will have established procedures to perform quarterly reviews of a representative sample of VWP passengers who do not comply with the ESTA requirement. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Secretary of State, the Attorney General, and other interested parties. The report also will be available on the GAO Web site at no charge at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-4268 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix IV. To assess the implementation of the Electronic System for Travel Authorization (ESTA), we reviewed relevant documentation, including 2006 and 2008 GAO reports evaluating the Visa Waiver Program (VWP) and statistics on program applicants and travelers. Between June and September 2010, we interviewed consular, public diplomacy, and law enforcement officials at U.S. embassies in six VWP countries: France, Ireland, Japan, South Korea, Spain, and the United Kingdom. We also interviewed political and commercial officers at embassies in five of these countries. While the results of our site visits are not generalizable, they provided perspectives on VWP and ESTA implementation. We met with travel industry officials, including airline representatives, and foreign government officials in the six countries we visited to discuss ESTA implementation. We selected the countries we visited so that we could interview officials from VWP countries in diverse geographic regions that varied in terms of information-sharing signature status, number of travelers to the United States, and the existence in-country of potential program security risks. We met with officials from the Department of Homeland Security (DHS) in Washington, D.C. We used data provided by DHS from the ESTA database to assess the usage of the program and airline compliance with the ESTA requirements, and we determined that the data were sufficiently reliable for our purposes. To evaluate the status of information sharing, we analyzed data regarding which countries had signed the agreements and interviewed DHS, Department of State (State), and Department of Justice (Justice) officials in Washington, D.C., and International Criminal Police Organization (Interpol) officials in Lyon, France. We reviewed the Implementing Recommendations of the 9/11 Commission Act of 2007, which contained the information-sharing requirement. We received and reviewed copies of many Preventing and Combating Serious Crime and Lost and Stolen Passports agreements. While conducting our fieldwork, we confirmed the status of the agreements in each of the countries we visited. We determined that the data on the status of information sharing were sufficiently reliable for our purposes. However, we were unable to view the signed Homeland Security Presidential Directive 6 agreements because Justice's Terrorist Screening Center declined our request for access to the agreements.
We also met with foreign government officials from agencies involved with VWP information-sharing agreement negotiations in the six countries we visited to discuss their views regarding information-sharing negotiations with U.S. officials. In addition, we discussed the status of the sharing of information on lost and stolen passports with Interpol officials in France. Interpol officials were unable to provide country-specific statistics regarding the sharing of lost and stolen passport information because of Interpol's data privacy policy. To assess DHS efforts to complete timely biennial reviews of each VWP country, we reviewed DHS documents, as well as the links to completed reviews on the DHS intranet Web site, to determine whether the reviews were completed in a timely manner. We also reviewed a 2006 GAO report that recommended improvements to the timeliness of DHS's biennial reporting process. We conducted this performance audit from January 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The official ESTA application can be completed online at https://esta.cbp.dhs.gov/esta/. (See fig. 5.) DHS officials told us they actively publicize the official Web site because many unofficial Web sites exist that charge an additional fee to fill out an application for an individual. They said the unofficial Web sites are not fraudulent as long as they do not use the official DHS or ESTA logos and do provide the service they promise. In addition to the individual named above, Anthony Moran, Assistant Director; Jeffrey Baldwin-Bott; Mattias Fenton; Reid Lowe; and John F. Miller made key contributions to this report. Martin DeAlteriis, Joyce Evans, Etana Finkler, Richard Hung, Mary Moutsos, Jena Sinkfield, and Cynthia S. Taylor also provided technical assistance.
The Visa Waiver Program (VWP) allows eligible nationals from 36 member countries to travel to the United States for tourism or business for 90 days or less without a visa. In 2007, Congress required the Secretary of Homeland Security, in consultation with the Secretary of State, to implement an automated electronic travel authorization system to determine, prior to travel, applicants' eligibility to travel to the United States under the VWP. Congress also required all VWP member countries to enter into an agreement with the United States to share information on whether citizens and nationals of that country traveling to the United States represent a security threat. In 2002, Congress mandated that the Department of Homeland Security (DHS) review, at least every 2 years, the security risks posed by each VWP country's participation in the program. In this report, GAO evaluates (1) DHS's implementation of an electronic system for travel authorization; (2) U.S. agencies' progress in negotiating information-sharing agreements; and (3) DHS's timeliness in issuing biennial reports. GAO reviewed relevant documents and interviewed U.S., foreign government, and travel industry officials in six VWP countries. DHS has implemented the Electronic System for Travel Authorization (ESTA) and has taken steps to minimize the burden associated with the new program requirement. However, DHS has not fully evaluated security risks related to the small percentage of VWP travelers without verified ESTA approval. DHS requires applicants for VWP travel to submit biographical information and answers to eligibility questions through ESTA prior to travel. Travelers whose ESTA applications are denied can apply for a U.S. visa. In developing and implementing ESTA, DHS has made efforts to minimize the burden imposed by the new requirement. For example, although travelers formerly filled out a VWP application form for each journey to the United States, ESTA approval is generally valid for 2 years. Most travel industry officials GAO interviewed in six VWP countries praised DHS's widespread ESTA outreach efforts, reasonable implementation time frames, and responsiveness to feedback, but expressed dissatisfaction with the costs associated with ESTA. In 2010, airlines complied with the requirement to verify ESTA approval for almost 98 percent of VWP passengers prior to boarding, but the remaining 2 percent (about 364,000 travelers) traveled under the VWP without verified ESTA approval. DHS has not yet completed a review of these cases to know to what extent they pose a risk to the program. To meet the legislative requirement, DHS requires that VWP countries enter into three information-sharing agreements with the United States; however, only half of the countries have fully complied with this requirement, and many of the signed agreements have not been implemented. Half of the countries have entered into agreements to share watchlist information about known or suspected terrorists and to provide access to biographical, biometric, and criminal history data. By contrast, almost all of the 36 VWP countries have entered into an agreement to report lost and stolen passports. DHS, with the support of interagency partners, has established a compliance schedule requiring the last of the VWP countries to finalize these agreements by June 2012. Although termination from the VWP is one potential consequence for countries not complying with the information-sharing agreement requirement, U.S. officials have described it as undesirable.
DHS, in coordination with State and Justice, has developed measures short of termination that could be applied to countries not meeting their compliance date. DHS has not completed half of the most recent biennial reports on VWP countries' security risks in a timely manner. According to officials, DHS assesses, among other things, counterterrorism capabilities and immigration programs. However, DHS has not completed the latest biennial reports for 18 of the 36 VWP countries in a timely manner, and over half of these reports are more than 1 year overdue. Further, in the case of two countries, DHS was unable to demonstrate that it had completed reports in the last 4 years. DHS cited a number of reasons for the reporting delays. For example, DHS officials said that they intentionally delayed report completion because they frequently did not receive mandated intelligence assessments in a timely manner and needed to review these before completing VWP country biennial reports. GAO recommends that DHS establish time frames for the regular review of cases of ESTA noncompliance and take steps to address delays in the biennial review process. DHS concurred with the report's recommendations.
LSC is a private, nonprofit corporation that is federally funded for the purpose of making federal resources available to support local providers of civil legal services for low-income people, with the goal of providing equal access to the justice system "for individuals who seek redress of grievances" and "who would be otherwise unable to afford adequate legal counsel." Since LSC was federally chartered by statute over three decades ago in the LSC Act, Congress has been making annual appropriations to LSC to provide grants to eligible legal service providers to carry out the LSC Act's requirement "to provide the most economical and effective delivery of legal assistance." Since 1996, LSC has been required to select its grant recipients through a competitive award process. Today, LSC funds grant recipients in all 50 states, as well as the District of Columbia and all five U.S. territories. In fiscal year 2006, LSC reported distributing a total of $313.9 million in grants. Local legal service providers employ staff attorneys to assist eligible clients in resolving their civil legal problems, often through advice and referral. According to LSC, in a typical year the largest portion of total cases (38 percent) concerns family matters, followed by housing issues (24 percent), income maintenance (13 percent), and consumer finance (12 percent). LSC reported that most cases are resolved out of court. In 2007, LSC reported that three out of four clients were women, most of them mothers. Most clients were at or below 125 percent of the federal poverty threshold, currently an income of approximately $25,000 a year for a family of four. The type of legal assistance that LSC funding supports is subject to certain legal restrictions. By law, for example, LSC cannot provide funds for legal services related to a proceeding for a violation of the Military Selective Service Act, for participation in litigation related to abortion, or for a criminal proceeding. In 1974, Congress enacted the LSC Act to transfer the functions of the Legal Services Program from the Executive Office of the President into a private corporation. Through the LSC Act, Congress chartered LSC in the District of Columbia as a private, nonmembership, nonprofit corporation that would not be considered a department, agency, or instrumentality of the federal government. Under its federal charter (the LSC Act), LSC may only pursue activities consistent with the corporate purpose of "providing financial support for legal assistance in noncriminal proceedings or matters to persons financially unable to afford legal assistance." To direct the corporation, the LSC Act provides for a bipartisan Board of Directors consisting of 11 voting members who are appointed by the President of the United States with the advice and consent of the U.S. Senate. Neither the President nor the Senate has the power to remove a director. A director can be removed only for cause, such as a persistent neglect of duties, by a vote of at least 7 directors. Although the LSC Act does not require board members to possess management or financial expertise, it does include some membership requirements: no director may be a full-time U.S. government employee, a majority of the directors must be attorneys belonging to the bar of the highest court of a U.S. state, and at least one director must be from the legal service client community.
The LSC Act requires the board to meet at least four times each calendar year and prohibits board members from participating in any decision, action, or recommendation related to a matter that directly benefits the board member or pertains specifically to any entity with which the board member has been associated in the past 2 years. The LSC Act prohibits LSC personnel and grant recipients from engaging in certain prohibited activities, such as legal assistance related to a criminal proceeding or participation in litigation related to an abortion, and the LSC Board of Directors, which is charged with managing the affairs of the corporation, is responsible for ensuring compliance with these restrictions. The LSC Act requires the Board of Directors to appoint the LSC President and any other necessary officers, and provides that the LSC President may appoint any employees necessary to carry out LSC's purposes. LSC officers and employees can be fairly easily appointed and removed, creating essentially at-will employment relationships. In addition to the power to appoint and remove LSC employees and to serve as an ex officio, nonvoting member of the Board of Directors, the LSC President, who is the only officer specifically provided for in the LSC Act, is authorized to make grants and enter into contracts that bind LSC. As a D.C. nonprofit corporation, LSC generally possesses all the powers conferred on such corporations under the D.C. Nonprofit Corporation Act, which includes a number of general corporate powers, such as the power to sue and be sued in its corporate name, exercise a number of rights related to real and personal property, enter into contracts, and borrow money and issue debt obligations. Other corporate powers include investing and lending money, appointing officers and agents and defining their duties and fixing their compensation, making bylaws to administer and regulate corporate affairs, and "hav[ing] and exercis[ing] all powers necessary or convenient to effect any or all of the purposes for which the corporation is organized." LSC's exercise of such corporate powers, however, is restricted where inconsistent with the LSC Act. For example, the LSC board's discretion in fixing its officers' and employees' compensation is limited by an LSC Act provision prohibiting LSC from compensating its personnel at rates in excess of the rate of level V of the Executive Schedule. Unlike most D.C. nonprofit corporations, LSC's exercise of its corporate powers has received additional oversight since 1988, when Congress subjected LSC to the Inspector General Act of 1978, as amended (IG Act). As an independent office within LSC, the LSC OIG is authorized to carry out audits and investigations of LSC programs and operations, recommend policies to improve program administration and operations, and keep the LSC board and Congress fully and currently informed about problems in program administration and operations and the need for and progress of corrective action. Also, unlike most D.C. nonprofit corporations, LSC is subject to congressional oversight through the annual appropriations process, as well as through responses to congressional inquiries and participation in hearings. In its annual appropriation for LSC, Congress regularly appropriates a specific amount for the OIG. For example, Congress appropriated about $2.54 million for the LSC OIG in fiscal years 2006 and 2007.
Because in fiscal year 2007 LSC received an increase in its annual appropriation of about $17.78 million that was not allocated for a specific purpose, LSC officials told us that LSC, consistent with congressional guidance, used $430,000 of this amount to increase funding for the OIG from about $2.54 million in fiscal year 2006 to $2.97 million in fiscal year 2007. (See fig. 1.) It has been three decades since LSC was last comprehensively reviewed and reauthorized in the Legal Services Corporation Amendments Act of 1977, and LSC's statutory framework has undergone only limited changes since then. Today LSC is governed by the powers and restrictions in its federal charter (the LSC Act) and, where not inconsistent, the D.C. Nonprofit Corporation Act, as well as the IG Act, the federal tax law requirements for tax-exempt status for nonprofit corporations, and LSC's annual appropriations acts, which since 1996 have included a number of administrative provisions imposing additional grants management duties. Unlike most private, nonprofit corporations, LSC receives the vast majority of its funding from annual federal appropriations, which originally were authorized under the LSC Act. The LSC Act specifies that the appropriated funds authorized under the act are available until expended and shall be paid to LSC in one annual installment at the start of the fiscal year. Although annual appropriations for LSC have not been authorized under the LSC Act since fiscal year 1980, Congress has continued to enact annual appropriations to be paid to LSC to carry out the purposes of the LSC Act. For fiscal year 2007, Congress appropriated almost $349 million for LSC. The LSC Act permits LSC to receive and retain nonfederal funds, but LSC's recent audited financial statements show that for fiscal years 1991 through 2006, approximately 99 percent of LSC's revenues came from federal appropriations. In addition to direct funding through annual appropriations, the LSC Act makes certain indirect federal support available to LSC by providing that the President of the United States may make support functions of the federal government available to LSC. For both governmental and nonprofit entities, governance can be described as the process of providing leadership, direction, and accountability in fulfilling the organization's mission, meeting objectives, and providing stewardship of public resources, while establishing clear lines of responsibility for results. Accountability represents the processes, mechanisms, and other means—including financial reporting and internal controls—by which an entity's management carries out its stewardship and responsibility for resources and performance. To provide accountability to Congress, the LSC Act provides for Senate advice and consent on the selection of board members, annual appropriations that constitute virtually all of LSC's annual revenues, and treatment of LSC as a federal entity in limited situations, either by directly subjecting LSC to certain federal laws or indirectly by modeling provisions in the LSC Act after provisions in laws existing in the 1970s.
For example, the LSC Act makes LSC subject to provisions in the Freedom of Information Act (FOIA) and the Government in the Sunshine Act, compensation limits imposed on officers and employees at level V of the Executive Schedule, and employer contribution requirements for participation in certain employee benefits programs, as well as requiring LSC to engage in notice-and-comment rule making and to provide us with access to its records. Although LSC is subject to more statutory governance and accountability requirements than most private, nonprofit corporations, those requirements are weaker than the ones that apply to most independent federal agencies headed by boards or commissions and to U.S. government corporations. In chartering a private, nonprofit corporation to perform a public assistance role with federal funding, Congress in the 1970s included certain provisions in the LSC Act to provide for governance and accountability. The LSC Act includes provisions providing that LSC shall be treated like a federal agency for purposes of specified statutes that existed in the 1970s when the LSC Act was first enacted and amended. In 1988, Congress created an OIG within LSC. Therefore, LSC is subject to some governance and accountability requirements that are comparable to those of federal entities, including the presence of an OIG in the governance structure and submission of its budget for the congressional appropriations process. Nonprofit corporations typically are subject to limited federal requirements related to governance and accountability; however, as discussed later, many nonprofit corporations have voluntarily chosen to incorporate practices in these areas. In other respects, LSC is not subject to standard governance and accountability requirements for federal entities, including provisions related to performance and financial reporting, internal controls, and funds control. Additional management areas are discussed in appendix III, and an expanded table is in appendix IV. Similar to most independent federal agencies and U.S. government corporations, LSC is headed by a multiperson body (i.e., a commission or board of directors) consisting of presidentially appointed and Senate-confirmed members, and it has an OIG. (See table 1.) A common form of governance for independent federal agencies and U.S. government corporations is a multiperson body consisting of either a board of directors (agencies and corporations) or a commission (only agencies), whose members are generally appointed by the President of the United States and confirmed by the U.S. Senate. For example, the President appoints and the Senate confirms the members of the boards of directors for the Federal Deposit Insurance Corporation (FDIC) and Pension Benefit Guaranty Corporation (PBGC) (both U.S. government corporations), the National Science Foundation (NSF) and the Federal Housing Finance Board (both independent federal agencies), as well as the commissioners of the Securities and Exchange Commission (SEC) and the Nuclear Regulatory Commission (NRC) (both independent federal agencies). The directors of LSC may be removed only for cause by a vote of seven other directors. This level of statutory removal protection is unique in two ways. First, it restricts the reasons for removal to only those listed in the statute, and second, it precludes removal by the President of the United States or Congress.
In many cases, the board or commission members of a federal entity have less tenure protection and serve at the will of the President of the United States, such as the PBGC directors, who are the Secretaries of Labor, the Treasury, and Commerce. Nonprofit corporations incorporated in the District of Columbia are required to be managed by a board of directors, consisting of at least three directors, who serve for the terms specified in the articles of incorporation or bylaws. A director of a D.C. nonprofit corporation may be removed by any procedure provided in the articles of incorporation or bylaws. If not so provided, then removal with or without cause is permitted upon a vote that would suffice for the election of a director for the organization. No federal law specifically requires the board of directors of a U.S. government corporation or a board of directors or commission of an independent federal agency to designate audit or other committees, but neither does any law prohibit the establishment of such committees. The D.C. Nonprofit Corporation Act expressly authorizes, but does not require, boards of nonprofit corporations to designate and delegate authority to committees. In certain instances, the statutes establishing federal entities may authorize the designation and delegation of authority to committees, such as the statute governing NSF (an independent federal agency). Since 1977, there has been only one governmentwide management law that specifically included LSC as a covered entity and thus required a change to LSC's governance structure. In 1988, Congress amended the IG Act to add OIGs to additional entities receiving significant federal funding, including "designated federal entities" (DFE), which are statutorily defined. LSC was listed as a DFE, along with such other entities as PBGC, SEC, and Amtrak, which are, respectively, a wholly owned U.S. government corporation, an independent federal agency, and a federally established private, for-profit corporation that receives some federal funding. The only other private, nonprofit corporation included as a DFE was CPB. Like the other OIGs of DFEs that are independent federal agencies and U.S. government corporations, the LSC OIG was created as an "independent and objective" office to carry out audits and investigations of LSC programs and operations, recommend policies to improve program administration and operations, and keep the LSC board and Congress fully and currently informed about problems in program administration and operations and the need for and progress of corrective action. As noted earlier, Congress regularly appropriates a specific amount for the OIG in LSC's annual appropriation. Like other private, D.C. nonprofit corporations, LSC is not subject to federal funds control laws that generally apply to independent federal agencies and many U.S.
government corporations, including the Antideficiency Act, the Purpose Statute, and laws governing liability of accountable officers for improper or illegal uses of funds; however, LSC is required to submit an annual budget request to Congress. (See table 2.) Like many independent federal agencies and wholly owned government corporations, LSC receives most of its annual revenues from federal funds made available through annual appropriations; however, LSC is not required by law to control its use of those funds as are independent federal agencies and wholly owned U.S. government corporations. The Antideficiency Act, among other things, prohibits officers and employees of the government from obligating or expending funds in advance of or in excess of appropriations. This applies to the officers and employees of independent federal agencies and wholly owned U.S. government corporations, where personnel are officers and employees of the government. The Purpose Statute requires federal agencies and all U.S. government corporations, both mixed ownership and wholly owned, to use appropriated funds only for the purposes provided in law. Further, for most federal agencies and some wholly owned U.S. government corporations, such as the Tennessee Valley Authority and Federal Prison Industries, Inc., accountable officers are financially liable for improper or illegal payments. None of these funds control statutes applies to LSC or, in general, other nonprofit corporations that receive federal funds. The LSC Act does contain a number of provisions that restrict the use of LSC's appropriated funds for certain purposes, such as any activity that would influence the passage or defeat of legislation at the local, state, or federal level or that would support any political party or campaign of any candidate for public office. Unlike D.C. nonprofit corporations in general, and like independent federal agencies and wholly owned U.S. government corporations, each year LSC must prepare a new budget request as part of the annual appropriations process. The LSC Act requires LSC to submit a budget request to Congress but provides no requirements related to the form and content of the budget request. For federal agencies and wholly owned U.S. government corporations, the Office of Management and Budget (OMB) prescribes the form and content of budget requests, consistent with specified statutory requirements, that are submitted through the President to Congress. Under the LSC Act, LSC submits its budget request directly to Congress, with OMB's role limited to submitting comments to Congress if it chooses to review LSC's budget. As a federally chartered, private nonprofit D.C. corporation, CPB also must annually prepare a budget request as part of the annual appropriations process. Unlike LSC, however, CPB requests and receives funding 2 years in advance (e.g., funding for fiscal year 2008 was provided in the fiscal year 2006 appropriations act). Once the annual appropriations act is enacted, CPB's appropriation is paid into the Public Broadcasting Fund, which is a fund established in the Treasury and administered by the Secretary of the Treasury. In accordance with CPB's federal charter, CPB determines how to allocate amounts in the fund. Unlike D.C. nonprofit corporations in general, but like CPB, LSC is required by the LSC Act to have its accounts audited annually. By contrast, independent federal agencies and U.S.
government corporations are subject to more detailed financial and performance planning and reporting requirements. When the LSC Act was enacted in the 1970s, audited financial statements were not prepared for federal agencies, and LSC, as a private, nonprofit corporation, was not subject to the financial audit requirements imposed on public companies and U.S. government corporations. The LSC Act requires LSC to have its accounts audited by an independent public accountant annually in accordance with generally accepted auditing standards (GAAS). The LSC Act does not detail what must be included in the report or which accounting standards to use. The LSC Act requires LSC to file this annual audit report with us and make the audit report available for public inspection at LSC headquarters during normal business hours. (See table 3.) The LSC Act requirements for financial reporting are more rigorous than the requirements for D.C. nonprofit corporations in general but less rigorous than those for CPB. Most D.C. nonprofit corporations are required only to keep correct and complete books and records of account and minutes of the proceedings of their boards of directors. This information is not required to be published or made available for public inspection. Similar to LSC, CPB is required to have its accounts audited annually by an independent public accountant in accordance with GAAS. CPB's audit report must be included in its annual report on its operations and activities, which it must submit to the President for transmittal to Congress. Like most D.C. nonprofit corporations, LSC is not required to submit a similar annual report on its operations and activities to the President or Congress. Independent federal agencies and U.S. government corporations have stronger financial and performance reporting requirements than LSC. The Chief Financial Officers Act of 1990 (CFO Act), as amended by the Government Management Reform Act of 1994 (GMRA), requires the 24 major agencies of the federal government, including some independent federal agencies such as NSF and NRC, to submit annual audited financial statements to OMB and Congress. These financial statements must be prepared in accordance with generally accepted accounting principles and audited in accordance with applicable generally accepted government auditing standards (GAGAS). The Accountability of Tax Dollars Act of 2002 (ATDA) expanded this requirement to include most other federal executive agencies. U.S. government corporations had been subject to financial reporting requirements for many years under chapter 91 of title 31 of the U.S. Code, commonly known as the Government Corporation Control Act, which requires both mixed-ownership and wholly owned U.S. government corporations to submit annual management reports to Congress (with copies to the President, OMB, and GAO) no later than 180 days after the end of the government corporation's fiscal year. OMB has accelerated the submission deadline to no later than 45 days after the end of the government corporation's fiscal year. Annual management reports are required to include a statement of financial position; a statement of operations; a statement of cash flows; a reconciliation to the budget report of the corporation, if applicable; a statement on internal accounting and administrative control systems by the head of corporation management, consistent with the requirements of 31 U.S.C.
§ 3512(c), (d), commonly referred to as the Federal Managers' Financial Integrity Act of 1982 (FMFIA); a financial statement audit report prepared in accordance with GAGAS; and any other information necessary to inform Congress about the operations and financial condition of the corporation. Under OMB Circular No. A-136, Financial Reporting Requirements (rev. July 24, 2006), annual performance and accountability reports (PAR) issued by federal executive agencies consist of the annual performance report required by the Government Performance and Results Act of 1993, with audited financial statements and other disclosures, such as agencies' (1) assurances on internal control, (2) accountability reports by agency heads, and (3) inspectors general's assessments of the agencies' most serious management and performance challenges. OMB Circular No. A-136 states that PARs are intended to provide financial and performance information to enable the President, Congress, and the public to assess the performance of a federal agency relative to its mission and to demonstrate the federal agency's accountability. LSC follows a fiscal year starting on October 1 and for the past 5 years has issued its financial statements in March or later, which is 6 months after its year-end. As noted, federal agencies are required to issue their financial statements 45 days following their year-ends, which is mid-November. LSC's statutory requirements for internal control systems are less rigorous than those for independent federal agencies or U.S. government corporations; D.C. nonprofit corporations have no such statutory requirements. (See table 4.) The LSC Act requires LSC to account for federal funds separately from nonfederal funds but otherwise includes no specific requirements for the establishment of accounting and internal control systems. The LSC Act imposes some program management duties on the LSC directors to promote good stewardship of federal taxpayer dollars by requiring that the directors manage LSC's programs economically, effectively, and efficiently. For example, the LSC Act requires the LSC board to ensure that LSC makes grants "so as to provide the most economical and effective delivery of legal assistance to persons in both urban and rural areas." The LSC Act also requires the board to ensure that grant recipients adopt procedures for determining priorities on how to allocate their assistance among eligible clients. Additionally, the LSC Act imposes a program evaluation requirement on the board, requiring it to monitor, evaluate, and provide for independent evaluations of LSC-supported programs to ensure that the programs comply with the LSC Act; bylaws; and implementing rules, regulations, and guidelines. Although the LSC Act includes program management requirements, these are much less rigorous than the requirements for systems of internal control to which federal entities are subject. Managers of federal entities depend on sufficient internal control to achieve desired results through effective stewardship of organizational resources. Internal control, which supports performance-based management, involves the methods and procedures management uses to have reasonable assurance that objectives such as the following are being met: effectiveness and efficiency of operations, reliability of financial reporting, and compliance with applicable laws and regulations. Federal agencies are subject to the following legislative and regulatory requirements that promote and support effective internal control.
FMFIA, or 31 U.S.C. § 3512(c), (d), provides the statutory basis for management's responsibility for and assessment of internal control. OMB Circular No. A-123, Management's Responsibility for Internal Control (rev. Dec. 21, 2004), sets out the guidance for implementing the statute's provisions, including agencies' assessment of internal control under the standards prescribed by the Comptroller General. Agencies are required to annually provide a statement of assurance on the effectiveness of internal control. U.S. government corporations are not subject to FMFIA, but they are subject to similar requirements under the Government Corporation Control Act, which incorporates by reference the FMFIA standards in requiring U.S. government corporations to include in their annual management reports a statement on internal accounting and administrative control systems. The CFO Act requires the 24 CFO Act agencies' chief financial officers (CFO), including the CFOs of such independent federal agencies as NSF and NRC, to maintain an integrated accounting and financial management system that includes financial reporting and internal controls. The Federal Financial Management Improvement Act of 1996, as implemented by OMB Circular No. A-127, Financial Management Systems (rev. Dec. 1, 2004), requires the 24 CFO Act agencies to implement and maintain integrated financial management systems that comply substantially with federal financial management system requirements, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. Recent federal governmentwide initiatives have contributed to improvements in financial management and placed greater emphasis on implementing and maintaining effective internal control over financial reporting. In December 2004, OMB issued a significant update to its Circular No. A-123, which is the implementing guidance for FMFIA. The update requires the 24 CFO Act agencies to include the FMFIA annual report in their PARs, under the heading "Management Assurances." The FMFIA annual report must include a separate assurance on internal control over financial reporting, along with a report on identified material weaknesses and actions taken by management to correct those weaknesses. FMFIA and OMB Circular No. A-123 apply to each of the three objectives of internal control outlined in GAO's Standards for Internal Control in the Federal Government: effective and efficient operations, reliable financial reporting, and compliance with applicable laws and regulations. OMB Circular No. A-123 calls for internal control standards to be applied consistently toward each of the objectives. The circular's new Appendix A, which is a requirement only for the 24 CFO Act agencies, requires management to document the process and methodology for applying A-123 standards when assessing internal control over financial reporting. One important area of internal control today for both independent federal agencies and U.S. government corporations is the development and implementation of an entitywide information security program, as required by the Federal Information Security Management Act of 2002 (FISMA).
As part of that program, FISMA requires entity heads to periodically (1) perform risk assessments of the harm that could result from information security problems, such as the unauthorized disclosure or destruction of information; (2) test and evaluate the effectiveness of elements of the information security program; and (3) provide security awareness training to personnel and contractors. FISMA also requires the federal entity to annually have its OIG or an external auditor perform an independent evaluation of the entity's information security programs and practices to determine their effectiveness and to annually submit a report on the adequacy and effectiveness of information security to OMB, GAO, and Congress. Because it is not a federal entity, LSC, like CPB and other D.C. nonprofit corporations, is not subject to FISMA and has no special information security requirements.

LSC board members are actively engaged in the board meeting process, consistently attending and preparing for board and committee meetings. Board meetings are generally attended by all board members. Board members are provided with an agenda and related materials prior to each board meeting. In addition, board members have interaction with both management and the Inspector General (IG). Nevertheless, the current governance practices of LSC's board fall short of the accepted practices employed today by boards of nonprofit corporations and public companies. Although LSC has an informal orientation program for its members, the board does not have a comprehensive, formal orientation or an ongoing training program for board members. Keeping up with current practice is especially important for the LSC board because board composition changes significantly with each new presidential administration, resulting in a board that generally does not have the benefit of experienced members. Also, although the board has four established committees, it has not updated its committee structure to include an audit committee or other committees commonly found in nonprofit corporations or public companies today. In addition, the board's current committees do not have charters that identify their purposes and duties, which boards of similar organizations would typically have. Finally, the board does not assess its own performance. Because it has not incorporated many practices currently considered necessary for effective governance, LSC's Board of Directors is at risk of not fulfilling its role in effective governance in keeping with its fiduciary duties. In fact, recent incidents of compensation rates that exceeded statutory limitations, questionable expenditures, and potential conflicts of interest might have been prevented by a properly implemented governance structure. The current LSC board's 10 members have attended most or all of the board meetings in recent years. A few board members indicated that their LSC board member role has been more time-consuming than they had expected or had experienced as board members with other organizations. According to our survey, most board members are satisfied or very satisfied with the frequency of the board meetings as well as the timeliness and completeness of the information provided (in the board books) to the board members to prepare for meetings. Board members are provided with an agenda and a package of related materials to assist them in preparing for each board meeting.
During interviews with us, board members indicated that they also receive information regularly through e-mails and mailings in addition to the board books—primarily from the LSC President. Board members were generally satisfied with their interaction with management, according to our survey, while board members interviewed indicated a range of interaction with the IG—some members only receive information such as the IG reports while others directly discuss issues with the IG. The LSC board has established a conflict-of-interest policy that requires board members to annually file financial, ownership, and relationship disclosure reports. LSC's current board of directors carries out its activities primarily during the quarterly meetings of the full board and individual committees. Although the board has established committees with specific members, the committee meetings are typically not held concurrently, and most, if not all, board members attend all of the committee meetings, which one board member felt was redundant. The annual board meeting is typically held in January in Washington, D.C., while the remaining three board meetings take place during site visits, most recently in Little Rock, Arkansas, in April 2007. As needed, the board and committees hold additional meetings or teleconference calls to handle necessary business. Semiannually, the board issues a report to Congress that discusses LSC's accomplishments. The board's most recent activities have included the finance committee reviewing financial results and discussing the budget, the annual performance review committee completing its performance appraisal of the LSC President and IG, and the operations and regulations committee reviewing the proposed employee handbook, approving the handbook, and providing the handbook to the board for its review and approval.

The LSC board currently has an informal orientation program whereby its members are introduced briefly to the LSC program and legal requirements, but the orientation does not include key information on oversight and fiduciary responsibilities. LSC's orientation program also does not provide specific information on Washington, D.C. law governing nonprofits; the Internal Revenue Service (IRS) regulatory requirements for nonprofit organizations; interpreting LSC's financial statements; managing sensitive documents; FOIA requirements; or travel expenditure limitations. New director training is a basic tool used by well-functioning boards. It takes time for board members to learn about the responsibilities of their positions and the workings of the organization. If board members do not receive a comprehensive orientation about their responsibilities and the unique requirements of the organization they are responsible for directing, then they must learn as they serve, potentially reducing their effectiveness in fulfilling their governance roles and responsibilities. Current practice for public companies and nonprofit corporations is to provide board members with a broad-based orientation that encompasses the organization's mission, vision, and strategic plan; its history; the members' obligations and performance objectives; board policies on meetings and attendance; and board member job descriptions, including their performance expectations and their fiduciary obligations. The purpose of such a program is to prepare board members for effectively fulfilling their oversight and governance role in the organization.
Most (7 out of 10) of the current board members, in responding to our survey, indicated that they received orientation or training on their responsibilities as a board member. During interviews, some board members who had attended orientation said it consisted of a day of individual meetings, which was helpful. Our review of the orientation materials provided to us by management indicated that topics covered included the role of the IG and the General Counsel. During interviews, board members who did and did not receive orientation indicated that LSC could improve board member orientation. For instance, one board member said that the 1-day orientation provided an understanding of what LSC does, but did not necessarily provide general training on how to be a board member. The LSC board also does not have an ongoing (e.g., annual) training program for its board members. A board needs to stay current with information on changes in governance practices and in its regulatory environment. Additionally, a board needs to be kept up-to-date on key management practices and requirements in such areas as risk assessment and mitigation, internal controls, and financial reporting so that the board can oversee management's key processes. As the environment that a board operates in changes, new issues—whether regulatory, current practice, or industry specific—emerge with the changes. For instance, although most of the requirements of the Sarbanes-Oxley Act of 2002 do not apply to a nonprofit corporation or its board, the act has had a significant impact on the operating environment, and many of its requirements have become current practice for nonprofit corporations. An ongoing training program enables a board to stay abreast of current governance practices and fiduciary duties. When we interviewed board members, some noted that they stay current on governance practices by reading materials provided by professional associations, LSC management, or the IG, as well as through seminars they may attend as part of their role on LSC or other boards. While this individual initiative is valuable, board members' experience and knowledge vary, and without an ongoing training program that can equip all members with the same knowledge, board members risk being unable to work together as an efficient and effective body.

A board establishes committees to aid the board's organization and facilitate accomplishing the board's work. Depending upon the board's needs, committees may be either standing (permanent) or ad hoc (for a particular activity). Committees handle specific issues or topics and make policy recommendations for the full board to consider. LSC's board has four standing committees. However, it does not have an audit committee, compensation committee, or ethics/compliance (ethics) committee—all of which are commonly found in public companies and nonprofit organizations. Table 5 lists LSC's current board committees and the responsibilities of each committee. LSC's board does not have an audit committee, which is a key element in effective corporate governance today. According to the National Council of Nonprofit Associations, an audit committee provides independent oversight of the organization's accounting and financial reporting and oversees the organization's annual audits.
An audit committee is generally responsible for the appointment, compensation, and oversight of the external auditor; handling board communication with the external auditor regarding financial reporting matters; and overseeing the entity's financial reporting and the adequacy of internal control over financial reporting. The audit committee also serves the important role of assuring the full board of directors that the entity has the appropriate culture, personnel, policies, systems, and controls in place to safeguard entity assets and to accurately report financial information to internal and external users. Under the Sarbanes-Oxley Act of 2002, public companies are required to have an audit committee made up of independent directors, including at least one financial expert, to oversee the company's financial reporting and audit processes. Although LSC's board has a finance committee, the finance committee's responsibilities do not include those responsibilities required of public company audit committees or those recommended for nonprofit organizations' audit committees. In general, the LSC board's finance committee is responsible for reporting on legislation and LSC's appropriations as well as monitoring LSC's budget. Given LSC's status as a federally funded nonprofit corporation, these are important activities that are appropriately handled by a board-level committee. However, the finance committee's current functions do not include overseeing the audit process or communicating with the auditor about financial reporting matters, which generally are the responsibilities of the IG. The finance committee chair indicated to us that he has had minimal interaction—primarily discussion about the annual meeting presentation—with the independent auditor. New auditing standards reinforce the importance of communication between the auditor and those overseeing governance of an entity—typically the audit committee representing the board. FDIC, a mixed-ownership U.S. government corporation that, like LSC, has an IG responsible for appointing the external auditor, established an audit committee with the responsibility of ensuring that IG recommendations are appropriately implemented by the organization. An audit committee at LSC could enhance the governance structure by representing the board in communicating with the external auditor and the IG, and by ensuring that IG recommendations and any weaknesses found during the financial audit process are appropriately addressed by LSC's management. In addition, an audit committee's oversight of LSC's financial reporting on behalf of the board would enhance the board's effectiveness.

LSC's board does not have a compensation committee. A compensation committee is an accepted current practice for nonprofit corporations and is required for public companies listed on the New York Stock Exchange (NYSE). A compensation committee of a board monitors the compensation structure of the organization. According to the publication Corporate Governance Best Practices, the compensation committee's responsibilities should include overseeing the organization's compensation structure, policies, and programs; establishing or recommending to the board performance goals and objectives for members of senior management; and establishing or recommending to the independent directors compensation for the chief executive officer.
For LSC, this would include approving the LSC President's contract, which includes the length of the contract and amount of compensation, and providing oversight of LSC's compensation structure. LSC currently does have an annual performance review committee that is responsible for annually evaluating the performance of the LSC President and IG, but it is not responsible for the compensation structure and policies of the organization. For advice on complex compensation matters, board compensation committees frequently use outside consultants. One such matter is tracking the total cost of senior management's compensation packages so the board has a full understanding of the organization's executive compensation. For LSC, an outside consultant could assist the board in understanding the statutes and regulations that specifically apply to LSC officer and employee compensation. It is also a current practice for the minutes of the compensation committee to reflect and record arm's-length negotiations with the executive and his or her attorney, including each proposal and counteroffer. Current practice also has the internal auditor verify that compensation paid to senior management did not exceed what the board approved. During our work, we noted that the fiscal year 2006 salaries of all five LSC officers, three LSC OIG personnel (including the IG), and four LSC employees exceeded the statutory compensation limitation. Each affected officer's or employee's total salary in fiscal year 2006 exceeded the annual limitation on the rate of compensation established by the LSC Act as the rate of level V of the Executive Schedule. Because the compensation of LSC personnel is limited by the LSC Act to this rate, we questioned why certain personnel received higher rates of pay. LSC officials told us that the total salary included basic pay and a locality pay adjustment. The locality portion of their compensation caused the compensation limitation to be exceeded for the affected LSC personnel. After we asked LSC officials to justify this practice, they told us that during 2007 LSC's board had engaged outside legal counsel to issue an opinion on whether LSC violated the statutory compensation limitation. In May 2007, the outside counsel issued an opinion to LSC concluding that LSC had not complied with the statutory limitation on the rate of compensation. We agree with outside counsel's conclusion. Although LSC senior management did not state whether it agrees with the conclusion in outside counsel's legal opinion, LSC management told us that it is working with the LSC Board of Directors and LSC's appropriations and authorizing committees to take appropriate corrective action. We also noted that the Chairman of the board conducted the most recent contract renewal negotiations with LSC's President, based on a delegation of this responsibility from the full board. However, the contract renewal negotiations were conducted before the annual performance review committee had given the LSC President her annual review in January 2007 and, thus, without the benefit of information from the performance evaluation. Exceeding the limitations on the annual rate of compensation for certain LSC personnel and conducting negotiations of the President's contract renewal without relevant performance evaluation information could have been avoided with properly designed and implemented procedures for overseeing LSC's compensation structure and policies.
Without a properly designed and implemented process for overseeing compensation, LSC remains at risk of not complying with related laws and regulations and of engaging in imprudent management practices. Although LSC operates in an ethically sensitive environment, its board does not have an ethics committee. An ethics committee is responsible for ensuring that the corporation has systems in place to provide assurance over employee compliance with the corporation's code of conduct and ethics, a code that LSC also lacks. Ethics is important as a component of the control environment that helps to set the tone at the top of an organization. According to Standards for Internal Control in the Federal Government, a positive control environment includes integrity and ethical values that are provided by leadership through setting and maintaining the organization's ethical tone, providing guidance for proper behavior, removing temptations for unethical behavior, and providing discipline when appropriate. Having an ethics committee on the board is an emerging current practice for providing independent oversight of the organization's code of conduct and the systems in place to help ensure employee compliance with the code. In recent years, LSC management has engaged in practices that might have been prevented through effective implementation of strong ethics policies. In September 2006, LSC's OIG issued a report detailing these practices at LSC, based on a request from Congress. The OIG found that food costs at meetings exceeded per diem allotments by 200 percent and that LSC used funds to pay travel expenses for its President for business related to her positions with outside organizations. The OIG also found that LSC hired acting special counsels from grant recipient organizations, causing potential conflicts of interest. The acting special counsels are responsible for providing LSC management with advice on policy while also being employees of organizations that receive LSC grant money. The OIG—based on a complaint from a confidential source—began investigating one acting special counsel's organization but reported that it had been unable to complete the investigation because the organization had failed to provide documentation required by federal law and LSC grant agreements. Without a strong ethics committee providing effective oversight of the development, implementation, updating of, and training for a code of ethics, the corporation is at increased risk of fraud or other ethical misconduct.

The LSC board and its committees do not have charters that establish their purposes and responsibilities. A charter is used to define a committee's purpose, membership, and members' oversight duties and responsibilities. LSC has a board resolution that provides descriptions of the committees, but the resolution does not contain the elements of a charter and, for three of the four committees, has not been updated since it was issued in 1995. The fourth committee was established in 2003. Current practice is for boards and their committees to each have a written charter that outlines responsibilities, structure, membership criteria, and processes. Current practice also includes reevaluating the charter periodically to see if it needs updating. A charter benefits the board by providing a foundation and focal point for board activities.
In addition, the board’s activities can periodically be checked against the charter to ensure that they continue to conform to the charter and, if necessary, to update the charter. If the board and committees do not have charters with the appropriate descriptions of their purposes and responsibilities, the board is at increased risk that the board’s members will not be effective in carrying out their specific oversight responsibilities. The LSC board does not assess the board or committee performance collectively, or the individual performance of its board members. A board’s self-assessment allows the board to periodically determine whether it is meeting its intended goals and fulfilling its duties and provides information needed by the board to make adjustments to its processes and its oversight of management. Board assessments are common practice for nonprofit corporation boards and a NYSE listing requirement for audit committees of public companies. An assessment can include (1) an overall self-assessment of the entire board, (2) an assessment of the separate board committees, (3) individual board member assessments, or (4) all three. If a board does not assess its performance, it is missing a key opportunity for input from its own members for improving the board’s operations and governance policies. A self-assessment enables the board to identify areas for improvement in the board’s operating procedures, its committee structure, and its governance practices. Many of the issues we explored during the course of this audit could be evaluated through a board self-assessment. In addition, some board members told us that documents are not provided well enough in advance to allow a thorough review of the information prior to the meetings or that board members are not receiving the information that they need to fulfill their duties. Such situations could be identified and addressed by the board in a self-assessment. Without a feedback and assessment mechanism, the board runs the risk of not being aware of issues that need to be addressed to improve the board’s functioning. LSC’s management practices have not kept up with current practices in key areas. Specifically, we found that management has neither conducted a risk assessment nor implemented a risk management program to mitigate identified risks, which should include a comprehensive continuity of operations plan (COOP). Risk assessment programs identify the risks the corporation faces and risk mitigation allows management to implement policies that mitigate the risks. A well-designed and tested comprehensive COOP helps mitigate risks from unexpected incidents that can cause great damage and disruptions to operations. Also, senior management has not conducted an assessment of the organization’s internal controls and has not evaluated the financial reporting standards that should be used for its financial statements. Internal control assessment and monitoring are important because they provide reasonable assurance that internal control failures will be prevented or promptly detected. Financial reporting standards determine how an organization records its financial transactions and presents the financial statements. Without an internal control assessment and financial reporting standards, LSC management does not have adequate assurance that the assets and operations are protected, that funds are being used appropriately, and that related risks are being mitigated. 
A key role of the board is to oversee management practices in the areas of risk assessment and mitigation, internal control, and financial reporting. Management has not completed a thorough assessment of its internal controls or implemented risk mitigation policies in response to a systematic or formal risk assessment. According to the Standards for Internal Control in the Federal Government, internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Management of public companies is required under the Sarbanes-Oxley Act of 2002 to annually assess and report on the effectiveness of the company's internal controls over financial reporting. Since fiscal year 2006, management of the 24 CFO Act agencies has also been required by OMB guidance to assess and report on the effectiveness of the agencies' internal controls over financial reporting and compliance with laws and regulations as part of an overall internal control assurance process. As noted earlier, 31 U.S.C. § 3512(c), (d), or FMFIA, requires federal agencies to establish internal accounting and administrative controls. Assessing and reporting on the effectiveness of internal controls over financial reporting has become an accepted practice among nonprofit corporations. Internal control is an integral component of an organization's management that provides reasonable assurance that the following objectives are being achieved: effectiveness and efficiency of operations, reliability of financial reporting, and compliance with applicable laws and regulations. Internal controls serve as the first line of defense in safeguarding assets and preventing and detecting errors and fraud. The following are the five standards of internal control, which define the elements of internal control and provide the basis against which internal control is to be evaluated.

Control environment. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supporting attitude toward internal control.

Risk assessment. Internal control should provide for an assessment of the risks the entity faces from both external and internal sources.

Control activities. Internal control activities help ensure that management's directives are carried out. The control activities should be effective and efficient in accomplishing the entity's control objectives.

Information and communication. Information should be recorded and communicated to management and others within the entity who need it, in a form and within a time frame that enables them to carry out their internal control and other responsibilities.

Monitoring. Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are properly resolved.

The chief executive officer generally has primary responsibility for risk assessment and risk management under the direction of the board of directors. A risk assessment process includes such areas as operations, compliance, and financial reporting, in which management comprehensively identifies risks and considers significant interactions between the entity and external parties as well as internal risks at both the entitywide and activity levels. Risk assessment is also an integral part of the Committee of Sponsoring Organizations of the Treadway Commission internal control framework and an entity's effective implementation of internal controls.
All entities, regardless of size, structure, nature, or industry, encounter risks at all levels within their organizations. Through the risk assessment process, management determines how much risk is to be prudently accepted and strives to maintain risk within these levels. Auditing standards that became effective on or after December 15, 2006, cite ineffective oversight of the entity's financial reporting and internal control by those charged with governance, as well as an ineffective control environment, as indicators of control deficiencies and strong indicators of material weaknesses in internal control. The standards include the following examples of deficiencies in the design of controls that may be control deficiencies, significant deficiencies, or material weaknesses that would be reported by the auditor: (1) inadequate documentation of the components of internal control, (2) inadequate design of monitoring controls used to assess the design and operating effectiveness of the entity's internal control over time, and (3) the absence of an internal process to report deficiencies in internal control to management on a timely basis. According to LSC management, it relies on a cycle memorandum prepared by LSC's external auditor as management's assessment of internal controls. However, the cycle memorandum contains process descriptions and does not identify internal controls, their objectives, the assertions (completeness, rights and obligations, valuation, existence, and presentation and disclosure) that the controls are intended to ensure, or the risks that need to be addressed through controls. LSC's Treasurer/Controller told us that LSC management has not conducted its own formal assessment of internal controls. The Treasurer does conduct informal, ad hoc assessments of certain financial processes. However, these assessments are not utilized as part of a comprehensive internal control evaluation. Without comprehensive internal control assessment and monitoring, LSC is at risk that it will not prevent or promptly detect internal control failures, including unauthorized or improper use of federal funds or violations of laws or regulations in its operations.

LSC currently does not have a code of conduct that establishes a conflict-of-interest or ethics policy for its employees. A conflict-of-interest policy is intended to help ensure that when actual or potential conflicts of interest arise, the organization has a process in place under which the affected individual will recognize the potential conflict and advise management or the governing body about the relevant facts so that potential conflicts of interest can be resolved. Ethics provisions in the LSC Act, elaborated on in the LSC bylaws (§ 3.05), pertain only to the outside interests of the Board of Directors. LSC bylaws give the board authority to adopt rules and regulations regarding the conduct of officers and employees in matters of any adverse interest to LSC. At the time of our review, the only conflict-of-interest policy affecting employees was a prohibition against gifts, fees, and honoraria greater than $50. LSC policy also states that officers of the corporation must have any outside compensation approved by the board.
Federal employees are subject to various statutes and regulations that govern ethical conduct, including public financial disclosure requirements and outside earned income and activities limitations under the Ethics in Government Act of 1978, as amended, and restrictions on gifts to federal employees and acceptance of travel and related expenses from nonfederal sources enacted by the Ethics Reform Act of 1989. The Office of Government Ethics provides leadership for executive branch agencies and departments to prevent conflicts of interest on the part of government employees and to resolve conflicts that do arise. The NYSE and the other stock exchanges have adopted corporate governance requirements to aid their listed companies in complying with ethics requirements contained in the Sarbanes-Oxley Act of 2002. NYSE-listed companies must adopt codes of business conduct and ethics for directors, officers, and employees, and post the codes on their Web sites. Under the Sarbanes-Oxley Act and the related implementation guidance, codes of conduct and ethics should address conflicts of interest, confidentiality, protection and proper use of an organization's assets, and compliance with laws and regulations, and should encourage reporting of illegal or unethical behavior. The American Bar Association (ABA) encourages nonprofit organizations to adopt similar policies. During the LSC operations and regulations committee meeting in April 2007, a board member suggested that a future agenda item should be the development of a compliance program that includes a code of conduct. Without such a program, including conflict-of-interest and ethics policies, LSC is at risk that personnel will be unaware of their responsibilities in the area of ethics and conflicts of interest and that incidents of illegal or unethical behavior will occur without being detected.

Although LSC does have a COOP, the plan is not complete or comprehensive. It is the policy of the U.S. government for each agency to have in place a comprehensive and effective program to ensure the continuity of essential federal functions under all circumstances. Today's changing threat environment and the potential for no-notice emergencies, including localized acts of nature, accidents, technological emergencies, and terrorist attacks, have increased the need for COOP capabilities. In this environment, preparing for disasters is an integral part of mitigating risk. Federal Preparedness Circular No. 65 identifies the required characteristics of an effective COOP program, which include maintaining and testing plans for responding to likely catastrophic events. LSC's Office of Information Technology (OIT) does perform a full, weekly backup of data and an incremental daily backup. At the end of each month, the most recent full weekly backup is stored off site; the most recent 12 months of backups are retained. According to the current COOP description that LSC provided, OIT would need to relocate its systems to a remote location should the LSC building become inaccessible. Also, from this description, it appears that system hardware would first need to be retrieved from the LSC building and then transported to and installed in another location. However, the plan specifies neither a specific implementation approach nor a remote location. LSC provided us with meeting agendas from May 2006 and June 2006 regarding emergency responses, but did not provide any additional COOP program information.
Furthermore, there is no indication that OIT conducted any simulations of disruptions to test its established plans. An organization that does not have a tested, comprehensive COOP is vulnerable to unexpected incidents capable of causing great damage. Finally, because LSC does not have a comprehensive risk assessment process, management and the board have not assessed the risks or identified the acceptable levels of risk associated with LSC's current COOP.

LSC's management has not conducted its own assessment or analysis to determine which set of accounting standards—those promulgated by the Financial Accounting Standards Board (FASB), the Governmental Accounting Standards Board (GASB), or the Federal Accounting Standards Advisory Board—is most applicable for LSC to use. The accounting standards that an entity uses determine how the entity records its financial transactions and how the entity presents its financial statements. According to LSC management, in the mid-1990s the former IG determined that LSC's financial reporting should follow the standards issued by GASB, which establishes standards of financial accounting and reporting for state and local governmental entities. However, management, not the OIG, is responsible for the financial statements, for adopting the related accounting policies, and for maintaining an adequate and effective system of accounts that will, among other things, help ensure the production of proper financial statements. In response to our inquiries about LSC's selection and use of those standards in its accounting and preparation of its financial statements, neither LSC management nor the current IG was able to provide us with an analysis or the primary technical reasons why LSC is currently using GASB standards, which are normally intended for use by state and local governments. During the April 2007 meeting of the finance committee, a discussion was held on whether the corporation should be using GASB or FASB standards for its accounting. The Treasurer informed the committee members that his current opinion was that LSC should be using FASB standards instead of GASB standards. It was agreed that further discussion would take place between the Treasurer and OIG staff and that the committee would receive an update at the next committee meeting in July 2007.

In recent years, governance and accountability processes have received increased scrutiny and emphasis in the nonprofit, federal agency, and public company sectors as a result of governance and accountability breakdowns, most notably the public company financial scandals that led to the enactment of the Sarbanes-Oxley Act of 2002. Public companies now operate under strengthened governance and accountability standards, including requirements for ethics policies and improved internal controls. The federal government and nonprofit sectors have followed this lead and established new standards and requirements for improved internal control reporting, governance, and accountability. For nonprofit corporations using funding from taxpayers and donors, effective governance, accountability, and internal control are key to maintaining trust and credibility. Governance and accountability breakdowns result in a lack of trust from donors, grantors, and appropriators, which could ultimately put funding and the organization's credibility at risk. Since its inception over 30 years ago, LSC's governance and accountability requirements, including its financial reporting and internal control requirements, have not changed significantly.
Further, LSC’s board and management have not kept pace with evolving governance and accountability practices. As a result, LSC’s current practices have fallen behind those of federal agencies, U.S. government corporations, and other nonprofit corporations. The current accepted practices of federal agencies, U.S. government corporations, and nonprofit corporations provide a framework for identifying standards that can most effectively be used for strengthening LSC’s governance and accountability. Effectively utilized, current, accepted governance and accountability practices are necessary to provide strong board oversight and effective day-to-day management of LSC’s performance. In addition, NYSE listing standards and the Conference Board provide widely accepted governance standards that can be applied to public companies and nonprofit corporations to improve governance structures and practices. Because LSC’s board and management have not kept pace with the modernization of practices in federal entities and other nonprofit corporations, many opportunities exist to improve and modernize existing processes. By updating and strengthening its governance and accountability structures, LSC can increase assurance that federal funds are being properly spent and its operations are effectively carried out to meet its mission. Since the LSC Act was enacted in 1974 and last comprehensively amended and reauthorized in 1977, new laws governing federal agencies, U.S. government corporations, and public companies have been enacted to strengthen governance and accountability requirements. Therefore, Congress should consider whether LSC could benefit from additional legislatively mandated governance and accountability requirements, such as financial reporting and internal control requirements, modeled after what has worked successfully at federal agencies or U.S. government corporations. There are different options available to Congress for such a mandate. Congress could maintain LSC’s current organizational structure as a federally chartered and federally funded, private, nonmembership, and tax-exempt D.C. nonprofit corporation and enact permanent legislation to require LSC to implement additional governance and accountability requirements. Alternatively, Congress could enact legislation to convert LSC to a federal entity (such as a U.S. government corporation subject to the Government Corporation Control Act) or an independent federal agency that is required to follow the same laws and regulations as executive branch agencies. In the statute establishing LSC as a federal entity, Congress could specifically exempt LSC from certain requirements that would otherwise apply to that type of federal entity in order to further special policy considerations particular to LSC. Through our evaluation of LSC’s governance and accountability practices, we identified opportunities for the LSC board and management to improve their current governance and accountability practices. 
In order to improve and modernize the governance processes and structure of LSC, we recommend that the LSC Board of Directors take the following eight actions:

establish and implement a comprehensive orientation program for new board members to include key topics such as fiduciary duties, IRS requirements, and interpretation of the financial statements;

develop a plan for providing a regular training program for board members that includes providing updates on changes in LSC's operating environment and relevant governance and accountability practices;

establish an audit committee function to provide oversight of LSC's financial reporting and audit processes, either through creating a separate audit committee or by rewriting the charter of its finance committee;

establish a compensation committee function to oversee compensation matters involving LSC officers and the overall compensation structure, either through creating a separate compensation committee or by rewriting the charter of its annual performance review committee;

establish charters for the Board of Directors and all existing and any newly developed committees to clearly establish the committees' purposes, duties, and responsibilities;

implement a periodic self-assessment of the board's, the committees', and each individual member's performance for purposes of evaluating whether improvements can be made to the board's structure and processes;

develop and implement procedures to periodically evaluate key management processes, including, at a minimum, processes for risk assessment and mitigation, internal control, and financial reporting; and

establish a shorter time frame (e.g., 60 days) for issuing LSC's audited financial statements.

In order to improve and modernize key management processes at LSC, the president and executive committee should take the following four actions:

conduct and document a risk assessment and implement a corresponding risk management program as part of a comprehensive evaluation of internal control;

with the board's oversight, evaluate and document relevant requirements of the Sarbanes-Oxley Act of 2002 and practices of the NYSE and ABA that can be used to establish a comprehensive code of conduct, including ethics and conflict-of-interest policies and procedures for employees and officers of the corporation;

establish a comprehensive and effective COOP program, including conducting a simulation to test the established program; and

conduct an evaluation to determine whether GASB standards are the appropriate financial reporting standards for LSC's annual financial statements.

We provided copies of the draft report to LSC's Board of Directors and management for comment prior to finalizing the report. We received written comment letters from the Chairman, on behalf of LSC's Board of Directors, and from LSC's President, on behalf of LSC's management (see apps. V and VI). Both the Chairman and the President expressed their commitment to achieving strong governance and accountability and outlined the actions that LSC's board and management plan to take in response to our recommendations. LSC management provided technical comments that were incorporated into the report as appropriate. The Chairman of LSC's board expressed the board's agreement to take action to address each of the recommendations we made to the board.
LSC’s president on behalf of management provided a comment letter where management fully agreed with our recommendations dealing with financial reporting standards, COOP, and code of conduct, and expressed commitment to further action “in the spirit of” our recommendation dealing with conducting and documenting a risk assessment and implementing a corresponding risk management program as part of a comprehensive evaluation of internal control. LSC’s President also included some clarifications to our draft report. First, LSC management stated that “the draft report does not address the existence of congressional oversight,” and provided additional context regarding LSC’s congressional oversight. Our draft report included a discussion of congressional oversight through LSC’s budget process and the appropriations process. In our final report, we included a broader description of LSC’s congressional oversight. Second, LSC management points out that LSC provides certain whistleblower protection statements in its employee handbook regarding communicating with the OIG. We added language to our final report to reflect the existence of such protection under the IG Act. Third, the LSC President stated that the OIG did not find conflicts of interest related to the acting special counsel and was troubled by the references in our report to potential conflicts of interest. In our report, we included information about the IG’s finding that LSC’s hiring of acting special counsels from grantee organizations represented a potential conflict of interest. Our report also noted that the board currently does not have an ethics committee and there is no code of conduct for LSC employees. Both LSC’s Chairman and President commented on the matter that we presented for congressional consideration—that Congress should consider whether LSC could benefit from additional legislatively mandated governance and accountability requirements. In addition, in their respective letters, LSC’s Chairman and President both provided their views that LSC’s governing statutes are appropriate and have worked well and stated that many of the governance recommendations could be accomplished without changing the statutory framework of LSC. As we noted, Congress chartered LSC over 30 years ago as a private corporation for certain policy reasons with governance and accountability requirements that existed at that time as a unique private corporation in response to certain policy considerations. While federal agencies and government corporations have been subject to strengthened governance and accountability requirements over recent years, LSC has not kept up with evolving reforms aimed at strengthening internal control over an organization’s financial reporting process and systems, with LSC’s board’s practices falling short of modern board practices and LSC not keeping up with current management practices. Therefore, we presented the options of amending LSC’s governing statutes to improve governance and accountability requirements or converting LSC to a federal entity, which would include compliance with related governance and accountability requirements. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies to other appropriate congressional committees, the president of LSC, and the LSC Board of Directors. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9471 or franzelj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Although low-income people had been turning to local legal aid societies throughout the United States since the 19th century for assistance with their civil legal problems, in the 1960s President Lyndon B. Johnson declared poverty to be a national problem and initiated a "War on Poverty" to make federal resources available to support local antipoverty programs, such as the legal assistance provided by legal aid societies. The first War on Poverty legislation, the Economic Opportunity Act of 1964, established the now-defunct Office of Economic Opportunity (OEO) within the Executive Office of the President to administer the War on Poverty programs, including what would become the Legal Services Program, the predecessor to the current Legal Services Corporation (LSC). The OEO's Legal Services Program activities soon generated political controversy, and by the early 1970s there was a general consensus that the OEO's Legal Services Program should be moved out of the Executive Office of the President. A number of different structures were proposed. For example, there were proposals to move the Legal Services Program into an executive department, such as the Department of Justice, the Department of Housing and Urban Development, or the predecessor to the current Department of Health and Human Services. In addition to concerns about political interference, critics of placing the function in an executive department raised concerns about decreased program visibility, reduced responsiveness to client needs, and the objectives of the program being subordinated to the department's mission. Another proposed organizational home was the Judiciary, especially the Administrative Office of the United States Courts, but critics argued that the Judiciary was already overburdened with work and faced frequent funding problems. Four alternative organizational structures were suggested that took into consideration accountability to Congress and the public while promoting political independence, permanence, program stability, operational flexibility, and attorney independence to represent clients consistent with high professional standards. The four alternative organizational structures proposed were a federal block grant program, an independent agency in the executive branch, a U.S. government corporation, and a private nonprofit corporation. Examples of such organizations today include, respectively, (1) the Temporary Assistance for Needy Families Program and the Community Development Block Grant Program, (2) the National Science Foundation and the National Foundation on the Arts and the Humanities, (3) the Millennium Challenge Corporation and the Corporation for National and Community Service (AmeriCorps), and (4) the Corporation for Public Broadcasting (CPB).
Ultimately, consensus in the early 1970s coalesced around an entity modeled after the CPB, which was a private, nonmembership, nonprofit corporation in the District of Columbia with federal funding that was federally chartered by the Public Broadcasting Act of 1967 to "facilitate the development of public telecommunications and to afford maximum protection from extraneous interference and control." The CPB federal charter created a nine-member, bipartisan board of directors that is appointed by the President of the United States with the advice and consent of the U.S. Senate. The board manages CPB to accomplish its primary mission of providing federal funding via grants and contracts to public telecommunications and production entities in order to promote the expansion and development of public telecommunications with high-quality, diverse programming responsive to local needs and furthering instructional, educational, and cultural purposes. CPB, which was last reauthorized in 1992, is also funded through annual appropriations. By transferring the Legal Services Program to a federally funded, private nonprofit corporation modeled after CPB, supporters of this type of organizational entity hoped to achieve the goal of greater operational flexibility and protection from political pressure from all levels of government while retaining accountability to Congress and the public. Supporters also hoped to encourage private donations to LSC, so unlike CPB's federal charter, the Legal Services Corporation Act of 1974 (LSC Act) provides that LSC shall be eligible to be treated as a charitable corporation exempt from federal taxation. Under the Internal Revenue Code, tax-exempt status basically means that the corporation is organized and operated exclusively for charitable purposes, does not attempt to influence legislation, does not campaign on behalf of candidates for public office, and does not allow any of its net earnings to inure to the benefit of any individual. To maintain tax-exempt status, organizations must annually file with the Internal Revenue Service (IRS) a Form 990, Return of Organization Exempt From Income Tax, which is available for public inspection and includes such information as the organization's gross income, assets and liabilities, and compensation paid to high-level managers. A number of the provisions in the LSC Act are consistent with IRS's requirements for tax-exempt status. For example, the LSC Act's purpose of providing civil legal assistance to low-income people qualifies as charitable, and the LSC Act prohibits LSC from engaging in certain political activities, such as activities that would influence the passage or defeat of any legislation at the local, state, or federal level, and from making LSC resources available to support any political party or the campaign of any candidate for public office. The LSC Act also states that LSC has no power to issue stock and prohibits any LSC income or assets from inuring to the benefit of any director, officer, or employee, except as reasonable compensation for services or reimbursement for expenses. By making and keeping LSC a tax-exempt organization, the LSC Act prevents federal tax dollars from being spent on paying federal taxes and thus permits LSC to use its funds for the charitable purpose set out in the LSC Act.
Congress enacted the LSC Act in 1974 to transfer the functions of the Legal Services Program from the Executive Office of the President into a private, nonmembership, nonprofit corporation with tax-exempt status that would be federally chartered in the District of Columbia and be authorized to receive annual federal appropriations to fund its operations supporting civil legal assistance to low-income people in communities throughout the United States. In carrying out their functions, corporate directors must fulfill fiduciary duties of care, loyalty, and good faith. Boards may delegate the day-to-day management of the company to the chief executive officer (CEO) and other senior management, but board members retain responsibility for oversight and monitoring of any delegated functions. Under state corporate law, directors owe fiduciary duties to the corporation and its shareholders: the duty of care, which is the duty to exercise appropriate diligence and make informed decisions; the duty of loyalty, which is the duty to act without conflict and always put the interests of the corporation before those of the individual director or of other individuals or organizations with which the individual director is affiliated; and the duty to act in good faith, which is the duty to act with honesty of purpose and in accordance with evolving corporate governance best practices. A strong and effective board of directors should have a clear view of its role in relationship to management. How the board organizes itself and structures its processes will vary with the nature of the business, the business strategy, the size and maturity of the company, and the talents and personalities of the CEO and directors. Circumstances particular to the corporate culture may also influence the board's role. The board focuses principally on guidance and strategic issues, the selection of the CEO and other senior executives, risk oversight and performance assessment, and adherence to legal requirements. Management implements the business strategy and runs the company's day-to-day operations with the goal of increasing shareholder value for the long term. The board should have a set of written guidelines in place to articulate corporate governance principles and the roles and responsibilities of the board and management. These guidelines should be reviewed at least annually. By elaborating on directors' basic duties, the guidelines help the board and its individual members understand their obligations as well as the general boundaries within which they should operate. The effectiveness of the board ultimately depends on the quality and timeliness of the information received by directors. Each board and its management should agree on the type of information the board needs to make informed decisions and perform its oversight function. This should include material on business and financial performance, strategic issues, and information about material risks and other significant matters facing the company. Information for board meetings should be distributed far enough in advance of the meetings to permit directors to read, absorb, and consider it. Besides formal processes, the board and management should develop informal communication and reporting channels.
Boards should consider the following best practices to help ensure effective decision making and the exchange of information and ideas at meetings of the full board or its committees: Independent directors should be able to place issues on the board agenda, with time for adequate discussion and consideration, and determine the type and quality of information flow required for effective board action. Last-minute add-ons to the agenda, especially for weighty issues, should be discouraged. The lead/presiding director, if there is one, should take responsibility for surfacing issues that affect the business and need to be presented to the board for discussion and/or action, whether in regular or executive sessions. Management should provide information that effectively explains the company's operating and financial status, as well as other significant issues facing the company and the board. Appropriate feedback mechanisms between management and the board should be developed to ensure that the materials are useful, timely, and of adequate depth. Meeting materials should contain a cover letter highlighting the most important issues for directors' consideration. Meetings should be structured to encourage participation and dialogue among the directors. Directors have an obligation to maintain near-perfect attendance at meetings and to participate actively in them, including asking the hard questions. The CEO should expose directors to senior management team members and operating (line) management at meetings and field trips so that directors can, with knowledge informally acquired from management, further delve into issues necessary to carry out their functions. According to New York Stock Exchange (NYSE) rules, executive sessions should (1) be held without management present; (2) be regularly scheduled to prevent negative inferences; (3) be disclosed in the annual proxy statement, including the name of the director presiding at the executive sessions, if one is chosen, or the procedure by which the presiding director is selected; and (4) be accompanied by disclosed mechanisms for interested parties to make their concerns known to the nonmanagement directors as a group. NASDAQ's rules require regularly convened executive sessions of the independent directors. In addition, according to best practices identified by the Conference Board Directors' Institute, executive sessions should promote open dialogue among the independent members and the free exchange of ideas, perspectives, and information; have a feedback mechanism to the CEO for important issues that may surface (the lead or presiding director can take the lead in providing the CEO feedback); be scheduled at regular intervals (most commonly following each full board meeting, even though some boards may also hold a short pre-meeting executive session) to eliminate any negative inferences from convening these sessions; and be supplemented by additional off-line informational channels (such as dinners before board meetings) to help build trust and relationships among the independent directors. An independent, vigorous, and diligent board of directors is crucial to good corporate governance. Boards must move from their traditional advisory roles to become active fiduciaries in the exercise of their oversight responsibilities. From this standpoint, independence is essential. Although defined by legislative and regulatory standards, a director's independence (in thought and action) from management influence should always be evaluated qualitatively and on a case-by-case basis.
For the past few years, issuers have been required to disclose information in Securities and Exchange Commission (SEC) filings regarding director independence and other corporate governance matters. The commission has recently consolidated these requirements under new Item 407 of Regulation S-K. Registrants must disclose information about director independence; nominating, audit, and compensation committees; and shareholder communications by the following means: identifying each independent director of the company (and the nominees for director when the information is being presented in a proxy or information statement), as measured by the company's definition of independence; identifying any members of the compensation, nominating, and audit committees whom the company has not identified as independent under such definition; describing, by specific category or type, any related party transactions, relationships, or arrangements not disclosed pursuant to Item 404 that were part of the board of directors' consideration in determining that the independence standard has been met as to each independent director or director nominee; providing the number of board meetings during the fiscal year and certain attendance information, including the board's policy on attendance at annual shareholder meetings and attendance information with respect to the last annual meeting; identifying any standing audit, nominating, and compensation committees, their membership composition, and the number of meetings, together with certain descriptive information regarding such committees; and disclosing information about the audit committee's independence and expertise and about the process for shareholders to send communications to the registrant's board of directors; if there is no process, the registrant must disclose the basis for the board's view that it is appropriate not to have such a process and, if all shareholder communications are not sent directly to board members, a description of the process for determining which communications will be provided to board members. The composition and skill set of a board should be linked to the company's particular challenges and strategic vision. As companies develop and experience changed circumstances, the desired composition of the board may be different and should be reviewed. The composition of the board should be tailored to meet the needs of the company and its stage of development. There should be a mix of director knowledge and expertise, including in areas such as strategic and business planning and industry knowledge. As with any group working together, boardroom relationships are difficult to predict, but an effective director asks the hard questions, works well with others, is available when needed, is alert and inquisitive, contributes to committee work, challenges management's assumptions when needed, speaks out appropriately at board meetings, makes contributions to long-range planning, and provides an overall contribution to the board and the committees on which he or she serves. According to the 2006 edition of the annual Directors' Compensation and Board Practices report by the Conference Board, the median board size, depending on the industry, ranges from 9 to 11 members. The median number of outside directors varies from 8 to 10. The 2007 edition of the Board Practices/Board Pay report noted that 72 percent of Standard & Poor's 1,500 companies had 9-member boards in 2005, down from 12 in 2003.
Boards need to be large enough to accommodate the necessary skill sets but still small enough to promote cohesion, flexibility, and effective participation. "When you've got a 20- or 30-person corporate board," argued one member of the Conference Board Directors' Institute, "it's one way of ensuring that nothing is ever going to happen that the CEO doesn't want to happen. If you've got a small board—8 to 10 people—people do get involved." The NYSE requires that a list of director qualification standards be included in the company's corporate governance guidelines. These standards should, at a minimum, reflect the NYSE independence requirements. Companies may also address other substantive qualification requirements, including policies limiting the number of boards on which a director may sit and specifying director tenure, retirement, and succession criteria. All directors must devote the proper amount of time and attention to develop the broad-based and specific knowledge required to fulfill their obligations. To ensure a high level of commitment, directors should assess carefully and guard against potential entanglements, such as service on an excessive number of boards; prepare for and attend all board and committee meetings and consider the travel requirements for these meetings (in particular for foreign-based directors); participate actively and effectively at meetings; develop and maintain a high level of knowledge about the company's business; keep current in the director's own specific field of expertise; and develop broad knowledge about the role and responsibilities of directors, including legal responsibilities. Boards should adopt a structure providing nonmanagement directors with the leadership necessary for them to act independently and perform effectively. This structure could include separating the positions of chairman and CEO; creating a lead independent director; or, in the case of a former employee acting as chairman, appointing a presiding director from among the independent directors. Any structural alternative a board wishes to adopt should strengthen the independence and oversight role of the board, provide the nonmanagement directors with ultimate authority over information flow to the board, and improve the relationship and flow of information between the board, the CEO, and senior management. Boards should establish committees (e.g., nominating/governance, audit, compensation) that will enhance the overall effectiveness of the board by ensuring focus on and oversight of matters of particular concern. Statutory law, SEC rules, and stock exchange listing standards require that certain committees be composed solely of directors who meet specified independence standards. An effective committee structure should require that each committee have a charter delineating the committee's jurisdiction, duties, and responsibilities; that each charter include only duties that can actually be accomplished; and that each charter be reviewed at least annually. Hiring the CEO and planning for CEO succession are two of the most important responsibilities of the board. The board should institute a CEO succession plan and selection process overseen by one of its independent committees or by a designated director or group of directors.
A successful succession planning process will be driven and controlled by the board, involve input from the CEO and other key employees, be easily executed in the event of a crisis, be tied to the corporate strategy, be geared toward finding the right leader at the right time, develop talent pools throughout the managerial ranks of the company, and avoid a "horse race" mentality that may lead to the loss of key officers when the new CEO is chosen. LSC is subject to grants management requirements that are stronger than those of other Washington, D.C., nonprofit corporations but somewhat less rigorous than those governing federal entities, including requirements related to the grantor's audits of grant recipients, administration of grants, and application of cost principles to grants. (See table 6.) In 1996, Congress amended the LSC Act on a fiscal year basis through certain administrative provisions included in the fiscal year 1996 appropriations act for LSC (LSC 1996 Amendments). The LSC Act requires the LSC board to ensure that each grant recipient is subject to an annual financial audit and to maintain a copy of that audit report at its headquarters for at least 5 years. The LSC 1996 Amendments added further requirements related to grant recipient audits, requiring that each grant recipient audit be conducted in accordance with generally accepted government auditing standards (GAGAS) and guidance established by the LSC Office of Inspector General (OIG). The grant recipient audit report must state whether (1) the grant recipient's financial statements fairly present its financial position and results of operations in accordance with generally accepted accounting principles (GAAP); (2) the grant recipient has internal control systems that provide reasonable assurance that it is managing its funds, LSC and otherwise, in compliance with federal laws and regulations; and (3) the grant recipient has complied with federal laws and regulations applicable to funds received from LSC or other sources. The LSC 1996 Amendments include other grant management provisions. For example, the LSC 1996 Amendments require the board to select LSC grant recipients through the implementation of a system of competitive awards, including such selection criteria as (1) the demonstration of an understanding of client legal needs and the capability of serving such needs; (2) the quality, feasibility, and cost-effectiveness of the proposed plan for delivery of legal assistance; and (3) LSC's past experience with the applicant, including the record of past compliance with LSC requirements. The LSC 1996 Amendments require the board to ensure that no grant recipient uses LSC funds for any litigation activity in providing client legal services unless certain recordkeeping requirements are met. For all cases or matters, the LSC 1996 Amendments require the board to obtain the grant recipient's agreement to maintain timekeeping records. Additionally, the LSC 1996 Amendments require the board, before providing funding to a grant recipient, to ensure that the grant recipient enters into a contractual agreement to be subject to all federal laws relating to the proper use of federal funds (i.e., not using federal funds for fraud, waste, or abuse) and that for such purposes LSC shall be considered a federal agency and its grant funds shall be considered federal funds. Finally, LSC has issued regulations on its administration of grants, including provisions establishing cost standards and procedures.
Requirements for audits of grants provided by federal agencies and U.S. government corporations are found in the Single Audit Act, as amended, which established uniform audit requirements for state and local governments and nonprofit organizations that receive grants or other forms of federal financial assistance. In addition to establishing uniform audit requirements, the Single Audit Act is intended to "promote sound financial management, including effective internal controls, with respect to Federal awards administered by non-Federal entities" and "promote the efficient and effective use of audit resources." The Office of Management and Budget (OMB) has issued regulations implementing the Single Audit Act in OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations (rev. June 27, 2003). Under the Single Audit Act and its implementing regulations, grant recipients generally must arrange annually for an independent auditor to conduct an audit in accordance with GAGAS and prepare a report on the grant recipient's financial statements and schedule of expenditures, internal controls, and compliance with laws and regulations. The auditor must report whether (1) the financial statements are presented fairly in all material respects in conformity with GAAP and (2) the schedule of expenditures of the grants is presented fairly in all material respects in relation to the financial statements taken as a whole. With respect to internal controls, the auditor must obtain an understanding of each of the grant recipient's major programs, assess control risk, and perform tests of the controls. The auditor must also determine whether the grant recipient has complied with the provisions of laws, regulations, and contracts or grants related to the grant that have a direct and material effect on each major program. The Single Audit Act requires each grantor federal entity to assess the quality of such audits and monitor the grant recipient's use of the federal funds received pursuant to the grant. The Single Audit Act also requires any auditor of a grant recipient to provide access to the auditor's workpapers in response to a request from the grantor federal entity or the Comptroller General as part of either's oversight activities. In addition to providing guidance on audits of grant recipients of federal entities, OMB uses the authority it possesses under a number of statutes to issue guidance on uniform administrative requirements for federal grants, which each federal agency and U.S. government corporation must implement by promulgating entity-specific regulations. OMB has issued two different circulars for grants to different types of entities: OMB Circular No. A-102 applies to grants to state and local governments, and OMB Circular No. A-110 applies to grants to institutions of higher education, hospitals, and other nonprofit organizations. These circulars provide for the use of common forms, such as applications, and common standards, such as grant recipient financial reporting, socioeconomic policies, and grantor monitoring and oversight responsibilities. OMB has also issued guidance providing cost principles for federal entities to use in administering their grants. In three separate circulars, OMB sets out principles to determine the applicability of costs incurred by three groups of entities to federal grants. OMB Circular No. A-87 establishes cost principles for state, local, and tribal governments, whereas OMB Circular Nos.
A-21 and A-122 establish such principles, respectively, for institutions of higher education and nonprofit organizations. Unlike most independent federal agencies and wholly owned government corporations, LSC is not subject to a wide range of federal laws and regulations that govern the acquisition and management of property and services, such as the Federal Acquisition Regulation (FAR) or the Federal Travel Regulation (FTR). (See table 7.) As a D.C. nonprofit corporation, LSC has few limitations on its acquisition, management, disposition, and contract activities in relation to real and personal property and services. Under the D.C. Nonprofit Corporation Act, it can acquire any interest in real or personal property by purchase, gift, lease, or contract and then "own, hold, improve, use and otherwise deal in and with" such property. LSC can also dispose of any property interest through sale, mortgage, lease, exchange, transfer, or any other suitable method. LSC also has the power to acquire services by making contracts and incurring liabilities. In procuring property and services, most independent federal agencies and wholly owned U.S. government corporations are subject to a number of laws and regulations, including the Public Buildings Act of 1959, the Federal Property and Administrative Services Act of 1949, the Office of Federal Procurement Policy Act, the Competition in Contracting Act of 1984, the FAR, and the Federal Management Regulation (FMR). These laws and regulations set out authorities, requirements, and standards for most independent federal agencies and U.S. government corporations to manage their acquisition and property systems. Information technology and travel services are important types of property and services that federal and nonprofit entities need to acquire. Federal agencies and wholly owned U.S. government corporations, but not LSC, are subject to federal governmentwide management laws in these areas. The Clinger-Cohen Act of 1996 governs information technology acquisitions by federal agencies and wholly owned U.S. government corporations, requiring, among other things, the design and implementation of a process for maximizing the value, and assessing and managing the risks, of the entity's information technology acquisitions, as well as the creation of a chief information officer position to help manage this process. Federal agencies and wholly owned U.S. government corporations, but not LSC, are also subject to statutory requirements for travel by federal civilian employees, as well as to the implementing FTR, promulgated by the General Services Administration, which is intended to regulate travel "in a manner that balances the need to assure that official travel is conducted in a responsible manner with the need to minimize administrative costs." For example, the FTR provides rules on when government employees may use first-class or business-class airline accommodations. Under the D.C. Nonprofit Corporation Act, the LSC board possesses broad powers in relation to its officers, employees, and other agents, with only limited restrictions imposed on this power by the LSC Act and other D.C. statutes. (See table 8.) Unlike federal agencies, LSC is not subject to the laws in the U.S. Code relating to the executive branch workforce.
For example, like directors of other private nonprofit, tax-exempt corporations, the LSC directors have the power to determine the rates of compensation of LSC's officers and employees, so long as the compensation is not so high that it might constitute prohibited personal inurement. In one of its few human resources restrictions, however, the LSC Act specifically makes LSC subject to certain laws governing pay and benefits for civilian employees of federal agencies and wholly owned U.S. government corporations. The LSC Act does so by imposing on compensation for any LSC officer or employee a ceiling that is linked to a federal pay schedule under federal law: level V of the Executive Schedule, which in calendar year 2006 was $133,900. The LSC Act also treats LSC as a federal entity for purposes of personnel participation in specified federal employee benefits programs, to which LSC is required to make contributions at the same rates applicable to federal employers. Unlike the employees of LSC and other Washington, D.C., nonprofit corporations, employees of federal agencies and, to a limited extent, U.S. government corporations enjoy certain protections under the Whistleblower Protection Act when they engage in "whistleblowing," which involves reporting evidence of illegal or improper federal employer activities to the relevant authorities. For example, federal agency and U.S. government corporation supervisors may not take disciplinary action against an employee for disclosing information that the employee reasonably believes evidences gross mismanagement, a gross waste of funds, an abuse of authority, or a substantial and specific danger to public health or safety. There is no equivalent statutory provision for employees of Washington, D.C., nonprofit corporations, such as LSC or CPB. Under Washington, D.C., law, however, if a D.C. nonprofit corporation terminates an employee because he or she disclosed information of employer misconduct, such as illegal activities, the terminated employee can sue the corporation for wrongful discharge under D.C. law's public policy exception to the at-will employment doctrine, under which at-will employees may otherwise be terminated at any time for any reason. Furthermore, LSC employees, like those of CPB and federal entities subject to the IG Act, enjoy additional protections not available to employees of typical D.C. nonprofit corporations. Under the IG Act, the IG must not, without the employee's consent, disclose the identity of an employee who informs the IG about the possible existence of an activity at LSC constituting a violation of law, rules, or regulations; mismanagement; a gross waste of funds; an abuse of authority; or a substantial and specific danger to the public health and safety. The IG Act also prohibits the LSC employee's manager from retaliating, or threatening to retaliate, against the employee for this communication with the IG, unless the employee provided the information to the IG with knowledge that it was false or with willful disregard for its truth or falsity. Large organizations such as LSC generate print and electronic records and conduct executive meetings as part of their regular course of business. LSC's statutory requirements for access to information are similar to those of federal entities, but its recordkeeping requirements are not as rigorous. However, LSC's requirements for access to information and recordkeeping are stronger than those for other Washington, D.C., nonprofit corporations. (See table 9.)
The LSC Act imposes some limited recordkeeping requirements on LSC, such as a 3-year retention period for records that support its annual financial audit and a requirement to keep copies of reports on grantees. CPB is subject to a similar 3-year retention period for records supporting its annual financial audit, but other Washington, D.C., nonprofit corporations are subject to only minimal recordkeeping requirements, including keeping correct and complete books and records of account and minutes of board proceedings, which do not have to meet any particular standard. Under the federal records management laws, however, the heads of independent federal agencies and wholly owned U.S. government corporations have much broader recordkeeping duties: the creation of records to document all "essential transactions" and the retention of these records for specified time periods depending on the type of transaction documented. For any records that LSC, federal agencies, and U.S. government corporations retain, they must provide the public with access to these records as required by the Freedom of Information Act (FOIA). FOIA requires that federal entities make their records available for public inspection and copying unless one of the listed FOIA exemptions applies, such as the exemptions for records pertaining to medical files, internal personnel practices, or trade secrets. This is one of the handful of provisions in which the LSC Act provides that LSC shall be treated as a federal agency. There is no comparable public right to access corporate records under the D.C. Nonprofit Corporation Act. While CPB is not subject to FOIA, its federal charter does include a records access provision requiring CPB to maintain certain records at its office and to make them available for public inspection and copying. LSC is also subject to the Government in the Sunshine Act (Sunshine Act), which means that all board meetings, including meetings of any executive committee of the board, must be open to public observation. In following the Sunshine Act, the LSC board must follow the procedural requirements for providing adequate notice of meetings, as well as for closing all or a portion of a meeting based on discussion of exempted subject matter, such as personnel matters or pending litigation. In this respect, LSC is no different from other entities subject to the Sunshine Act, which are U.S. government corporations and federal agencies headed by a collegial body, and very different from most D.C. nonprofit corporations, which are subject to no similar requirement. Although not subject to the Sunshine Act, the CPB board has an open meetings requirement that resembles Sunshine Act requirements. While LSC is not subject to "notice-and-comment" rulemaking under the Administrative Procedure Act of 1946 (APA), LSC must provide interested parties with "notice and a reasonable opportunity for comment" on all proposed rules, regulations, and guidelines and must publish such rules, regulations, and guidelines in the Federal Register at least 30 days prior to their effective date. Federal agencies and U.S. government corporations are subject to similar requirements under the APA, whereas D.C. nonprofit corporations have no similar rulemaking requirement for public participation. The following summarizes selected requirements and the authorities that impose them on LSC, federal agencies, and wholly owned U.S. government corporations: Funds used only for authorized purposes. Federal agencies and wholly owned U.S. government corporations: Purpose Statute (31 U.S.C. § 1301(a)). Annual budget. LSC: LSC Act (request made directly to Congress; no content or form requirements; OMB comment and review allowed). Federal agencies: 31 U.S.C. §§ 1105, 1108 (agency budget submitted to the President for inclusion in the Budget of the U.S. Government). Wholly owned U.S. government corporations: 31 U.S.C. § 9103 (Government Corporation Control Act). Financial statements and reports. LSC: LSC Act (report of annual audit of LSC's accounts). Federal agencies: annual audited financial statements (Chief Financial Officers Act of 1990, Government Management Reform Act of 1994, Accountability of Tax Dollars Act of 2002). Wholly owned U.S. government corporations: 31 U.S.C. §§ 9105, 9106 (Government Corporation Control Act). Strategic plans, performance plans, and reports. Federal agencies and wholly owned U.S. government corporations: 5 U.S.C. § 306 (strategic plans) and 31 U.S.C. §§ 1115-1116 (performance plans and reports) (Government Performance and Results Act of 1993). System of internal control and assurances. Federal agencies: 31 U.S.C. § 3512(c), (d) (Federal Managers' Financial Integrity Act of 1982). Wholly owned U.S. government corporations: 31 U.S.C. § 9106 (Government Corporation Control Act). Employment. Title 5 of the U.S. Code (most provisions apply to wholly owned U.S. government corporations, but only some provisions apply to mixed-ownership government corporations). Whistleblower protection. Whistleblower Protection Act (certain provisions). Open meetings. Government in the Sunshine Act (if headed by a multiperson body). In addition to the person named above, F. Abe Dymond; Lauren S. Fassler; Joel I. Grossman; Maxine L. Hattery; Stephen R. Lawrence; Kimberley A. McGatlin; and Matthew P. Zaun made key contributions to this report.
The Legal Services Corporation (LSC) was federally chartered as a private nonprofit corporation to support legal assistance for low-income people in resolving their civil matters, and it relies heavily on federal appropriations. Because of its unique status, its governance and accountability requirements differ from those of federal entities and other nonprofits. This report responds to a congressional request that GAO review the LSC board's oversight of LSC's operations and whether LSC has sufficient governance and accountability. GAO's report objectives are to (1) compare LSC's framework for corporate governance and accountability to others', (2) evaluate LSC's governance practices, and (3) evaluate LSC's internal control and financial reporting practices. We reviewed the LSC Act, legislative history, relevant standards and requirements, and LSC documentation and accountability requirements, and we interviewed board members and staff. Although LSC has stronger federal accountability requirements than many nonprofit corporations, it is subject to governance and accountability requirements that are weaker than those of independent federal agencies and U.S. government corporations. Congress issued LSC's federal charter over 30 years ago. Established with governance and accountability requirements as they existed at the time, LSC has not kept up with evolving reforms aimed at strengthening internal control over an organization's financial reporting process and systems. Rigorous controls are important for the heavily federally funded LSC: during fiscal year 2007, LSC is responsible for the safeguarding and stewardship of $348.6 million in taxpayer dollars. Although no single set of practices exists for both private and public entities, current accepted practices of federal agencies, government corporations, and nonprofit corporations offer models for strengthening LSC's governance and accountability, including effective board oversight of management, its performance, and its use of federal funds and resources. The board members demonstrated active involvement in LSC through their regular board meeting attendance and participation in LSC oversight. Although LSC's Board of Directors was established with provisions in law that may have supported effective operation over 30 years ago, its practices fall short of modern board practices. The LSC board generally provides each new member an informal orientation to LSC and the board, but it does not have consistent, formal orientation and ongoing training with updates on new developments in governance and accountability standards and practice. The current board has four committees, but none is specifically targeted at providing critical audit, ethics, or compensation functions, which are important governance mechanisms commonly used in corporate governance structures. Because it has not taken advantage of opportunities to incorporate such practices, LSC's Board of Directors is at risk of not being able to fulfill its role of effective governance and oversight. A properly implemented governance and accountability structure might have prevented recent incidents of compensation rates in excess of statutory caps, questionable expenditures, and potential conflicts of interest. LSC also has not kept up with current management practices. Of particular importance are key processes in risk assessment, internal control, and financial reporting.
Management has not formally assessed the risks to safeguarding LSC's assets and maintaining the effectiveness and efficiency of its operations, nor has it implemented internal controls or other risk mitigation policies. LSC is also at increased risk that conflicts of interest will occur and not be identified because senior management has not established comprehensive policies or procedures regarding ethical issues that are aimed at identifying potential conflicts and taking appropriate actions to prevent them. Finally, management has not performed its own assessment or analysis of accounting standards to determine the most appropriate standards for LSC to follow.
Decennial census data play a key role in the allocation of many grant programs. In fiscal year 2004, the federal government administered 1,172 grant programs, with $460.2 billion in combined obligations. Most of these obligations were concentrated in a small number of grants. For example, Medicaid was the largest formula grant program, with federal obligations of $183.2 billion, or nearly 40 percent of all grant obligations, in fiscal year 2004. Many of the formulas used to allocate grant funds rely on measures of population, often in combination with other factors. In addition to the census count, the Bureau has programs that produce more current estimates of population and population characteristics, which are derived from the decennial census of population. Grant formula allocations also use the estimated data from the Bureau's postcensal population estimates, the Current Population Survey, and the American Community Survey. Because the decennial census provides population counts only once every 10 years, the Bureau also estimates the population for the years between censuses. These estimates are referred to as postcensal population estimates. They start with the most recently available decennial census data and, for each year, adjust population counts for births, deaths, and migration; a simple illustration of this update appears after this paragraph. Because these population estimates are more current than the decennial population counts, the distribution formulas for federal grants often use these data. For example, the allocation formula for the Social Services Block Grant uses the most recent postcensal population estimates to distribute funds. While the postcensal estimates provide annual data, the Current Population Survey provides monthly data. This survey's sampling design relies on information developed for the decennial census, and its data are revised annually to be consistent with the postcensal estimates. The survey is primarily designed to generate detailed information about the American labor force, such as the number of people unemployed. Data from this survey are also used to allocate funds for programs, for instance, programs under the Workforce Investment Act. Another survey, the American Community Survey (ACS), provides detailed socioeconomic characteristics for the nation's communities. The ACS relies on information developed for the decennial census, and its annual data are controlled to be consistent with postcensal population estimates. Currently, the ACS provides information on communities with populations over 65,000. Data from the ACS are also used to allocate federal funds, such as determining fair market rent levels used in the Section 8 housing voucher program. Because the ACS is to replace the 2010 census long-form socioeconomic data, it is expected that ACS data will be used more extensively in other federal assistance programs in the future. Beginning in 2010, 5-year estimates will be available for areas as small as block groups, census tracts, small towns, and rural areas. Beyond their use by the federal government, the population counts and estimates are also used extensively by state and local governments, businesses, nonprofits, and research institutions. Population-based data drawn from the decennial census, postcensal population estimates, and the ACS play critical roles in the conduct of community development programs undertaken by the federal, state, and local governments.
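To make the postcensal update concrete, the short Python sketch below carries an estimate forward from a census base by adding births and net migration and subtracting deaths each year. It is a minimal illustration only: all component values are hypothetical, and the Bureau's actual methodology is considerably more detailed, estimating components separately by geography and demographic group from administrative records.

```python
# Minimal sketch of a postcensal population update: start from a
# decennial census base and, for each year, add births and net
# migration and subtract deaths. All values below are hypothetical;
# the Bureau estimates these components from administrative records.

census_base = 5_000_000  # hypothetical state count from the decennial census

# (births, deaths, net_migration) for each year after the census
components = [
    (70_000, 45_000, 12_000),
    (69_500, 46_000, 8_000),
    (71_000, 46_500, -3_000),  # net out-migration in year 3
]

estimate = census_base
for year, (births, deaths, net_migration) in enumerate(components, start=1):
    estimate += births - deaths + net_migration
    print(f"Year {year} postcensal estimate: {estimate:,}")
```

Because each year's estimate builds on the last, any error in the migration component compounds over the decade, which is why the gap between estimates and the next census count (the "error of closure" discussed later) tends to grow toward the end of a decade.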
These population-based data are central to the conduct of the federal government's Community Development Block Grant (CDBG) program, the federal government's 13th largest formula grant program, with $3 billion in obligations in fiscal year 2004. Since 1974, this program has provided $120 billion to help communities address a host of urban problems, ranging from poverty and deteriorating housing to population loss and social isolation. Given the breadth of the program's objectives and the diversity of the nation's communities, CDBG employs four formulas to allocate funds among the 50 states, the District of Columbia, and 1,080 local governments. These formulas depend on census data, including total population, individuals in poverty, lagging population growth, households in overcrowded homes, and the number of pre-1940 homes. An accurate census relies on finding and counting people, only once, in the right place and getting complete, correct information on them. Obtaining an accurate count has been a concern since the first census in 1790. Concern about undercounting the population continued through the decades. In the 1940s, demographers began to obtain a more thorough understanding of the scope and nature of the undercount. For example, the Selective Service registration of October 1940 showed 2.8 percent more men than the census count. According to the Bureau, operations and programs designed to improve coverage have resulted in the total undercount declining in all but one decade since the 1940s. These measures of coverage are based on demographic analysis, which compares the census count to birth and death certificates and other administrative data (see fig. 1). Modern coverage measurement began with the 1980 Census, when the Bureau compared decennial figures to the results of an independent sample survey of the population. In using statistical methods such as these, the Bureau began to generate detailed measures of the differences among undercounts of particular ethnic, racial, and other groups. In 1990, the Bureau relied on a Post-Enumeration Survey to verify the data it collected through the 1990 Census. For this effort, the Bureau interviewed a sample of households several months after the 1990 Census and compared the results to census questionnaires to determine whether each sampled person was correctly counted, missed, or double counted in the census. The Bureau estimated that the net undercount, which it defined as those missed minus those double counted, came to about 4 million people. To estimate the accuracy of the 2000 Census, the Bureau conducted the Accuracy and Coverage Evaluation (A.C.E.), an independent sample survey designed to estimate the number of people who were over- and undercounted in the census, a problem the Bureau refers to as coverage error. This evaluation found that in the 2000 Census there was a net overcount. For 2010, the Bureau plans a census coverage measurement program that will, among other things, produce estimates of components of census net and gross coverage error (the latter includes misses and erroneous enumerations) in order to assess accuracy. The accuracy of state and local population estimates may have an effect, though a modest one, on the allocation of grant funds among the states. In our June 2006 report, we analyzed how sensitive two federal formula grants are to alternative population estimates, such as those derived by statistical methods. In that report, we recalculated certain federal assistance to the states using the A.C.E.
population estimates from the 2000 Census, as well as the population estimates derived from the Post-Enumeration Survey, which was administered to evaluate the accuracy of the 1990 Census. This simulation was done for illustrative purposes only, to demonstrate the sensitivity of government programs to alternative population estimates. While only the actual census numbers should be used for official purposes, our simulation shows the extent to which alternative population counts would affect the distribution of selected federal grant funds and can help inform congressional decision making on the design of future censuses. We selected the Social Services Block Grant (SSBG) as part of this simulation because the formula for this block grant program is based solely on population, making the resulting funding allocations particularly sensitive to alternative population estimates. At a given level of appropriation, any change in a state's population relative to other states' changes would have a proportional impact on the allocation of funds to that state. In fiscal year 2004, the federal government allocated $1.7 billion to states in block grant funds under the program. Recalculating these allocations using statistical population estimates from the 2000 A.C.E., only $4.2 million, or 0.25 percent, of the $1.7 billion in block grant funds would have shifted. The total $1.7 billion SSBG allocation would not have changed because SSBG receives a fixed annual appropriation. In other words, the funds gained by some states would have been offset by funds lost by others. In short, 27 states and the District of Columbia would have gained a total of $4.2 million, and 23 states would have lost a total of $4.2 million. Based on our simulation of the funding formula for this block grant program, the largest percentage changes were for Washington, D.C., which would have gained 2.05 percent (or $67,000) in grant funding, and Minnesota, which would have lost 1.17 percent (or $344,000). For the programs we examined, less than half of a percent of total funding would have been redistributed by using the revised population counts. Figure 2 shows how much (as a percentage) and where SSBG funding in 2004 would have shifted as a result of using statistical population estimates for recalculating formula grant funding by state; a sketch of the underlying zero-sum calculation follows this paragraph. We previously reported that using 1990 adjusted data as the basis for allocations had little relative effect on the distribution of annual funding to states. More recently, we reported that statistical population estimates from the 2000 Census would have shifted a smaller percentage of funding compared with those from the 1990 Census because the difference between the actual and estimated population counts was smaller in 2000. For example, using statistical estimates of the population following the 1990 Census, a total of 0.37 percent of SSBG funds would have shifted among the states in fiscal year 1998. In addition to any impact that inaccuracies in the census count may have on the allocation of federal funds, between decennials, differences between the actual population and population estimates could affect fund allocation. To calculate grant amounts, formula grants generally rely on annual population estimates for each state developed by the Bureau. State populations are estimated by adding to the prior year's population estimate the number of births and immigrants and subtracting the number of deaths and emigrants.
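Because the SSBG appropriation is fixed, reallocating it under an alternative population series is a zero-sum, population-proportional calculation. The Python sketch below illustrates the mechanics with hypothetical three-state figures; the actual simulation used official census counts and A.C.E. estimates for all 50 states and the District of Columbia.

```python
# Zero-sum reallocation of a fixed appropriation in proportion to
# population, as in the SSBG simulation. State populations below are
# hypothetical; the real calculation covered 50 states plus D.C.

APPROPRIATION = 1_700_000_000  # fixed SSBG appropriation, in dollars

def allocate(appropriation, populations):
    """Distribute a fixed appropriation in proportion to population."""
    total = sum(populations.values())
    return {s: appropriation * p / total for s, p in populations.items()}

census_counts = {"A": 10_000_000, "B": 5_000_000, "C": 2_000_000}
alt_estimates = {"A": 10_050_000, "B": 4_990_000, "C": 2_010_000}

base = allocate(APPROPRIATION, census_counts)
revised = allocate(APPROPRIATION, alt_estimates)

shifts = {s: revised[s] - base[s] for s in base}
print(shifts)  # per-state gains and losses
# Gains and losses offset exactly because the total is fixed.
print(round(sum(shifts.values()), 6))  # ~0.0
```

The design point the sketch makes explicit is that only a state's population share matters, not its absolute count: if every state's estimate rose by the same percentage, no funds would shift at all.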
These annual estimates are subject to error, mainly because migration between states and between the United States and other countries is difficult to measure. By the end of the decade, when the census count is taken, a significant gap may have arisen between the population estimate and the census count. We found that by the time of the 2000 census count, the annual estimates of population differed from the 2000 count by about 2.5 percent. This "error of closure" was substantially larger than that for the 1990 census, which was 0.6 percent. We found that correcting population estimates to reflect the 2000 census count redistributes among states about $380 million in federal grant funding for Medicaid, Foster Care, Adoption Assistance, and SSBG. Most of the shift in funding occurred in fiscal year 2003, when federal matching rates for three of the programs were based on population estimates derived from the 2000 census. For the SSBG program, the shift occurred in 2002, when it began using the 2000 census count. Complete and accurate data from the decennial census are central to our democratic system of government. These same data serve as a foundation for the allocation of billions of dollars in federal funds to states and local governments. Because of the importance of the once-a-decade count, it is essential to ensure that it is accurate. Though the overall undercount has generally declined since it was first measured, evaluating the accuracy of the census continues to be essential given the importance of the data, the need to know the nature of any errors, and the cost of the census overall. We continue to monitor the Bureau's progress in this important effort. Mr. Chairman, this concludes my remarks. I will be glad to answer any questions that you, Mr. Turner, or other subcommittee members may have. For further information regarding this statement, please contact Mathew Scire, Director, Strategic Issues, at (202) 512-6806 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this statement included Steven Lozano, Assistant Director; Betty Clark; Robert Dinkelmeyer; Greg Dybalski; Ron Fecso; Sonya Phillips; Michael Springer; and Cheri Truett. Federal Assistance: Illustrative Simulations of Using Statistical Population Estimates for Reallocating Certain Federal Funding. GAO-06-567. Washington, D.C.: June 22, 2006. Data Quality: Improvements to Count Correction Efforts Could Produce More Accurate Census Data. GAO-05-463. Washington, D.C.: June 20, 2005. Census 2000: Design Choices Contributed to Inaccuracy of Coverage Evaluation Estimates. GAO-05-71. Washington, D.C.: November 12, 2004. 2010 Census: Cost and Design Issues Need to Be Addressed Soon. GAO-04-37. Washington, D.C.: January 15, 2004. Formula Grants: 2000 Census Redistributes Federal Funding Among States. GAO-03-178. Washington, D.C.: February 24, 2003. 2000 Census: Coverage Measurement Programs' Results, Costs, and Lessons Learned. GAO-03-287. Washington, D.C.: January 29, 2003. 2000 Census: Complete Costs of Coverage Evaluation Programs Are Not Available. GAO-03-41. Washington, D.C.: October 31, 2002. The American Community Survey: Accuracy and Timeliness Issues. GAO-02-956R. Washington, D.C.: September 30, 2002. 2000 Census: Refinements to Full Count Review Program Could Improve Future Data Quality. GAO-02-562. Washington, D.C.: July 3, 2002.
2000 Census: Coverage Evaluation Matching Implemented as Planned, but Census Bureau Should Evaluate Lessons Learned. GAO-02-297. Washington, D.C.: March 14, 2002. Formula Grants: Effects of Adjusted Population Counts on Federal Funding to States. GAO/HEHS-99-69. Washington, D.C.: February 26, 1999. Formula Programs: Adjusted Census Data Would Redistribute Small Percentage of Funds to States. GAO/GGD-92-12. Washington, D.C.: November 7, 1991.
The decennial census is a constitutionally mandated activity that produces critical data used to apportion congressional seats, redraw congressional districts, and allocate billions of dollars in federal assistance. This testimony discusses (1) the various measures of population used to allocate federal grant funds; (2) how the accuracy of the population count and the measurement of accuracy have evolved, and the U.S. Census Bureau's (Bureau) plan for coverage measurement in 2010; and (3) the potential impact that differences in population estimates can have on the allocation of grant funds. This testimony is based primarily on GAO's issued work in which it evaluated the sensitivity of grant formulas to population estimates. In fiscal year 2000, GAO found that 85 percent of federal government obligations in grants to state and local governments were distributed on the basis of formulas that use data such as state population and personal income. The decennial census is the foundation for measuring the nation's population. It provides a count of the population every 10 years and is the starting point for estimates of population made in the years between censuses. Obtaining an accurate population count through the decennial census has been a concern since the first census in 1790. Concern that the decennial census undercounted the population has continued since then. To measure accuracy, the Bureau since 1940 has used demographic analysis, in which it compares census counts with records of births, deaths, and other administrative information. With the exception of 1990, the Bureau's demographic analysis shows that the extent to which the census undercounted the population has declined. More recently, the Bureau has used statistical techniques in which it compares the census count with the results of an independent sample survey of the population. For 2010, the Bureau plans to use similar statistical techniques to measure the accuracy and coverage of the census. Evaluating the accuracy of the census is essential given the importance of the data, the need to know the nature of any errors, and the cost of the census overall. GAO's prior work has illustrated that the accuracy of state and local population estimates may have some effect on the allocation of grant funds. Specifically, to show the sensitivity of grant programs to alternative population estimates, GAO simulated how two grant program formulas would allocate federal funds to states if population estimates were substituted for census counts. This simulation was done for illustrative purposes only. While only actual census numbers should be used for official purposes, this simulation showed some shifting of grant funds among the states when estimates were used. For example, recalculating allocations of Social Services Block Grant funds using estimates of population for 2000, rather than the census count, would result in shifting $4.2 million, or 0.25 percent, of $1.7 billion in fiscal year 2004 funds. Specifically, 27 states and the District of Columbia would have gained $4.2 million, and 23 states would have lost a total of $4.2 million.
Six organizational entities within NOAA have responsibilities related to its ocean, coastal, and Great Lakes observing systems. NOAA has six line offices, which are responsible for executing the agency's broad mission and programs. Five of those line offices operate and maintain observing systems. Staff offices support the line offices in achieving their missions, and one staff office, the Office of Marine and Aviation Operations, also operates and maintains ocean, coastal, and Great Lakes observing systems. The missions of the six offices with responsibilities related to ocean, coastal, and Great Lakes observing systems are as follows: National Environmental Satellite, Data, and Information Service: Provides timely access to global environmental data from satellites and other sources to promote, protect, and enhance the nation's economy, security, environment, and quality of life. National Marine Fisheries Service: Promotes stewardship of living marine resources through science-based conservation and management and the promotion of healthy ecosystems. National Ocean Service: Provides science-based solutions through collaborative partnerships to address evolving economic, environmental, and social pressures on the nation's oceans and coasts. National Weather Service: Provides weather, water, and climate data, forecasts, and warnings for the protection of life and property and the enhancement of the national economy. Office of Oceanic and Atmospheric Research: Provides the research foundation for understanding the complex systems that support the planet. Office of Marine and Aviation Operations: Delivers effective earth observation capabilities, integrates emerging technologies, and provides a specialized, flexible, and reliable team responsive to NOAA and the nation. This office manages, maintains, and operates NOAA's fleet of ships and aircraft, which the line offices use to gather the data they need to help achieve their missions. NOAA is the lead federal agency responsible for implementing the Integrated Coastal and Ocean Observation System Act of 2009 and has established the U.S. Integrated Ocean Observing System (IOOS) program office. This office is part of NOAA's National Ocean Service and works with 18 federal agencies and 11 regional associations to expand, standardize, and integrate ocean observing systems and data. The U.S. IOOS program relies on the voluntary participation of its federal and regional partners to achieve its coordination objectives, which have focused primarily on increasing data compatibility and integration among the observing systems owned by NOAA, other federal agencies, and regional partners. According to NOAA officials, the program does not manage any of these systems, including the ocean, coastal, and Great Lakes observing systems operated by NOAA. On the basis of our review of NOAA documents and discussions with agency officials, we identified several ways in which integration could take place in the context of an observing systems portfolio. One way is by changing how a portfolio of observing systems is managed. This type of management integration could occur to different degrees along a continuum. At one end of the continuum, each individual observing system would be operated and managed separately by its program manager, with little or no higher-level organizational oversight. This would result in little to no integration because decisions about operating and maintaining each individual system would be made in isolation from one another.
At the other end of the continuum, all the observing systems would be managed centrally, rather than at the individual program level. In this scenario, top-level managers would make decisions about operating and maintaining the organization's portfolio of systems by considering trade-offs among all of the systems the organization manages. Between the two extremes of individual and central management, a number of other management approaches would offer different degrees of integration. For example, several observing systems within the portfolio could be grouped together and managed by a small number of organizational units, or individual programs could manage their systems while also receiving some oversight from higher levels in the organization. Another way that integration can occur is for data collected by observing systems to be integrated. For example, using a standardized data collection format and quality control protocols increases the comparability of data obtained from different observing systems. This would allow data from multiple systems to be combined more efficiently for analysis and to produce products, such as weather forecasts. Integration can also occur by combining physical components (hardware) of various observing systems, for example, by placing additional sensors on an existing platform to collect data on different environmental parameters. NOAA recognized the need to begin taking steps toward integrating its observing systems portfolio in 2002, when the Under Secretary of Commerce for Oceans and Atmosphere initiated a review to examine NOAA's strengths and identify opportunities for improvement. Historically, most of NOAA's observing systems were designed individually to meet specific data collection needs. For example, the Marine Optical Buoy observing system was designed to collect data on an environmental parameter that is used by multiple satellites to validate their ocean color imagery data. Most observing systems also used different data collection formats, which made it difficult to combine and use data from different systems. Management of NOAA's observing systems portfolio was decentralized and, according to NOAA documents, the agency considered its systems to be "stovepiped." The 2002 review generated many recommendations, one of which called for NOAA to centrally plan and integrate all observing systems. The report did not specify what central planning and integration of NOAA's observing systems portfolio would look like or how the agency would accomplish these goals. NOAA officials we spoke with described this review as the catalyst for the actions the agency has taken since 2002 to address observing systems integration issues. We identified 41 ocean, coastal, and Great Lakes observing systems at NOAA. The Office of Oceanic and Atmospheric Research manages 14 of NOAA's ocean, coastal, and Great Lakes observing systems, and the National Ocean Service manages 11 observing systems. Management of the remaining 16 systems is split among four other NOAA offices. Table 1 shows the number of ocean, coastal, and Great Lakes observing systems each NOAA office manages. See appendix III for an alphabetized list of the entire portfolio of NOAA's ocean, coastal, and Great Lakes observing systems we identified and appendix IV for a list and descriptions of the systems organized by the office that manages them.
The majority of NOAA’s ocean, coastal, and Great Lakes observing systems use one of three types of platforms—buoys, ships, or satellites—to collect data on environmental parameters. Buoys are used by 18 of the agency’s ocean, coastal, and Great Lakes observing systems. For example, the Chesapeake Bay Interpretive Buoy System consists of 11 buoys located in the Chesapeake Bay that collect meteorological, oceanographic, and water-quality data used to help protect and restore the area. (See fig. 1 for a map of the buoys’ locations in the Chesapeake Bay.) Five of the observing systems use a combination of NOAA ships or chartered vessels to collect data. The National Marine Fisheries Service’s Fish Surveys, for example, are conducted from ships and collect data on the distribution and abundance of commercially-targeted and ecologically-important fish species. Four of the observing systems use satellites as their primary platform. For example, the Jason Ocean Surface Topography Mission satellite collects data for use in ocean models to predict severe storm intensity. Other ocean and coastal observing systems also use satellites to transmit their data to land-based data centers. Thirty-three of the 41 ocean, coastal, and Great Lakes observing system platforms are located in situ—meaning situated where the data are measured. For these systems, that means in the water. The other systems are located remotely (either on land or in the atmosphere) and look down at the environment they are measuring. Some in situ systems are on a fixed platform, such as a moored buoy. These types of platforms are used to obtain a series of measurements over a long time at the same location. For example, the National Weather Service’s Tropical Atmosphere Ocean buoy array was designed to study and predict climate variations due to the El Niño Southern Oscillation on a year-to-year basis. This moored buoy system is located in the equatorial oceans, and it collects data on several environmental parameters, such as air temperature and sea surface temperature, as shown in figure 2. The data are transmitted via satellite to NOAA and are used to assist in monitoring, predicting, and understanding El Niño and La Niña events. In contrast, other in situ systems use mobile platforms that measure how environmental parameters vary spatially and temporally. For example, the Global Ocean Observing System Argo Profiling Floats use a free-drifting buoy system to collect data on the heat content and salinity of the upper ocean over a predetermined depth (up to 2,000 meters) and cycle time (10 days), as shown in figure 3. Data collected by an Argo float are available within hours of collection, which allows for continuous monitoring of the state of the ocean and provides data for other scientific uses, such as weather forecasting and climate modeling, according to a NOAA official. The agency’s ocean, coastal, and Great Lakes observing systems gather a broad range of data that NOAA uses to create a variety of products. In some cases, NOAA prepares products directly from data from an individual observing system and, in other cases, the agency prepares products by combining and analyzing data from a number of observing systems. Some of NOAA’s products include the following: Forecasts and warnings. NOAA provides weather, water, and climate forecasts and warnings for the nation, its territories, adjacent waters, and ocean areas that are used by the private and public sectors.
The forecasts and warnings are derived from weather prediction models that use the data collected by the ocean, coastal, and Great Lakes observing systems. For example, the Coastal Weather Buoys observing system measures barometric pressure, wind direction, and air and sea temperature, and these data are used in weather prediction models to create forecasts. In addition, data from aircraft and satellites that observe ocean environmental parameters are used to develop severe storm and flash flood warnings. Scientific research. Data collected by the agency’s ocean, coastal, and Great Lakes observing systems are used to support NOAA’s research projects and activities. For example, the Ecosystems and Fisheries-Oceanography Coordinated Investigations observing system was established in 1984 and collects data on ecosystem changes in the Gulf of Alaska, Bering Sea, and Arctic Ocean. Scientists use the data to determine how biological and physical environmental trends, such as the loss of sea ice in the Bering Sea, are affecting Alaska’s marine ecosystems. In addition, the observing systems that comprise the Global Ocean Observing Systems collect long-term measurements of environmental parameters such as sea surface temperature and ocean current speed that can be used to help assess climate change. Navigation tools. Navigation tools are some of NOAA’s most important products, according to NOAA documents, because they help ensure the safe navigation of ports and harbors. Data collected by some of NOAA’s ocean, coastal, and Great Lakes observing systems are used to generate nautical charts. For example, the National Ocean-Shoreline observing system surveys the nation’s shorelines, and the Hydrographic Surveying observing system measures, among other things, the depth between the sea’s surface and the sea floor and the locations of potentially hazardous obstructions. Data from both of these systems are used to create nautical charts of coastlines and ports that are used by ships engaged in maritime commerce. These charts are also used for other activities, such as port and harbor maintenance. Emergency management and response. NOAA uses its ocean, coastal, and Great Lakes observing systems to provide data and information that the agency uses in its emergency response and management efforts. For example, the National Ocean Service’s National Water Level Observation Network observing system collects data on water levels and currents that are used to develop plans to contain oil spills. In addition, the Office of Marine and Aviation Operations uses its ships and aircraft to collect data on severe weather events, such as hurricanes, and in federal disaster response efforts. NOAA estimates it spent an average of approximately $430 million annually to operate and maintain its ocean, coastal, and Great Lakes observing systems in fiscal years 2012 through 2014. That amount is about 9 percent of NOAA’s total annual appropriations for these years. NOAA provided us with estimated cost information because its budget structure and accounting system are not designed to capture costs at the observing system level. Of the 41 ocean, coastal, and Great Lakes observing systems, 4 have line items in NOAA’s budget that identify the amount of money dedicated to operating and maintaining those systems. Funding for the other observing systems comes from the budgets of various NOAA programs that cover multiple program activities, including the observing systems.
For example, the Sustained Ocean Observations and Monitoring program in the Office of Oceanic and Atmospheric Research provides funding for a variety of activities, including operating several observing systems, such as the Global Ocean Observing System Argo Profiling Floats, Ocean Reference Stations, and Ocean Carbon Networks, that are key components of the Global Ocean Observing System. All of the activities and observing systems within the Sustained Ocean Observations and Monitoring program are funded through a single line item in the NOAA budget, which does not identify specific amounts for each of the observing systems. According to NOAA’s estimated cost information, the agency’s annual operations and maintenance costs ranged from about $22 million at the National Marine Fisheries Service to $198 million at the Office of Marine and Aviation Operations in fiscal year 2014 (see table 2). The two line offices with the highest reported annual costs for their observing systems for fiscal years 2012 through 2014 were the Office of Marine and Aviation Operations and the National Ocean Service. For the Office of Marine and Aviation Operations, these costs include operation and maintenance of NOAA’s fleet of specialized ships and aircraft, including the scientific and technical equipment they carry to collect data, according to NOAA documentation. For example, in fiscal year 2013, NOAA’s fleet included 16 active ships that provided 1,702 days at sea and 9 aircraft that provided 2,503 flight hours to programs in each of NOAA’s line offices to support observational activities needed to achieve their environmental and scientific missions. The National Ocean Service’s costs include operating 11 ocean, coastal, and Great Lakes observing systems that collect data NOAA describes as being essential to safe, efficient, and sustainable uses of busy coastal areas and waterways. For example, some of these costs were for the Hydrographic Surveying observing system that provides data that are used primarily to develop nautical charts. The costs also included operating and maintaining the National Water Level Observation Network and the National Current Observation Program observing systems that monitor tides, currents, water levels, and other environmental parameters. The data from these systems are used to create navigational products and provide other services. NOAA’s estimated annual costs vary widely across the different observing systems. In fiscal year 2014, NOAA’s costs ranged from nearly $170 million to operate the NOAA-owned ships that collect ocean and fisheries-related data, to $80,000 for the Ocean Acoustic Monitoring System. That system consists of mobile underwater hydrophones in the Pacific Ocean that are primarily used to listen for earthquakes, but they can also be used for observing some endangered marine species. The 10 systems with the highest estimated annual costs, as shown in table 3, together accounted for approximately 79 percent of NOAA’s annual costs to operate and maintain its ocean, coastal, and Great Lakes observing systems in fiscal year 2014. The largest item in the annual operations and maintenance costs for the 10 systems varies depending on the type of observing system. For example, most of the costs for the Global Ocean Observing System Argo Profiling Float system are for the acquisition of new floats since deployed floats are not retrieved for maintenance. 
But for the Coastal Weather Buoys, the major cost is labor to maintain the systems’ stations, sensors, and instruments. For other systems, such as Fish Surveys, ship time accounts for a majority of the annual operations costs. See appendix V for the annual costs for each of NOAA’s 41 ocean, coastal, and Great Lakes observing systems for fiscal years 2012 to 2014. NOAA has not developed a plan for achieving an integrated observing system, nor has it assessed whether there is unnecessary duplication in its observing systems. NOAA has created an observing systems council to provide a more centralized perspective on observing systems management and is working to obtain the capability to conduct analyses to help understand how to make its portfolio more cost-effective. However, NOAA has not developed a methodology to consistently capture accurate observing systems cost information for use in these analyses. In a variety of plans and reports, NOAA has identified the need to move toward an integrated observing systems portfolio. For example:

Strategic Plan for Systems Integration. This 2004 plan states that NOAA “will manage our processes on a corporate-wide basis to include standardizing processes and practices at the enterprise level, moving away from the current practice of managing at the system level. We will design and plan, engineer and develop, and control and manage at the enterprise level as we move away from stovepipe systems and programs.”

Buoy Recapitalization Strategic Plan. This 2009 plan, which focused on 19 of NOAA’s in situ buoy ocean observing systems, found that allowing individual programs within NOAA’s line offices to make portfolio management and funding decisions has created “an ever-increasing burden on NOAA to sustain a growing number of established systems, while continuing to develop new and innovative ones. In addition, the systems in the pipeline may not be the ones that NOAA deems most critical to the achievement of its future strategy.”

2012 Implementation Plan. NOAA’s 2012 implementation plan for its objective to produce accurate observation data identifies the need for NOAA to integrate the planning, operation, and data management of its observing systems.

NOAA Science Advisory Board Report. An April 2013 report from NOAA’s Science Advisory Board found that there was room for improvement—both in effectiveness and cost-efficiency—for NOAA observing systems. The report said that “given the need to protect and sustain resilient coastal communities, the absence of an integrated coastal observation system is a matter of particular concern.”

Our previous work has found that, in developing new initiatives, federal agencies can benefit from following leading practices for strategic planning. Taking steps toward managing NOAA’s observing systems as an integrated portfolio is a significant initiative for NOAA. The Government Performance and Results Act of 1993 (GPRA), as amended by the GPRA Modernization Act of 2010, was enacted to improve the efficiency and accountability of federal programs, among other purposes. The act, as amended, requires, among other things, that federal agencies develop long-term strategic plans that include agency-wide goals and strategies for achieving those goals. We have reported that these requirements also can serve as leading practices at lower levels within federal agencies, such as at NOAA, to assist with planning for individual programs or initiatives.
Taken together, the strategic planning elements established under the act and associated Office of Management and Budget guidance and practices we have identified provide a framework of leading practices in federal strategic planning. These practices include defining a program’s or initiative’s goals, defining strategies and identifying the resources needed to achieve the goals, and developing and using performance measures to track progress in achieving them. NOAA has not, however, developed a plan that sets forth a clear vision of (1) what it wants its integrated portfolio of observing systems to look like and how it will be managed, (2) its strategy for taking the steps necessary to move toward this target systems architecture and management approach, or (3) how to measure progress toward the goal of an integrated observing systems portfolio. A NOAA official told us that the 2004 strategic plan for systems integration is not being used to guide the agency’s current systems integration efforts, and no other integration plan exists. One of the actions identified in the 2004 strategic plan was the development of a NOAA observing systems architecture master plan. This plan would “allow NOAA leaders to determine which future observing and data management systems NOAA needs to meet our users’ current and evolving environmental information requirements.” According to a NOAA official, this plan was never developed. One of the long-term outcomes identified by the 2012 implementation plan, to be accomplished between fiscal years 2014 and 2018, is the development of a plan for an integrated observing system portfolio that meets the full range of needs of NOAA’s strategic objectives. A NOAA official also told us that the agency is not working on developing this observing systems integration plan. Instead, the NOAA official said the agency has focused on taking tangible actions. For example, the agency established an observing systems council to provide a more centralized perspective on observing systems management. However, without a plan describing what NOAA’s integrated observing systems portfolio should look like and how it will be managed, a strategy for moving toward this target architecture and management approach, and performance measures related to systems integration, NOAA cannot be assured that it has established a framework that will effectively guide and assess the success of its observing systems integration efforts and allow its stakeholders to track the agency’s efforts and hold it accountable. In addition, without a detailed plan and performance measures, NOAA could waste resources, time, and effort in a constrained budget environment by pursuing activities that may not prove effective in creating an integrated observing systems portfolio. Since 2010, some NOAA planning documents have identified the need to reduce systems costs by eliminating unnecessary duplication. For example, one of the agency-wide objectives in NOAA’s 2010 strategic plan was to collect accurate and reliable data through a sustained and integrated observing system.
The plan said that pursuing this objective would include reducing the costs of observations through, among other things, “reducing unnecessarily duplicative capabilities.” Similarly, NOAA’s 2012 implementation plan for its objective to produce accurate observation data included as a short-term outcome “reduced, consolidated, and/or closed observing sites and sensors based on quality and utility of observations supporting all NOAA needs.” NOAA officials did not, however, provide documentation of any observing sites that have been reduced, consolidated, or closed since the agency developed the 2012 implementation plan, even though these outcomes were to be accomplished in fiscal years 2012 or 2013. Although NOAA documents have indicated a need for the agency to reduce unnecessary duplication in observing systems capabilities, NOAA officials we spoke with said they were not aware of any duplication in the geographic distribution of NOAA’s ocean, coastal, and Great Lakes observing systems or of unnecessarily duplicative data being collected. One official said that, given the vastness of the ocean environment, NOAA’s observing systems were more likely under-sampling than over-sampling. In addition, officials we spoke with said the agency had not identified any unnecessary duplication in the data collected by its observing systems portfolio. NOAA officials could not, however, provide documentation of any analyses that NOAA has conducted to support its conclusion that unnecessary duplication does not exist in the data collected by its ocean, coastal, and Great Lakes observing systems. NOAA officials said the agency’s existing analytical tools have limited ability to determine whether there is duplication in the data collected by its observing systems. In our analysis of the 75 environmental parameters measured by NOAA’s ocean, coastal, and Great Lakes observing systems, we identified several parameters that are measured by multiple observing systems, suggesting the potential for unnecessary duplication in the data being collected. For example, as shown in table 4, 21 observing systems, including at least 1 system operated by each of NOAA’s line offices, currently collect data on sea surface temperature. Not all of these systems may need to collect sea surface temperature data to meet the needs of NOAA’s programs. However, according to NOAA officials, there are a variety of reasons why multiple observing systems might measure the same parameters. First, the systems may all be collecting data, such as sea surface temperature, in different locations. Second, in some situations, collecting data on the same environmental parameter is done purposefully to maintain continuity of data collection in the event that one system fails. Third, different observing systems may collect data on the same environmental parameter but at different times or with different degrees of accuracy. While one or more of these reasons, or others, may explain why 21 observing systems collect data on sea surface temperature, NOAA officials could not provide analysis or documentation to show that unnecessary duplication does not exist. NOAA officials told us they do not believe unnecessary duplication in data collection in the agency’s observing systems portfolio is a significant problem requiring further analysis. However, without analyzing whether there is unnecessary duplication or opportunities to reduce or consolidate observations, NOAA would not know if there were opportunities to achieve cost savings.
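The first step of such an analysis, flagging environmental parameters that more than one system measures, can be expressed in a few lines. The following is a minimal sketch, assuming a simple mapping of systems to measured parameters; the system names and parameter sets below are hypothetical, and a real screen would draw on NOAA's observing system architecture data:

```python
from collections import defaultdict

# Hypothetical system -> measured-parameters mapping; a real screen would be
# populated from something like the NOAA Observing System Architecture database.
systems = {
    "Coastal Weather Buoys":     {"sea surface temperature", "wind speed", "barometric pressure"},
    "Argo Profiling Floats":     {"sea surface temperature", "salinity", "ocean heat content"},
    "Tropical Atmosphere Ocean": {"sea surface temperature", "air temperature"},
    "Hydrographic Surveying":    {"bathymetry", "water depth"},
}

# Invert the mapping: for each parameter, which systems measure it?
measured_by = defaultdict(set)
for system, params in systems.items():
    for p in params:
        measured_by[p].add(system)

# Flag parameters measured by more than one system as candidates for review.
for param, who in sorted(measured_by.items()):
    if len(who) > 1:
        print(f"{param}: measured by {len(who)} systems -> {sorted(who)}")
```

As the report notes, overlap alone does not establish unnecessary duplication; the output is only a list of candidates for closer review of location, timing, accuracy, and redundancy needs.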
NOAA’s 2002 agency-wide review recommended the development of a “cross-cut team” to centrally plan and integrate the management of its observing system portfolio. In response to the recommendation, the agency created the NOAA Observing System Council (NOSC), which held its first meeting in July 2003. The NOSC consists of representatives from each of the six line offices, the Office of Marine and Aviation Operations, the Chief Financial Officer, and the Chief Information Officer. The Assistant Secretary of Commerce for Environmental Observation and Prediction chairs the NOSC, with support provided by three vice chairs. The purpose of the NOSC is to provide a more centralized, agency-wide perspective on the management of NOAA’s observing systems. However, according to NOAA officials, individual programs within NOAA’s line offices are still responsible for operating and managing their observing systems. According to NOAA officials and documents, the NOSC coordinates all of the agency’s observing systems portfolio and data management activities and provides recommendations to the NOAA Executive Council on observing system investments. For example, in 2009, the NOSC appointed a team, with representation from each line office and one staff office, to review analyses of 25 alternatives related to NOAA’s observing system portfolio. The team recommended whether or not the various options should be funded. For example, the team recommended funding to expand annual fish surveys that provide data for fish stock assessments conducted by the National Marine Fisheries Service, in part because the assessments were both a line office and an agency priority. According to NOAA documentation, NOAA’s Program Analysis and Evaluation Office planned to use the team’s recommendations as part of the agency’s planning and budgeting process for fiscal years 2013 through 2017. The NOSC, in 2004, established the Technology, Planning and Integration for Observation (TPIO) program office to assist in conducting technical analyses to support the NOSC’s recommendations to NOAA leadership on integrating and improving the cost-effectiveness of the agency’s observing system portfolio. According to NOAA documentation, TPIO identifies (1) NOAA’s observation requirements, (2) NOAA’s observing system and data management capabilities, (3) gaps between observation requirements and capabilities, and (4) observing system and data management solutions to fulfill NOAA’s observational requirements. For example, in 2011, TPIO conducted an analysis of observational needs for six of NOAA’s high-priority program areas, including fisheries management and tide and current data. Specifically, the analysis examined the effect on the programs if NOAA no longer collected data on key observational requirements. TPIO’s analysis found, for example, that four observing systems collect data on key requirements related to tides and currents and that, if these systems were no longer funded, key data on these requirements would no longer be available. The purpose of the analysis was to provide an agency-wide perspective to NOAA leadership on which high-priority programs were critical to fund in an increasingly constrained fiscal environment. NOAA officials told us this particular analysis was conducted only once, as the agency decided to invest in other analytical tools to assist with its decision making. In 2009, the NOSC formed a subcommittee—the Observing Systems Committee—consisting of representatives from NOAA’s line and staff offices.
According to NOAA documentation, the Observing Systems Committee’s purpose is to conduct analyses of the current observing systems portfolio and provide recommendations to the NOSC for changes in the configuration of the portfolio to maximize its benefits. For example, according to NOAA officials, one of the Observing Systems Committee’s first activities was to develop the criteria to identify NOAA’s “observing systems-of-record.” Observing systems-of-record are those systems that the agency deems necessary to meet its mission and that receive sustained funding. As of 2014, the Observing Systems Committee has identified 108 observing systems-of-record, which include nearly all of the 41 ocean, coastal, and Great Lakes systems we identified. The NOSC also established a second subcommittee in 2009—the Environmental Data Management Committee—which includes members from each line office and the Office of Marine and Aviation Operations. This subcommittee (1) coordinates the development of NOAA’s data management strategy related to its observing systems, (2) provides guidance to promote consistent implementation across NOAA, and (3) identifies opportunities to improve the usability of its data. For example, in response to a recommendation from NOAA’s Science Advisory Board, in 2013 the committee developed an agency-wide environmental data management framework that defines the policies, requirements, activities, and technical considerations relevant to the management of NOAA’s observational data and products. To enhance NOAA’s ability to understand and make cost-effective management decisions for its entire observing system portfolio, of which the ocean, coastal, and Great Lakes observing systems are a part, the agency has developed some analytical tools:

NOAA Observing System Architecture database. The database, initially created in 2003, provides a comprehensive list of NOAA’s observing systems and their capabilities. Currently, the NOAA Observing System Architecture database includes documentation on more than 200 observing systems that are operational, planned, in development, used for research purposes, retired, or canceled. About half of the systems are associated with other federal agencies, states and localities, the commercial sector, or foreign countries. NOAA includes these systems in the database because they are deemed important to the agency’s mission, according to NOAA officials. TPIO is responsible for updating and managing the database and uses it to analyze NOAA’s observing system capabilities. In 2008 and 2009, TPIO used the database to help prepare analyses to support observing system investment decisions. For example, TPIO examined whether to fund additional aerial data collection for shoreline and coastal areas for use in nautical maps.

Consolidated Observation Requirements List. The NOSC developed this database in 2003 to create a more formalized process for NOAA to identify, collect, document, and update its observational needs and requirements. After a line office has identified the environmental parameters that it believes need to be measured, it assigns each one a priority level. TPIO and the line office then initiate a validation process for those observing requirements identified as priority-1. According to NOAA documentation, the purpose of the validation process is to confirm that a program needs to observe a specific environmental parameter. After the NOSC reviews and concurs with the validated observation requirement, it is added to the database.
NOAA officials told us the database now contains information on more than 1,000 validated observation requirements. NOAA officials said the agency still needs to document the observing requirements for about 15 percent of the agency’s ocean and coastal programs. According to NOAA officials, they use the Consolidated Observation Requirements List and the NOAA Observing System Architecture databases as analytical tools to help focus their investment decisions on high-priority observation requirements. For example, TPIO can compare observational capabilities listed in the system architecture database with observational requirements listed in the requirements list database to identify gaps between capabilities and requirements.

NOAA Observing Systems Integrated Analysis. The NOAA Observing Systems Integrated Analysis model is a tool intended to help compare the cost-effectiveness of the agency’s observing systems in obtaining mission-critical data and to inform investment decisions. Two external reviews conducted in 2010 at the request of NOAA found that the agency had developed tools such as the NOAA Observing System Architecture and Consolidated Observation Requirements List databases, but that there was no mechanism to evaluate the observing systems portfolio agency-wide. As a result, NOAA developed a pilot integrated analysis model, known as NOSIA-I, which was completed in December 2011. The NOSC, in March 2012, decided to expand the pilot model so it could analyze the agency’s entire observing systems portfolio, including operational observing systems and satellites under development. As of September 2014, the expanded model (NOSIA-II) is still in development but, according to NOAA officials, it is expected to be fully operational by August 2015. The expanded model includes information on NOAA’s strategic goals and objectives, key products, qualitative performance ratings of the observing systems, and their costs. According to NOAA officials, the cost information is important because it will be used in analyses to help inform management decisions on observing system investments. For example, NOAA could use it to show different ways that the agency could absorb a 5 percent budget cut for its observing systems and how best to allocate those cuts while still allowing the agency to meet its mission. However, the two external reviews conducted in 2010 identified concerns with the quality of the cost information NOAA collected on its observing systems. Specifically:

One review found that NOAA does not have guidance on how to capture costs associated with its observing systems and has limited documentation to support its reported observing systems costs. It also found a major weakness in the apparent lack of any formal, documented process for preparing and reporting observing system costs. Based on their assessment, the reviewers were concerned that there was a low level of consistency in observing system cost information across line offices.

The other review similarly found (1) that accurate cost data for NOAA’s observing systems appeared very difficult to determine due to accounting differences and the lack of a standard process for how to record and report the data within programs, and (2) a lack of consistency in what programs included in their observing system costs. The review concluded that more work was needed to improve the quality and accounting of costs for NOAA’s observing systems.
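The capabilities-versus-requirements comparison described above, between the Consolidated Observation Requirements List and the observing system architecture database, amounts to a set difference. The following is a minimal sketch with hypothetical parameter names, not entries from the actual databases:

```python
# Illustrative gap analysis in the spirit of the TPIO comparison described
# above: validated observation requirements versus parameters the current
# systems can observe. All entries here are hypothetical placeholders.

validated_requirements = {
    "sea surface temperature", "water level", "ocean current speed",
    "salinity", "ice extent",
}

observing_capabilities = {
    "sea surface temperature", "water level", "salinity",
    "wave height", "wind speed",
}

gaps = validated_requirements - observing_capabilities       # needed but not observed
unmatched = observing_capabilities - validated_requirements  # observed but not required

print("requirements without a supporting capability:", sorted(gaps))
print("capabilities not tied to a validated requirement:", sorted(unmatched))
```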
NOAA documents have also recognized the need for accurate cost information for its observing systems in order to assess the cost-effectiveness of its portfolio. For example, NOAA’s 2012 implementation plan for its objective to collect accurate observation data states that “it is critical that NOAA have the capability to determine the optimum portfolio of observing systems that enable it to accomplish its mission in the most cost-effective, efficient, and economic manner possible.” The implementation plan further stated that it is critical that NOAA’s Chief Financial Officer develop a standardized methodology for obtaining accurate cost information to support investment analyses and to improve the cost-effectiveness of NOAA’s observing systems portfolio. Specifically, according to NOAA documents, it is important that the NOAA Observing Systems Integrated Analysis model include accurate, consistent, and up-to-date cost information on all of NOAA’s individual observing systems. Without reliable, consistent cost data, comparisons of the cost-effectiveness of various systems would not be accurate. However, in a 2013 presentation on the status of the development of the model, TPIO noted that observing system cost data need to be improved and are likely the weakest component of the model. NOAA has taken limited steps to improve the quality of these data. Officials from the Office of the Chief Financial Officer told us they were not working on developing the methodology identified in NOAA’s 2012 implementation plan. The officials told us they do not have the technical expertise and knowledge of the operational requirements of the observing systems to develop this methodology and suggested it should be done through the NOSC. TPIO officials told us they do not have in-house expertise in this area, and that is why the implementation plan delegated this responsibility to the NOAA Chief Financial Officer. TPIO officials said that earlier this year they had informal discussions with staff from NOAA’s budget and finance offices to address the need for a standardized methodology for estimating the costs of its observing systems. According to NOAA officials, the NOAA Chief Financial Officer has since asked a committee to explore developing a better method for tracking observing system costs, though no time frame for doing so has been established. Without accurate, consistent, and reliable cost information, the observing system integrated analysis model will not provide decision makers with the best information to make decisions regarding investment trade-offs and to improve the cost-effectiveness of NOAA’s observing systems portfolio. NOAA has identified a need to better integrate and improve the cost-effectiveness of its portfolio of observing systems, including ocean, coastal, and Great Lakes systems. The agency has taken some positive steps toward integrating its portfolio, such as creating an observing systems council to provide a more centralized, agency-wide perspective on the management of its observing systems and developing databases that catalogue the agency’s observing requirements and capabilities. The agency has not, however, developed a plan for systems integration that identifies what it wants the portfolio of observing systems to look like in the future and includes ways for NOAA to measure and track its progress toward its goals.
Without such a plan, NOAA does not have a detailed, transparent framework to effectively guide and assess the success of its observing systems integration efforts, and it will be difficult for stakeholders to hold the agency accountable for meeting its integration goals. Also, because NOAA has not assessed whether there is unnecessary duplication in its observing systems portfolio, the agency may be missing opportunities to reduce duplication and achieve cost savings. Finally, NOAA does not have consistent, reliable information on the costs associated with operating and maintaining each of its observing systems. This will make it difficult for NOAA to use its observing systems integrated analysis model to produce accurate comparisons of the cost-effectiveness of its observing systems or provide decision makers with the best information for making informed investment decisions. To help strengthen the management and cost-effectiveness of NOAA’s observing systems portfolio, including ocean, coastal, and Great Lakes systems, we recommend that the Secretary of Commerce direct the NOAA Administrator to take the following three actions:

Develop a plan for observing systems integration that includes a description of what an integrated portfolio of observing systems will include and achieve and how it will be managed, the steps necessary to move toward an integrated portfolio of observing systems, and how to measure progress toward the goal of an integrated observing systems portfolio.

Analyze the extent to which unnecessary duplication exists in NOAA’s portfolio of observing systems.

Develop a standardized methodology for the routine preparation and reporting of observing systems cost data.

We provided a draft of this report to the Department of Commerce for comment. In its written comments (reproduced in appendix VI), NOAA, providing comments on behalf of Commerce, generally agreed with our recommendations. In commenting on the recommendation that NOAA develop a plan for observing systems integration, NOAA’s response focused on the 41 ocean, coastal, and Great Lakes observing systems identified in the report and acknowledged there is no single observing plan for these systems. NOAA listed existing observing plans for several of its ocean, coastal, and Great Lakes observing systems and said it plans to build on these and other existing documents to explore the feasibility of enacting this recommendation. From NOAA’s response, it appears the agency believes that our recommendation was directed only at its 41 ocean, coastal, and Great Lakes observing systems. This is not the case. NOAA does not manage the 41 ocean, coastal, and Great Lakes observing systems as a separate portfolio, and we are not recommending that it do so. Rather, we recommended that NOAA develop a plan for integrating the agency’s entire observing systems portfolio, including the ocean, coastal, and Great Lakes observing systems. In response to the recommendation that NOAA analyze the extent to which unnecessary duplication exists in NOAA’s portfolio of observing systems, NOAA acknowledged a continued need to do so and said it has taken steps in this regard with the development of the NOAA Observing Systems Integrated Analysis model.
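The 5 percent budget-cut example cited earlier suggests the kind of trade-off analysis a model like NOSIA-II could support. The sketch below is purely illustrative: the costs, names, and single qualitative impact rating are invented, and the actual model incorporates richer information such as strategic objectives, key products, and performance ratings:

```python
# Hypothetical sketch of a budget-cut trade-off: given per-system costs and a
# qualitative mission-impact rating (higher = more critical), allocate a
# 5 percent portfolio cut to the least critical systems first. Names and
# numbers here are invented; this is not the NOSIA-II methodology.

systems = [  # (name, annual cost in $M, mission-impact rating 1-10)
    ("System A", 170.0, 9),
    ("System B", 35.0, 7),
    ("System C", 12.0, 4),
    ("System D", 8.0, 2),
    ("System E", 5.0, 3),
]

total = sum(cost for _, cost, _ in systems)
target_cut = 0.05 * total

cut = 0.0
plan = []
# Trim lowest-impact systems first until the target reduction is met.
for name, cost, impact in sorted(systems, key=lambda s: s[2]):
    if cut >= target_cut:
        break
    reduction = min(cost, target_cut - cut)
    cut += reduction
    plan.append((name, reduction))

print(f"portfolio total: ${total:.1f}M, target cut: ${target_cut:.1f}M")
for name, reduction in plan:
    print(f"  cut ${reduction:.1f}M from {name}")
```

A real analysis would weigh many alternative allocations rather than a single greedy pass; the point is only that consistent cost and impact data are the inputs such comparisons require.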
In response to our recommendation that NOAA develop a standardized methodology for the routine preparation and reporting of observing systems cost data, NOAA said that it agrees with this recommendation and has already begun talks to address how its accounting system could collect and report cost data for all observing systems-of-record. NOAA also provided three general comments. First, NOAA said that the title of the report should refer to NOAA’s ocean, coastal, and Great Lakes observing systems, since that was the original scope of our study. While two of the report’s objectives address only ocean, coastal, and Great Lakes observing systems, the third objective examines the extent to which NOAA has taken steps to integrate and improve the cost-effectiveness of its portfolio of observing systems, including ocean, coastal, and Great Lakes systems. Our recommendations address the third objective, and consequently we believe the report title is both accurate and appropriate. Second, NOAA said that it has long seen the need for an integrated and cost-effective observing systems portfolio, which is vital to maximizing the benefits of ocean, coastal, and Great Lakes information for the nation. NOAA said it has dedicated a large effort over the past decade to systematically develop management structures as well as tools to address these issues. We agree that NOAA has developed management structures and tools, and the report includes examples of both. Third, NOAA said that the fact that key parameters, such as temperature, are measured by multiple observing systems is not in itself an indicator of potential duplication. We agree. However, our report noted that multiple systems measuring the same parameter suggested the potential for unnecessary duplication, and we recommended that NOAA analyze the extent to which unnecessary duplication exists in its portfolio of observing systems. NOAA also provided technical comments that we incorporated into the report as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, the NOAA Administrator, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report (1) identifies and describes the ocean, coastal, and Great Lakes observing systems the National Oceanic and Atmospheric Administration (NOAA) operates, (2) identifies the annual operations and maintenance costs of these systems for fiscal years 2012 through 2014, and (3) examines the extent to which NOAA has taken steps to integrate and improve the cost-effectiveness of its portfolio of observing systems, including ocean, coastal, and Great Lakes systems. To identify which of NOAA’s observing systems collect data on oceans, coasts, and the Great Lakes, we first identified a list of environmental variables, known as “parameters,” related to the ocean, coast, and Great Lakes. After consulting with NOAA officials, we determined that the National Aeronautics and Space Administration’s Global Change Master Directory contained the federal government’s most complete list of these environmental parameters.
The directory is a Web-based catalogue that we accessed in October 2013 to identify the 74 environmental parameters it contained related to the oceans, coasts, and Great Lakes. For example, the environmental parameters we identified included wave direction, sea level, salinity, and ocean temperature. In addition, we identified another environmental parameter, stock assessment, based on NOAA documentation and interviews with NOAA officials. Our final number of environmental parameters related to NOAA’s ocean, coastal, and Great Lakes observing systems totaled 75. See appendix II for a list of the 75 environmental parameters we identified. We then reviewed all of the systems that make up NOAA’s observing systems portfolio, as captured in its observing system architecture database, to identify which ones collect data on at least one of the 75 environmental parameters. This review identified 47 ocean, coastal, and Great Lakes observing systems. To obtain our final list of 41 systems, we excluded observing systems that, according to NOAA documentation, are not yet deployed or collect data for short-term, limited-scope research experiments. If NOAA’s documentation was unclear for a specific observing system, we spoke with the NOAA officials responsible for the system to determine whether the system was operational. The observing systems we excluded were (1) Joint Polar Satellite System, (2) Geostationary Operational Environmental Satellite I-P, (3) Marine Sound, (4) Jason Ocean Surface Topography Mission 3, (5) Autonomous Underwater Vehicles, (6) Unmanned Aerial System, and (7) Animal Borne Tagging and Bar Code. We provided the list of ocean, coastal, and Great Lakes observing systems we identified to NOAA officials for their review and comment and incorporated their views into our final list as appropriate. See appendix III for an alphabetized list of the 41 NOAA ocean, coastal, and Great Lakes observing systems we identified. To obtain descriptive information about the systems we identified, we reviewed agency documentation, including NOAA’s observing system summary reports, budget summaries, and agency reports and presentations. We also interviewed NOAA officials, such as the principal investigators for individual observing systems, program managers who oversee more than one observing system, or line office officials familiar with the observing systems operated and maintained by their offices. We provided our descriptions of NOAA’s ocean, coastal, and Great Lakes observing systems to NOAA officials for their review and comment and incorporated their views into our final descriptions as appropriate. See appendix IV for a list and descriptions of the observing systems organized by the office that manages them. To identify NOAA’s annual costs to operate and maintain its ocean, coastal, and Great Lakes observing systems, we asked NOAA officials to provide cost data for these systems. In response, NOAA officials explained that this information is not readily available because the agency does not routinely collect data on observing system operation and maintenance costs in the ordinary course of business. They indicated that the most recent and best available observing system cost data were collected in 2013 by its Technology, Planning and Integration for Observation (TPIO) office for use in NOAA’s Observing Systems Integrated Analysis model. To collect the data, TPIO developed a spreadsheet template that requested specific cost information in a particular format.
TPIO sent the template to each line office and asked them to provide annual costs for their observing systems for fiscal years 2012 through 2014. TPIO officials told us they intended the cost information they requested from the line offices to be actual expenditures for fiscal years 2012 and 2013 and estimates for costs in fiscal year 2014 based on amounts in the 2014 presidential budget request. However, they said the observing systems costs reported by the line offices for fiscal years 2012 and 2013 are estimates because NOAA’s budget structure and accounting systems are not set up to track actual spending at the observing system level. According to TPIO officials, the cost information provided by the line offices was not reviewed by TPIO or the Chief Financial Officers for each of the line offices prior to being entered into a database. After we requested the observing systems cost information TPIO had collected, NOAA’s Budget Office requested that the data be reviewed to assess the accuracy of the information prior to releasing it to us. According to NOAA budget officials, they asked program managers to review the data they reported to TPIO for their respective observing systems, and they asked each of the line office Chief Financial Officers to review the combined costs for the observing systems operated by their line offices. The purpose of the review was to verify that the cost information NOAA was providing to us accurately represented the information that had been reported to TPIO. The line office Chief Financial Officers also evaluated whether the costs were consistent with the programs’ budget allocations based on NOAA’s appropriation levels for the fiscal years covered by the request. According to NOAA officials, the review resulted in mostly minor adjustments to the cost information originally collected by TPIO, and the information we received was the most accurate cost information NOAA has for its ocean, coastal, and Great Lakes observing systems portfolio. We also took steps to assess the reliability of NOAA’s observing system cost information by, among other things, reviewing documentation of NOAA’s data collection procedures and interviewing agency officials, line office Chief Financial Officers, and observing system program managers. We found the data to be sufficiently reliable for the purpose of our report, which is to provide a general sense of the costs for NOAA’s ocean, coastal, and Great Lakes observing systems. While we believe the cost information is sufficiently reliable for this purpose, it may not be sufficiently reliable for other purposes that require more accurate cost data, such as making comparisons of the relative cost-effectiveness of different observing systems. In addition, to obtain other information related to observing system costs, we reviewed agency documents, including budget requests, guidance, and policies, and documentation of NOAA’s managerial cost accounting system. We also interviewed NOAA headquarters and line office officials about NOAA’s budget structure and cost accounting practices. See appendix V for the annual costs of each of NOAA’s 41 ocean, coastal, and Great Lakes observing systems for fiscal years 2012 to 2014.
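As a rough illustration of the aggregation behind figures such as the roughly $430 million annual average and the 79 percent share attributed to the 10 costliest systems, the sketch below computes average annual costs and cumulative shares from a cost table. The entries are invented placeholders, not NOAA's reported data:

```python
# Illustrative aggregation over a cost table like the one NOAA provided:
# average annual operations and maintenance cost across fiscal years, and the
# cumulative share of total cost accounted for by the most expensive systems.
# The entries below are invented placeholders, not NOAA's actual figures.

costs = {  # system -> annual O&M cost ($M) for FY2012-FY2014
    "Ships":                 [160.0, 165.0, 170.0],
    "Coastal Weather Buoys": [30.0, 31.0, 29.0],
    "Argo Profiling Floats": [20.0, 21.0, 22.0],
    "Acoustic Monitoring":   [0.10, 0.10, 0.08],
}

avg = {name: sum(vals) / len(vals) for name, vals in costs.items()}
portfolio_total = sum(avg.values())
print(f"average annual portfolio cost: ${portfolio_total:.1f}M")

# Cumulative share of the costliest systems, largest first.
running = 0.0
for name, cost in sorted(avg.items(), key=lambda kv: -kv[1]):
    running += cost
    print(f"{name}: ${cost:.1f}M ({running / portfolio_total:.0%} cumulative)")
```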
To determine the extent to which NOAA has taken steps to integrate and improve the cost-effectiveness of its portfolio of observing systems, including ocean, coastal, and Great Lakes systems, we reviewed agency documents and interviewed NOAA officials responsible for implementing the agency’s observing systems management activities. Specifically, we reviewed plans related to managing the agency’s observing systems portfolio, such as a 2004 strategic plan for system integration and plans related to implementing aspects of NOAA’s 2010 next generation strategic plan. In addition, we reviewed internal NOAA reports and external reviews that identified opportunities to improve the management and integration of the agency’s observing system portfolio. We interviewed officials in NOAA’s six line offices and in the one staff office that operates and maintains observing systems. In these interviews, we discussed the offices’ approaches to managing their observing systems, their budgeting processes, and NOAA’s efforts to integrate its observing systems portfolio. We also interviewed officials on the NOAA Observing System Council about their efforts to create a more integrated, cost-effective observing systems portfolio and officials from TPIO about their development of analytical tools to support the agency’s observing system integration efforts. In addition, we reviewed GAO’s work on strategic planning and performance measurement to identify leading practices in these areas. We also reviewed Office of Management and Budget (OMB) guidance to identify leading practices in planning and management. We conducted this performance audit from September 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: GAO-Identified Ocean and Coastal Environmental Parameters

1. Ambient Noise: Biological
2. Ambient Noise: Total
3. Atmospheric Pressure: Sea Level
4. Bathymetry
5. Buoy Support
6. Carbon Dioxide: Partial Pressure
7. Carbon Dioxide: Profiles
8. Carbon Dioxide: Surface
9. Carbon: Profiles
10. Chlorophyll Concentration
11. Conductivity: Profiles
12. Conductivity: Surface
13. Convection
14. Coral Reef Assessment
15. Diffuse Attenuation Coefficient
16. Dissolved Gases: Oxygen
17. Gravity Field: Airborne
18. Gravity Field: Ground Based
19. Hydrography: Bathymetry + Water Depth
20. Ice Age
21. Ice Concentration
22. Ice Depth/Thickness
23. Ice Extent
24. Ice Motion: Direction
25. Ice Motion: Speed
26. Ice Origin
27. Ice Temperature
28. Ice Topography
29. Marine Debris Removal
30. Net Heat Flux
31. Nitrate Particles: Profiles
32. Nitrate Particles: Surface
33. Nitrogen Oxides: Profiles
34. Nitrogen Oxides: Surface
35. Nutrients: Profiles
36. Nutrients: Surface
37. Ocean Color
38. Ocean Color: Turbidity
39. Ocean Contaminants
40. Ocean Currents Subsurface
Global Ocean Observing System Tropical Atmosphere Ocean Array Integrated Ocean Observing System High Frequency Radars Jason Ocean Surface Topography Mission (2, 3 & CS) Initially deployed (fiscal year) This two-satellite system maintains a constant view of the earth from an orbit of about 22,000 miles in space and focuses primarily on the United States. The system provides timely environmental data about the earth’s atmospheric, cloud cover, and surface conditions. The system observes the development of hazardous weather conditions, such as hurricanes, and tracks their movement and intensity to protect life and property. The satellite sensors also provide the capability to detect ice fields and map the movements of sea and lake ice. The data are primarily used by meteorologists for weather observation, monitoring, and forecasting. The data also support improved atmospheric science research, weather prediction models, and environmental sensor design and development. The system measures sea surface height using a sensor mounted on a low-earth orbiting satellite. The data are used to model the ocean, forecast weather events such as El Niño and La Nina, and predict hurricane intensity. The system consists of a single moored buoy deployed off the coast of Hawaii. The system’s primary purpose is to measure visible and near-infrared radiation entering and emanating from the ocean. Measurements are taken at the sea surface and three deeper depths. The data collected by the system are used by satellite systems to adjust their sensors that measure ocean color. Because of their remote location, satellites may experience interference in their measurements of ocean color. This system provides data to correct this interference. The system uses polar-orbiting satellites to support environmental observations for imaging and measuring the earth’s atmosphere, surface, and cloud cover. The system measures three ocean and coastal environmental parameters—sea surface temperature, sea ice extent, and coral reef assessments. The data are processed to provide graphical weather images and specialized weather products. The data are also primary inputs into models used for developing weather forecasts up to 3 days in advance, and can be used to monitor environmental phenomena such as ozone depletion. The data can also be used in climate studies. System description The system has a joint mission to gather data to support long-term monitoring of climate trends and for weather forecasts. The satellite is equipped with five different sensors that collect environmental data such as ice thickness, ocean color, and sea surface temperature. Data uses and products The data are used in models to generate advance forecasts and warnings for severe weather. The data are also used for fisheries and coastal zone management and long-term monitoring of climate trends such as El Niño. The system is a network of buoys in the Chesapeake Bay that collect meteorological, oceanographic, and water-quality data. Data collected by the buoys are delivered to users in real-time via wireless technology. Users include scientists and students who use the data to help protect, restore, and manage the Chesapeake Bay. In addition, the data are used as a resource for boaters to alert them to boating conditions. The system is designed to monitor the health and status of living marine resources and their habitats in Alaska and New England. 
The goal of the observing system is to characterize the changing states of the ecosystems and forecast any subsequent impact on fisheries productivity. Data are collected primarily by research ships; however, satellites, buoys, and other methods are also used. The data are used for the assessment and management of fish species and in a wide variety of research programs. The system monitors the distribution and abundance of commercially targeted and ecologically important fish species. The surveys occur in all of the oceans surrounding the nation; however, the frequency of the surveys is contingent on available funding each year. The data are used in models by fishery managers to determine the effects of fishing on fish populations. The system uses buoys to collect data to understand the condition of, and the processes influencing, the nation's coral reef ecosystems. Since 2013, the system has been part of the National Coral Reef Monitoring Plan, with a goal of monitoring the status and trends of the country's coral reefs. The data are used to make better informed and timely management decisions related to the conservation of coral reefs. For example, the data are used to model and forecast climate-related risks and vulnerabilities to coral reefs. The system conducts surveys of the physical parameters and waterbody features in the nation's coastal areas. The areas covered by the surveys are prioritized based on a variety of factors, including the amount of time elapsed since an area was previously surveyed. The data include information on water depth and the nature of the sea floor, which has implications for anchoring, dredging, and fisheries habitat. The data are used to support a variety of products, including nautical charts. The charts are then used for port and harbor maintenance, coastal engineering, coastal zone management, and resource development offshore. This system measures the speed and direction of ocean surface currents in near real time. The radars can measure currents up to 200 kilometers offshore and can operate under any weather conditions. The radars are located along both the east and west coastlines of the United States. The data from the radars are used in a variety of applications, including pollutant tracking, search and rescue efforts, and management of the country's coastal resources. The buoy system collects data on ocean velocity, salinity, and temperature to determine the circulation along the United States' coasts and estuaries. The data are used principally to produce the National Oceanic and Atmospheric Administration's (NOAA) Tidal Current Tables. These tables predict the daily tidal current for each location where the data are collected. The tables are publicly available in an electronic format. The system is located in the nation's major coastal regions—west, northeast, Great Lakes, mid-Atlantic, southeast, Gulf of Mexico, and Caribbean Sea—to determine how reserve conditions, including water quality and associated environmental parameters, are changing in both the short term and long term. The data are used by many groups, including researchers, educators, and coastal managers. For example, data from the network of reserves are used to help understand the effects of climate change in the nation's coastal regions. The system monitors environmental conditions, such as water quality, at 13 marine sanctuaries. The data are collected from a variety of platforms, including ships and buoys.
The data collected are used to measure progress toward maintaining and improving the natural and archaeological quality of the national marine sanctuary system. The system consists of two programs—Mussel Watch and Bioeffects—that monitor the environmental quality of estuarine and coastal waters throughout the nation. Data are collected from multiple sites along the entire United States coastline, including the Great Lakes. The data are used to characterize and assess the environmental impact of new and emerging contaminants and extreme events (hurricanes and oil spills) on the nation's estuarine and coastal waters. The system is a network that has over 200 coastal observing stations around the United States that collect continuous, long-term water level observations. The data are used primarily to support safe navigation of the country's waterways by computing tidal and water-level datums, producing tide prediction tables, and estimating sea-level trends. The data are collected and transmitted via satellite to their users. The system uses aircraft to collect aerial imagery of the nation's coastal areas. The data are primarily used to produce nautical charts, which are used for a variety of purposes including coastal zone management and emergency response. The system was developed to provide accurate and reliable real-time information about environmental conditions in seaports. It collects data on wind, water, and air environmental parameters. The data are provided primarily to ship masters and pilots in real time to avoid groundings and collisions in the nation's ports. Data on water levels, currents, and other oceanographic variables are available through a variety of formats, including the Internet. The system is a network of observing instruments and platforms used by the U.S. Integrated Ocean Observing System program's regional associations to collect data on a variety of environmental parameters in the nation's coastal waters. Data collected by the regional associations are being integrated with other data collected by NOAA's National Data Buoy Center into a database, which the U.S. Integrated Ocean Observing System program plans to make publicly available. The system uses buoys that are moored at specific locations but able to drift up to 2 miles in all directions. The buoys are located throughout the nation's oceans and coastal waters. The data are used to produce forecasts, warnings, and atmospheric models. Other uses of the data include scientific and research programs and assisting in emergency response to chemical spills. The system has 60 stations in the nation's coastal zones. Its platforms include buoys or land-based tower stations. The system was designed in the early 1980s in response to the need to maintain meteorological observations in U.S. coastal areas that had previously been gathered by U.S. Coast Guard personnel stationed at lighthouses. The data are used to produce meteorological observations in coastal areas and are relayed to users at least once per hour from the system's platforms. The system consists of an anchored seafloor bottom pressure recorder and a companion moored surface buoy to detect tsunamis in U.S. coastal areas. The data are transmitted in real time to NOAA's Tsunami Warning Centers via satellite. The centers then decide which coastal communities are in danger and issue warnings. The data are collected to help predict El Niño events. They are transferred to shore in real time using satellites.
The system is part of the multinational Global Ocean Observing System. It was designed in response to an early 1980s El Niño event, which was neither predicted nor detected until nearly its peak. The system collects data on a variety of meteorological parameters, such as wind speed and relative humidity, as well as ocean current profiles and upper ocean temperatures. The system detects tsunami activity around Hawaii using sea-level gauges. The data are used to confirm the generation of a tsunami and to predict locations where it may strike. The Voluntary Observing Ship system uses ships, both U.S. and internationally owned, that voluntarily collect data on meteorological conditions. The data are encoded in a standardized format and sent via satellite or radio to services that provide marine weather forecasts. The data are used for a variety of purposes, including weather forecasts, and to help measure extreme weather events and long-term climate changes. The data are also archived for future use by climatologists and other scientists. NOAA uses its aircraft to collect data on multiple environmental parameters, including atmospheric pressure and ocean temperature. The data are used to support global climate change studies, assess marine mammal populations, survey coastal erosion, investigate oil spills, flight-check aeronautical charts, and improve hurricane or winter storm prediction models. NOAA's fleet of ships collects data and supports federal disaster response around the world. The system collects data on multiple environmental parameters, including ocean pH and water depth. The data are used for hydrographic surveys, oceanographic research, and fisheries research, among other things. The system uses aircraft 30 days a year to collect data on fish and the thin plankton layer in coastal Washington state, the Pacific Northwest, and Chesapeake Bay. The data are used to support scientific research on specific parameters such as waves. The data are used to understand long-term trends in the physical and biological state of the Arctic Ocean. The system uses a variety of platforms: ocean or sea ice platforms, ships, aircraft, and land-based climate atmospheric observatories to study meteorological, sea ice, and subsurface environmental parameters in the Arctic Ocean. The system is used in the Gulf of Alaska, Bering Sea, and Arctic to understand ecosystem dynamics and the life cycle of commercially valuable fish and shellfish stocks. The data are used to produce ecosystem forecasts that help guide resource managers in making catch share allocations to commercial fishermen and to mitigate the effects of climate change on marine species and coastal communities. The system uses free-drifting floats to observe the ocean's upper 2,000 meters. The mission of the system is to describe and understand the physical processes responsible for climate variability and predictability. The data are transmitted in real time via satellites to land-based receiving stations and are used in weather forecasting and climate prediction models. The system is part of the multinational Global Ocean Observing System and was designed to use drifting buoys around the world to measure sea surface temperature and currents at 15 meters below the water's surface. The data are used to adjust satellite sea surface temperature observations and provide real-time data on the structure of global surface currents, among other things.
The system is part of the multinational Global Ocean Observing System and uses fixed platforms that are stationed on islands and in the coastal zones around the world to measure and report sea-level information. The data are transmitted in real time using satellites. The data are used in multiple NOAA missions, including climate monitoring and prediction. The system is located in the Atlantic Ocean and is part of the multinational Global Ocean Observing System. It is a buoy system used to study ocean-atmosphere interactions in the tropical Atlantic Ocean that affect regional climate variability on seasonal, interannual, and longer time scales. The data are transmitted via satellite in near real time and are used for developing and improving models of the ocean. The system is part of the multinational Global Ocean Observing System. It is a buoy system located in the Indian Ocean that gathers data to address scientific questions related to ocean conditions and monsoons. The system collects data on air temperature, atmospheric pressure, and the direction of ocean currents. The data are real-time measurements that are used in climate research and forecasting. The system obtains ocean measurements from around the world that contribute to NOAA's understanding of the carbon cycle and is part of the multinational Global Ocean Observing System. The data are used to help with forecasting long-term climate trends. The system is part of the multinational Global Ocean Observing System and uses buoys to maintain long-term observing capabilities on ocean and climate environmental parameters. The data are transmitted by satellites, which enables users (scientists and the public) to access them in near real time for use in products such as climate studies. The system uses a network of cargo vessels, cruise ships, and research vessels that voluntarily collect ocean measurements by either using NOAA-supplied instruments on specified routes or hosting NOAA technicians onboard to take measurements. These ships are also the primary vehicle for deploying NOAA's drifting buoy arrays, surface drifting buoys, and Argo profiling floats, as well as other instruments. Data are collected on three environmental parameters related to temperature: air, sea, and subsurface. The system uses arrays of hydrophones to collect continuous digital acoustic data for ocean observation in the Pacific Ocean from the Arctic to tropical locations. The hydrophones are used primarily to listen for earthquakes, but can also observe icequakes, large waves, and some marine endangered species. The data collected have been used to produce more than 25 articles in peer-reviewed scientific publications, scientific meeting presentations, and a large number of other products associated with monitoring the condition of the ocean. The data are used in research related to earthquakes and marine endangered species. The system is located in the Great Lakes and collects data on multiple environmental parameters including water temperature, extent of ice cover, wave direction, and turbidity. The data are used by the National Weather Service to help verify its marine weather forecasts and for fisheries research. The system collects continuous, time-series data on the strength of the Atlantic Ocean meridional overturning circulation, which includes many currents. The system uses submarine cables and measurements from ships, among other methods, to monitor the condition of the currents.
The data provide the only measurement of the strength of the meridional overturning circulation and are used to improve climate forecasts. The number of deployed platforms reflects those that are currently operated by NOAA. Some observing systems include platforms that are operated by other countries; however, we did not include those platforms in this table. The El Niño Southern Oscillation is a naturally occurring phenomenon that involves fluctuating ocean temperatures in the equatorial Pacific Ocean. The pattern generally fluctuates between two states: warmer than normal temperatures in the central and eastern equatorial Pacific (El Niño) and cooler than normal temperatures in the central and eastern equatorial Pacific (La Niña). A thin plankton layer is an aggregation of phytoplankton on the ocean's surface. The global carbon cycle is the process by which carbon is exchanged among various systems, including the earth's atmosphere, the oceans, geological sources of stored carbon (e.g., fossil fuels), and the vegetation and soils of the earth's terrestrial ecosystems. [Table residue: the column heading "Observing system managing office and name" and the entries National Environmental Satellite, Data, and Information Service; Geostationary Operational Environmental Satellite N/O/P; and Jason Ocean Surface Topography Mission (2, 3 & CS) appeared here.] NOAA's estimated costs for fiscal years 2012 and 2013 were based on final appropriations for those years. The estimates for fiscal year 2014 were based on the fiscal year 2014 President's budget request. The estimates reported here include costs to operate and maintain the Jason-2 mission. Development costs for the Jason-3 mission, whose satellite is scheduled to be launched in fiscal year 2015, are not included. The Physical Oceanographic Real-Time System is a cost-sharing program in which local partners provide funding for the sensor systems and their ongoing maintenance. In addition to the individual named above, Stephen D. Secrist (Assistant Director), Cheryl Arvidson, Mark Braza, Heather Dowey, Richard Hung, Paul Kinney, Michael Meleady, and Jeanette Soares made key contributions to this report.
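The tide and tidal current prediction tables described in the appendix above are produced from harmonic analysis of long water-level records: observed levels are decomposed into astronomical constituents, and a prediction is the sum of those constituents. The sketch below is a simplified illustration; it omits the node factor and astronomical argument corrections used in practice, and the amplitudes and phases are invented values, not NOAA station data:

```python
# Simplified harmonic tide prediction: water level is modeled as a mean
# level plus a sum of cosine constituents. Amplitudes and phases below are
# illustrative, not real station data.

import math

# (name, amplitude in meters, speed in degrees per hour, phase in degrees)
CONSTITUENTS = [
    ("M2", 0.50, 28.984104, 110.0),  # principal lunar semidiurnal
    ("S2", 0.15, 30.000000, 140.0),  # principal solar semidiurnal
]

MEAN_SEA_LEVEL = 1.2  # meters above station datum, illustrative

def predicted_height(t_hours: float) -> float:
    """Predicted water level (meters) t hours after the reference epoch."""
    total = MEAN_SEA_LEVEL
    for _, amplitude, speed, phase in CONSTITUENTS:
        total += amplitude * math.cos(math.radians(speed * t_hours - phase))
    return total

for hour in range(0, 25, 6):
    print(f"t = {hour:2d} h: {predicted_height(hour):.2f} m")
```

Real predictions sum dozens of constituents fitted to a year or more of observations at each station, which is why the tables are location-specific.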
NOAA operates and maintains a portfolio of observing systems to capture the environmental data needed to achieve its diverse missions. Some of these systems focus on the oceans, coasts, and Great Lakes. An observing system is a collection of one or more sensing elements that measures specific environmental conditions and resides on fixed or mobile platforms, such as buoys or satellites. The House Appropriations Committee report accompanying the Department of Commerce's fiscal year 2013 appropriations bill mandated that GAO review NOAA's ocean and coastal data collection systems. This report (1) identifies and describes the ocean, coastal, and Great Lakes observing systems NOAA operates; (2) identifies the annual operations and maintenance costs of these systems for fiscal years 2012 through 2014; and (3) examines the extent to which NOAA has taken steps to integrate and improve the cost-effectiveness of its observing systems portfolio. GAO analyzed agency documentation on, among other things, the characteristics and management of NOAA's observing systems, reviewed cost data for fiscal years 2012 through 2014, and interviewed NOAA officials. The National Oceanic and Atmospheric Administration (NOAA) in the Department of Commerce operates 41 ocean, coastal, and Great Lakes observing systems. NOAA's Office of Oceanic and Atmospheric Research and National Ocean Service manage 25 of these observing systems, with management of the remaining 16 systems split among four other NOAA offices. The majority of NOAA's ocean, coastal, and Great Lakes observing systems use one of three platforms—buoys, satellites, or ships—to collect a range of environmental data, which are used to produce a variety of products, such as weather forecasts and navigational tools. NOAA estimates it spent an average of approximately $430 million annually to operate and maintain its ocean, coastal, and Great Lakes observing systems in fiscal years 2012 through 2014. This is approximately 9 percent of NOAA's total annual appropriations for these years. In reviewing these estimates, GAO found NOAA's annual costs for these observing systems ranged from about $22 million for systems managed by the National Marine Fisheries Service to $198 million for systems managed by the Office of Marine and Aviation Operations in fiscal year 2014. NOAA has not taken all of the steps it has identified as important to integrate and improve the cost-effectiveness of its observing systems portfolio. Since 2002, NOAA has identified the need to move toward an integrated observing systems portfolio. GAO's previous work has found that, in undertaking initiatives such as this, federal agencies can benefit from following leading practices for strategic planning, which include defining goals and performance measures to track progress. NOAA has not, however, developed a plan that clearly sets forth its vision for an integrated observing systems portfolio, the steps it needs to take to achieve this vision, or how it will evaluate its progress. NOAA officials said they have focused on taking specific steps toward integration rather than developing an integration plan. Without a plan, however, NOAA cannot be assured it has established a framework to effectively guide and assess the success of its observing system integration efforts. NOAA has also not assessed whether its observing systems are collecting unnecessarily duplicative data even though NOAA documents have identified the need to reduce duplication.
NOAA officials told GAO that duplication is not a significant problem requiring further analysis. However, in the absence of an analysis, NOAA cannot know whether it is missing opportunities to achieve cost savings. NOAA has taken steps to integrate the management of its observing systems, including creating an observing systems council to provide a more centralized perspective on systems management. The agency has also developed analytical tools to assess its observing system capabilities and requirements, including a model to analyze investment options. Reliable cost data are needed to ensure the most accurate results from this model, but NOAA does not have a standard methodology for tracking its observing systems costs. NOAA officials said the agency is considering developing a better method for tracking observing system costs but has not established a time frame for doing so. Without accurate and consistent cost information, it will be difficult for NOAA to reliably compare the cost-effectiveness of its observing systems and make informed investment decisions. GAO recommends that NOAA develop a plan to guide the integration of its observing systems, analyze whether unnecessary duplication exists in its observing systems portfolio, and develop a standardized methodology for the routine preparation and reporting of observing systems costs. NOAA generally agreed with the recommendations.
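As a rough arithmetic check on the cost figures in the summary above: if roughly $430 million per year is about 9 percent of NOAA's total annual appropriations, total appropriations must have been on the order of $4.8 billion per year. A minimal sketch (figures are approximations taken from the text):

```python
# Back-of-the-envelope check of the figures cited above: an average of
# ~$430 million in annual observing-system costs said to be ~9 percent
# of NOAA's total annual appropriations implies total appropriations of
# roughly $4.8 billion per year.

avg_observing_cost = 430e6   # dollars per year, per the report
share_of_budget = 0.09       # ~9 percent, per the report

implied_total_appropriations = avg_observing_cost / share_of_budget
print(f"Implied total annual appropriations: "
      f"${implied_total_appropriations / 1e9:.1f} billion")
# -> Implied total annual appropriations: $4.8 billion
```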
To address the problems associated with unstable forms of plutonium and inadequate packaging for long-term storage, DOE established a standard for the safe storage of plutonium for a minimum of 50 years that sets plutonium stabilization and packaging requirements. Stabilization is achieved by heating the material to remove moisture that could lead to a buildup of pressure, which would increase the risk of rupturing a container. Plutonium storage containers designed to meet the standard consist of an inner and outer container, each welded shut. The inner container is designed so that it can be monitored for a buildup of pressure using analytical techniques, such as radiography, that do not damage the container. Containers must also be resistant to fire, leakage, and corrosion. Plutonium stabilization and packaging are completed at Rocky Flats, Hanford, and SRS, and SRS has already received nearly 1,900 containers from Rocky Flats. Stabilization and packaging are still ongoing at Lawrence Livermore and Los Alamos National Laboratories. Once stabilization and packaging are completed, DOE estimates that it will have nearly 5,700 plutonium storage containers stored at locations across the United States that could eventually be shipped to SRS. SRS’s plutonium storage plans originally called for the construction of a state-of-the-art Actinide Packaging and Storage Facility that would have provided long-term storage and monitoring of standard plutonium containers in a secure environment. DOE changed its storage plans and cancelled the project in 2001 because it expected to store the plutonium for only a few years until a facility to process the plutonium for permanent disposition was available. Instead of building a new facility, DOE decided to use two existing buildings at SRS for plutonium storage and monitoring operations: Building 105-K and Building 235-F. Building 105-K was originally a nuclear reactor built in the early 1950s and produced plutonium and tritium until 1988. The reactor was then placed in a cold standby condition until its complete shutdown in 1996. The major reactor components were removed and the facility is now primarily used to store plutonium and highly enriched uranium. Building 235-F was also constructed in the 1950s and was used until the mid-1980s to produce plutonium heat sources that were used to power space probes for the National Aeronautics and Space Administration and the Department of Defense. The building is currently used to store plutonium. After the design basis threat was changed in October 2004, SRS was forced once again to reevaluate its storage plans. Because the new design basis threat substantially increased the potential threat that SRS must defend against, Building 105-K and Building 235-F would need extensive and expensive upgrades to comply with the new requirements. SRS estimated the total cost of this additional security at over $300 million. SRS further estimated that it could save more than $120 million by not using Building 235-F for storage and therefore decided in April 2005 to consolidate plutonium storage in Building 105-K. DOE cannot consolidate its excess plutonium at SRS for several reasons. First, DOE has not completed a plan to process the plutonium into a form for permanent disposition, as required by the National Defense Authorization Act for Fiscal Year 2002. 
DOE proposed two facilities at SRS to process its surplus plutonium into a form for permanent disposition: a mixed oxide fuel fabrication facility to convert plutonium into fuel rods for use in nuclear power plants and a plutonium immobilization plant where plutonium would be mixed with ceramics, the mixture placed in large canisters, and the canisters then filled with high-level radioactive waste. The canisters would then be permanently disposed of at Yucca Mountain. In 2002, citing budgetary constraints, DOE cancelled the plutonium immobilization plant, eliminating the pathway to process its most heavily contaminated plutonium into a form suitable for permanent disposition. Section 3155 of the act provides that if DOE decides not to construct either of two proposed plutonium disposition facilities at SRS, DOE is prohibited from shipping plutonium to SRS until a plan to process the material for permanent disposition is developed and submitted to the Congress. To date, DOE has not developed a disposition plan for the plutonium that would have been processed in the immobilization plant. In its fiscal year 2006 budget, DOE requested $10 million to initiate conceptual design of a facility that would process this plutonium. However, it is uncertain when this design work would be completed and a plan prepared. Second, even if a plan to process this plutonium for permanent disposition had been developed and DOE were able to ship the plutonium, SRS would still be unable to accommodate some of Hanford's plutonium because Hanford's accelerated cleanup plans and SRS's storage plans are inconsistent with one another. DOE approved both plans even though Hanford's accelerated cleanup plan called for shipping some of its plutonium to SRS in a form that SRS had not planned on storing. Hanford stores nearly one-fifth of its plutonium in the form of 12-foot-long nuclear fuel rods, with the remainder in about 2,300 DOE standard 5-inch-wide, 10-inch-long storage containers. The fuel rods were to be used in Hanford's Fast Flux Test Facility reactor. The reactor has been closed, and the fuel rods were never used. Hanford's plutonium is currently being stored at the site's Plutonium Finishing Plant—the storage containers in vaults and the nuclear fuel rods in large casks inside a fenced area. Hanford was preparing to ship plutonium to SRS as part of its efforts to accelerate the cleanup and demolition of its closed nuclear facilities. Although Hanford's original cleanup plan called for demolishing the Plutonium Finishing Plant by 2038, the plan was modified in 2002 to accelerate the site's cleanup. Hanford's accelerated cleanup plan, which was approved by DOE's Office of Environmental Management, now calls for shipping the storage containers and nuclear fuel rods to SRS by the end of fiscal year 2006 so that Hanford can demolish the Plutonium Finishing Plant by the end of fiscal year 2008. To meet the new deadline, Hanford planned to ship the fuel rods intact to SRS. Nevertheless, SRS's July 2004 plutonium storage plan stated that Hanford would cut the fuel rods and package the plutonium in approximately 1,000 DOE standard storage containers before shipping the material to SRS. Although Building 105-K has space to store the fuel rods intact, several steps would be necessary before DOE could ship the fuel rods from Hanford to SRS. First, there is currently no Department of Transportation-certified shipping container that could be used to package and ship the fuel rods.
In addition, SRS would be required, among other things, to prepare the appropriate analyses and documentation under the National Environmental Policy Act and update Building 105-K's safety documentation to include storage of the fuel rods. Wherever the fuel rods are stored, they would have to be disassembled before processing the plutonium for permanent disposition. Hanford and SRS currently lack the capability to disassemble the fuel rods, but DOE plans to study establishing that capability at SRS as part of its conceptual design of a facility to process the plutonium for disposition. The challenges DOE faces storing its plutonium stem from the department's failure to adequately plan for plutonium consolidation. DOE has not developed a complexwide, comprehensive strategy for plutonium consolidation and disposition that accounts for each of its facilities' requirements and capabilities. Until DOE is able to develop a permanent disposition plan, additional plutonium cannot be shipped to SRS, and DOE will not achieve the cost savings and security improvements that plutonium consolidation could offer. According to DOE officials, the impact of continued storage at Los Alamos and Lawrence Livermore will be relatively minor because both laboratories had already planned to maintain plutonium storage facilities for other laboratory missions. However, according to Hanford officials, continued storage at Hanford could cost approximately $85 million more annually because of increasing security requirements and will threaten the achievement of the goals in the site's accelerated cleanup plan. Specifically, maintaining storage vaults at Hanford's Plutonium Finishing Plant will prevent the site from demolishing the plant as scheduled by September 2008. Under DOE's plutonium storage standard, storage containers must be periodically monitored to ensure continued safe storage. Without a monitoring capability that can detect whether storage containers are at risk of rupturing, there is an increased risk of an accidental plutonium release that could harm workers, the public, and the environment. Monitoring activities must occur in a facility that, among other things, is equipped to confine accidentally released plutonium through effective ventilation and appropriate filters. In addition, the facility must have a fire protection system to protect storage containers and prevent their contents from being released in a major fire. According to the Safety Board, Building 105-K is not currently equipped with adequate ventilation or fire protection. Specifically, SRS removed the High-Efficiency Particulate Air (HEPA) filters that were used in the building's ventilation system when it was a nuclear reactor. Such filters could prevent plutonium from escaping the building in the event of a release from the storage containers. In addition, Building 105-K lacks automatic fire detection or suppression systems. As a result, plutonium storage containers cannot safely be removed from inside the outer packaging used to ship the containers to SRS. The outer package—a 35-gallon steel drum—is used to ship a single storage container and is designed to resist damage during transportation and handling. The outer package confines the plutonium in the event the storage container inside is breached. In addition, the outer package provides an additional layer of protection from fire for the storage container inside.
Because monitoring requires x-raying individual storage containers and, in some cases, puncturing and cutting storage containers to analyze the condition of the container and the plutonium within, the storage containers must be removed from their outer packaging. SRS plans to establish a capability to restabilize the plutonium by heating it in a specialized furnace in the event monitoring determines that the stored plutonium is becoming unstable (i.e., increasing the risk of rupturing a storage container). The restabilized plutonium would then be packaged into new storage containers. The only facility at SRS currently capable of restabilizing and repackaging the plutonium has closed in preparation for decommissioning. Because Building 105-K does not have the capability to monitor storage containers, DOE had planned to install monitoring equipment in Building 235-F at SRS. Building 235-F was chosen primarily because it was already equipped with filtered ventilation systems appropriate to handling plutonium—multiple and redundant air supply and exhaust fan systems that use HEPA filters. Exhaust from the ventilation system is further filtered through a sand filter before entering the outside atmosphere. Currently, Building 235-F is limited to removing storage containers from their outer packaging and x-raying the containers to evaluate potential pressurization. Although DOE has installed equipment in Building 235-F that can puncture the storage container to relieve pressure, Building 235-F currently lacks the capability to conduct destructive examinations. Destructive examinations consist of cutting containers open to take samples of and analyze the gases inside and examining the containers themselves for indications of corrosion. In addition, destructive examination allows plutonium inside the container to be analyzed to detect any changes in the plutonium's condition. Building 235-F also currently lacks the capability to restabilize and repackage plutonium. In addition, Building 235-F faced several other challenges that would have affected its ability to monitor plutonium. Because of changes in the design basis threat, Building 235-F would not have had sufficient security to store Category I quantities of plutonium. SRS officials estimate that 972 storage containers contain Category I quantities of plutonium metal. Although these storage containers are at relatively low risk for rupture, SRS would have been unable to remove those containers from Building 105-K to monitor their condition. According to SRS officials, security measures could have been established in Building 235-F if a safety issue had arisen that required opening a Category I container. Furthermore, the Safety Board identified a number of serious safety concerns with Building 235-F. Specifically, the Safety Board reported the following:

- The building lacks fire suppression systems, and many areas of the building lack fire detection and alarm systems.
- The building's nuclear criticality accident alarm system has been removed. A nuclear criticality accident occurs when enough fissile material, such as plutonium, is brought together to cause a sustained nuclear chain reaction. The immediate result of a nuclear criticality accident is the production of an uncontrolled and unpredictable radiation source that can be lethal to people who are nearby.
- A number of the building's safety systems depend upon electrical cables that are approximately 50 years old and have exceeded their estimated life.
When electrical cables age, they become brittle and may crack, increasing the potential for failure.
- SRS has discovered two areas in the soil near the building that could present a hazard in the event of an earthquake.
- The building's ventilation system still contains plutonium from its previous mission of producing plutonium heat sources to power space probes. This highly radioactive plutonium could be released, for example, during a fire or earthquake and could pose a hazard to workers in the building.

Once again, DOE's monitoring challenges demonstrate its failure to adequately plan for plutonium consolidation. Instead of a comprehensive strategy that assessed the monitoring capabilities needed to meet its storage standard, DOE's plans went from constructing a state-of-the-art storage and monitoring facility to using a building that the Safety Board had significant concerns with. Moreover, DOE's plans have subsequently changed again. In April 2005, after spending over $15 million to begin modifications to Building 235-F, DOE announced that it would only use the building to monitor plutonium temporarily. Now, DOE plans to install the necessary safety systems and monitoring equipment in Building 105-K, a 50-year-old building that was not designed for such functions. This decision underscores that DOE's lack of careful planning has forced SRS to focus on what can be done with existing facilities, eliminating options that could have been both more cost-effective and safer than current plans. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information on this testimony, please contact Gene Aloise at (202) 512-3841 or aloisee@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. Sherry McDonald, Assistant Director; and Ryan T. Coles made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Plutonium is very hazardous to human health and the environment and requires extensive security because of its potential use in a nuclear weapon. The Department of Energy (DOE) stores about 50 metric tons of plutonium that is no longer needed by the United States for nuclear weapons. Some of this plutonium is in the form of contaminated metal, oxides, solutions, and residues remaining from the nuclear weapons production process. To improve security and reduce storage costs, DOE plans to establish enough storage capacity at its Savannah River Site (SRS) in the event it decides to consolidate its plutonium there until it can be permanently disposed of. GAO was asked to examine (1) the extent to which DOE can consolidate this plutonium at SRS and (2) SRS's capacity to monitor plutonium storage containers. As GAO reported in July 2005, DOE cannot yet consolidate its surplus plutonium at SRS for several reasons. First, DOE has not completed a plan to process the plutonium into a form for permanent disposition, as required by the National Defense Authorization Act for Fiscal Year 2002. Without such a plan, DOE cannot ship additional plutonium to SRS. Second, SRS cannot receive all of the plutonium from DOE's Hanford Site because it is not in a form SRS planned to store. Specifically, about 20 percent of Hanford's plutonium is in the form of 12-foot-long nuclear fuel rods, which Hanford had planned to ship intact to SRS as part of its efforts to clean up and demolish its closed nuclear facilities. However, SRS's storage plan assumed Hanford would package all of its plutonium in DOE's standard storage containers. Until a permanent disposition plan is developed, more plutonium cannot be shipped to SRS and DOE will not achieve the cost savings and security improvements that consolidation could offer. In particular, continued storage at Hanford will cost approximately $85 million more annually because of increasing security requirements and will threaten that site's achievement of the milestones in its accelerated cleanup plan. In addition, DOE lacks the necessary capability to fully monitor the condition of the plutonium to ensure continued safe storage. The facility at SRS that DOE plans to use to store plutonium lacks adequate safety systems to conduct monitoring of storage containers. Without a monitoring capability, DOE faces increased risks of an accidental plutonium release that could harm workers, the public, and the environment. DOE had planned to construct a monitoring capability in another building at SRS that already had safety systems needed to work with plutonium. However, this building would not have had sufficient security to conduct all of the required monitoring activities. In addition, this building also has other serious safety problems. Faced with these challenges, DOE announced in April 2005 that it would have SRS's storage facility upgraded to conduct plutonium monitoring.
DOD buys hand tools for a wide range of maintenance and repair activities that include maintaining everything from facilities and vehicles to aircraft and ships. DOD buys tools either from GSA or by local purchase. DOD regulations state that use of established supply sources, such as GSA, should be maximized. If the supply system cannot be used, local purchases may be considered if they are in the best interest of the government in terms of the combination of quality, timeliness, and cost. DOD aircraft maintenance units use silhouetted tool boxes and displays, which contain shadow drawings of the tools, to control tools at the user level and prevent foreign object damage to aircraft resulting from tools left in or on an aircraft during maintenance. Generally, the tool box or kit has a foam insert in each drawer that is cut and shaped to the size of the tools to facilitate the physical inventories taken at the time a mechanic checks out and returns the tool box to the tool room (see fig. 1). Other military units, such as artillery and transportation units, use tool boxes that do not maintain the tools as neatly and are less easily inventoried. Executive agencies are required to establish and maintain systems of internal controls that provide reasonable assurance that resource use is consistent with applicable laws, regulations, and policies; resources are safeguarded to prevent waste, loss, and misuse; transactions and other events are adequately documented and fairly disclosed; and resources are accounted for. With regard to hand tools, basic internal controls should include prior authorization of specific tool purchases by an individual knowledgeable of a unit's tool needs; independent checks to ensure that tool purchases are properly received; and accurate inventory records to reflect tool receipts, issues, and on-hand quantities. DOD has not issued guidance establishing controls over hand tools at the user level. DOD does have overall guidance for the physical security of government property located at military installations, but the guidance does not contain specific procedures for controlling tool purchases, inventories, and related receipts and issues. The military services also have not provided adequate guidance to installations and operating units. Other than requiring periodic physical inventories, the guidance does not provide specific controls over hand tools. The Air Force has recognized the need for better guidance and, in November 1993, established an Air Force Tool Committee to develop new guidance for use Air Force-wide. Although guidance on tool purchases and inventories is lacking, the military services have issued guidance to prevent foreign object damage to aircraft from tools left on or in aircraft during maintenance. Air Force mechanics are required to sign out for tool kits or individual tools, and an inventory of the tools is performed. After the work is completed, the mechanics return the tools, and the contents again are inventoried to ensure that none are left in the aircraft. The Navy and the Marine Corps use a similar system to prevent foreign object damage. DOD has insufficient cost data at the headquarters, command, and installation levels to identify and track hand tool purchases, inventory levels, and losses. Also, because the military services consider hand tools to be expendable items representing small dollar values, losses that are identified by operating units often are not reported to investigative organizations and higher commands.
DOD headquarters does not maintain cost information reflecting hand tool purchases, inventory levels, and losses. DOD does report information on losses of all government property annually to the Congress, but DOD officials told us that the reported information includes very limited data on hand tools because such losses often get little visibility and generally do not meet the minimum reporting threshold of $1,000 per incident. Representatives at the headquarters of all of the military services and the commands we visited also told us that they do not receive or maintain information that reflects hand tool purchases, inventory levels, or losses. The representatives stated that they do not manage down to that level and that such information only would be available at the installation level. However, we visited Fort Bragg, Oceana Naval Air Station, Langley Air Force Base, and Camp Lejeune Marine Corps Base and found that data were very limited at the installation and operating unit levels. At our request, certain units compiled data on the amount of recent tool purchases. For example, one unit at the Oceana Naval Air Station was able to provide lists of individual tool purchases for 20 months that totaled $25,844. However, most of the installations and units visited did not know the value of the tool inventories and could not provide complete data on tool purchases. For example, units at Camp Lejeune had data reflecting the number and types of tools owned but did not know the value of the tool inventories. Agents at the security investigating organization of each installation told us that they maintain a log of all investigations of suspected stolen government property but do not report such losses to anyone. The logs, which are used to monitor trends in thefts and other crimes, include very little information on tool losses. Reports prepared to document losses of government property and provide the basis for an investigation of the reasons for the losses generally were not prepared for tools due to the small dollar values involved. At Langley Air Force Base, for example, these reports had not included any hand tools for the past 2 years. The absence of adequate management guidance has contributed to a general lack of basic internal controls at individual installations and operating units. We identified weaknesses in basic internal controls at each of the four installations and eight operating units we visited. These weaknesses related to purchase authorizations and inventory record-keeping. All units we visited required prior authorization for specific tool purchases except for the two units at Langley Air Force Base. Instead, personnel used a blanket authorization from the unit that was entered into the base service store’s computer system. One squadron we visited authorized six persons to buy tools at the base service store, and the other, smaller unit we visited authorized two persons to buy tools. Some of the personnel authorized to purchase tools also were responsible for establishing the tool requirements for the unit. Further, unit personnel not involved in the purchases did not routinely check to see that the unit actually received the tools. Without these controls, there was no assurance that the purchases were necessary or that the unit received the tools. At all units we visited, either inventory records were inaccurate or no records were available that could be used to identify and track hand tool purchases and related receipts, issues, and on-hand quantities. 
The only records the Army units we visited at Fort Bragg could provide were (1) copies of a register showing a list of all items purchased by the units, including hand tools, and (2) hand receipts showing the authorized and on-hand quantities of tools in tool rooms, trailers, and boxes that were assigned to specific individuals in the units. No inventory records were available to show tool receipts and issues or the disposition of the purchases. The Air Force units we visited at Langley Air Force Base did not have records showing receipts and issues. They only had computer-generated lists of the current inventory of tools in each tool box or tool room drawer. Some of these lists were not dated and did not accurately reflect the total number of tool boxes on hand. For example, one unit’s undated documentation stated that six avionics tool boxes with 65 tools in each box were on hand. However, our physical inspection disclosed that 10 tool boxes were on hand. The Marine Corps units we visited at Camp Lejeune did not have inventory records showing tool purchases and related receipts and issues. Both units had stock lists of tools in each tool box. One unit also had hand receipts for spare tools in the tool room, and the other unit had inventory cards for the spare tools. One Navy unit at Oceana Naval Air Station had established an automated system for monitoring on-hand quantities of tools in its tool room and tool kits. However, this system did not reflect tool receipts and issues. The other Oceana unit had set up a manual inventory record system about 6 months prior to our visit to get better control over purchases, receipts, issues, and inventory levels for the tool room. However, we found that the records were not accurate. For example, some tool purchases were not entered on the inventory record cards before they were issued to users. In June 1994, the squadron commander revised the unit’s procedures to tighten the controls over tool purchases and inventories. We made several physical counts at each operating unit we visited to test the accuracy of the records that were available. We found inaccuracies at each unit, with discrepancy rates of up to 68 percent. In total, the records for 99 of 515 tools in the tool rooms (19 percent) were inaccurate, and the records for 173 of 2,700 tools in the tool boxes (6 percent) were inaccurate. For example, the inventory records at one unit indicated that nine diagonal cut pliers were on hand, but our physical count showed that six pliers actually were on hand. We requested the results of physical inventories by the military services and found that they often were not documented. Personnel in the Air Force and Navy units and one of the Marine Corps units stated that they conducted physical inventories but did not maintain documentation of the results of these inventories. Personnel in the Army units and one of the Marine Corps units told us that they conducted the required periodic physical inventories and that the results were reflected on hand receipts. We reviewed the documents and noted that some missing tools had been identified. DOD has provided only limited oversight to determine how effectively installations and operating units control tool purchases and inventories since the last comprehensive DOD Inspector General review of this area was performed over 10 years ago. This review identified a need for better procedures and controls. 
No comprehensive reviews have been made since that time, and audit efforts have been limited to local reviews at individual installations. During recent years, the Army and Navy audit agencies have done only one or two local audits while the Air Force audit agency has performed 35 local audits since 1989. The audits identified problems with the controls over hand tools. Routine inspections and surveys by command-level management and inspector general staff also generally do not include an evaluation of tool procedures and controls. For example, an inspector general representative of the Air Force's Air Combat Command told us that the inspector general's policy is not to perform compliance-type inspections and reviews and that the staff did not have any knowledge of the adequacy of hand tool controls. We did find that the Marine Corps' Field Supply and Maintenance Analysis Office performs periodic inspections at units, which include tool controls. The inspections disclosed deficiencies in these controls during the past 3 years relating to the lack of inventory records, absence of physical inventories, and accumulation of excess tools. We recommend that the Secretary of Defense take the following actions to ensure that hand tool purchases and inventories are adequately controlled:

- Require that the military services and major commands provide guidance to installations and operating units specifying the needed internal controls over hand tools. These controls should include requirements for prior authorization of tool purchases and maintenance of accurate inventory records that reflect tool receipts, issues, and quantities on hand.
- Require that inspector general and internal audit staffs incorporate controls over hand tools into the periodic inspections that are performed at installations and operating units.

We are not recommending that DOD and the military services obtain and report overall cost information on tool purchases, inventory levels, and losses. If military units put adequate internal controls in place, including accurate inventory records, such information should be readily available at the installation and unit levels. DOD agreed that, to varying degrees, the military services' policies and procedures governing the purchase and accountability of hand tools are inadequate (see app. II). DOD also agreed that internal controls should be reviewed by the services and strengthened as necessary. By March 31, 1995, DOD expects to issue a memorandum to the military services directing that closer scrutiny be paid to hand tool accountability and that regulations, policies, and procedures governing hand tool purchases be strengthened. The memorandum also will direct that each military service secretary advise its inspector general and internal audit staffs to incorporate control of hand tools in periodic inspections at installations and operating units. Although generally agreeing with our report, DOD did question some aspects. DOD believes that our findings were insufficient to indicate a systemic problem with inadequate inventory accountability and record-keeping. DOD also believes that our findings reflect a problem with implementation of existing guidance and that no additional guidance is needed for existing hand tool inventories. Because our review was limited to four installations and eight operating units, we cannot state unequivocally that our findings indicate a systemic problem.
However, we did identify problems at each location visited, which would indicate to us that similar problems may exist elsewhere. With regard to the adequacy of guidance, we continue to believe that additional guidance is needed. Personnel at the units visited consistently stated that one of the reasons for the problems we noted was the lack of guidance specifying the internal controls needed for receipts, issues, and inventories at the operating units. Furthermore, individual services, such as the Air Force, acknowledge the need for better guidance. We are sending copies of this report to the Chairmen and Ranking Minority Members, House and Senate Committees on Appropriations, Senate Committees on Armed Services and on Governmental Affairs, and House Committee on Government Reform and Oversight; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. Please contact me at (202) 512-5140 if you have any questions. The major contributors to this report are listed in appendix III. We reviewed the Department of Defense's (DOD) and the military services' policies and procedures for controlling hand tools. We discussed program operations, guidance, and oversight with officials at the headquarters of DOD and each of the military services and obtained overall program data when available. We also visited the General Services Administration to discuss its functions as federal manager for hand tools and to obtain available information on tool sales to DOD. We visited one installation in each of the military services—Langley Air Force Base, Virginia; Oceana Naval Air Station, Virginia Beach, Virginia; Fort Bragg, Fayetteville, North Carolina; and Camp Lejeune Marine Corps Base, Jacksonville, North Carolina—to review controls over hand tools. At each installation, we (1) requested overall information on hand tool purchases, inventories, and losses and (2) visited two operating units to evaluate internal controls over hand tools. We visited the following units at each installation:

Langley Air Force Base: 94th Fighter Squadron, 1st Fighter Wing; 72nd Helicopter Squadron, 1st Fighter Wing.
Oceana Naval Air Station: Aircraft Intermediate Maintenance Department; Fighter Squadron VF-41, Fighter Wing, U.S. Atlantic Fleet.
Fort Bragg: 2nd Battalion, 504th Parachute Infantry Regiment, 82nd Airborne Division; 546th Transportation Company, 189th Maintenance Battalion, 1st Corps Support Command.
Camp Lejeune Marine Corps Base: 1st Battalion, 10th Artillery Regiment, 2nd Marine Division; 464th Helicopter Squadron, 29th Marine Air Group, 2nd Marine Aircraft Wing.

At each unit, we (1) discussed internal controls with unit personnel, (2) reviewed available documents related to tool purchases and inventories, and (3) made physical counts to test the accuracy of inventory records. We also contacted the major command responsible for each installation visited and obtained overall information related to hand tools. As part of our evaluation of management oversight, we contacted inspector general offices, military audit services, and investigative organizations to discuss their oversight of hand tool controls and review audit, inspection, and investigative reports. We performed our review between February and October 1994 in accordance with generally accepted government auditing standards. Major contributors: Larry Peacock, Evaluator-in-Charge; Linda Koetter, Evaluator; Dawn Godfrey, Evaluator.
Pursuant to a congressional request, GAO reviewed the military services' controls over hand tools, focusing on: (1) whether the services' policies and procedures for preventing the loss or unnecessary purchase of hand tools are adequate; (2) whether information is available on the costs associated with missing, lost, and stolen hand tools; and (3) the extent to which military installations control hand tool inventories and review their hand tool controls. GAO found that: (1) it cannot determine the extent to which DOD purchases unnecessary hand tools because DOD does not maintain cost data or track tool purchases and inventory levels; (2) DOD and the military services have not provided sufficient guidance and oversight to ensure that hand tools are adequately safeguarded and controlled at military installations; (3) military units do not have adequate internal controls or records to properly account for tool purchases and manage inventories; (4) many military units permitted personnel to purchase tools without prior authorization, could not identify tool purchases and trace them to inventory records, and had discrepancies between available inventory records and actual tool quantities; (5) Air Force operating units purchased many new warranted tools that were unnecessary and made local purchases even though the tools were available through normal DOD supply channels; and (6) in response to a GAO recommendation, the Air Force advised its major commands to comply with established acquisition policies and procedures.
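As an illustration of the control described above, a minimal sketch with hypothetical tool data shows how a record of receipts and issues yields an expected on-hand quantity that can be tested against a physical count. The record structure, stock number, and quantities below are invented for illustration; they are not drawn from any unit visited.

```python
from dataclasses import dataclass

@dataclass
class ToolRecord:
    """Inventory record for one tool line item; receipts and issues drive the balance."""
    stock_number: str    # hypothetical stock number
    description: str
    receipts: int = 0    # total quantity received
    issues: int = 0      # total quantity issued to users

    @property
    def on_hand(self) -> int:
        # Quantity the record says should be present.
        return self.receipts - self.issues

def check_against_count(record: ToolRecord, physical_count: int) -> str:
    """Compare the recorded on-hand quantity with a physical count and flag any discrepancy."""
    diff = physical_count - record.on_hand
    if diff == 0:
        return f"{record.description}: record matches count ({physical_count})"
    kind = "overage" if diff > 0 else "shortage"
    return (f"{record.description}: {kind} of {abs(diff)} "
            f"(record shows {record.on_hand}, counted {physical_count})")

# Hypothetical unit record: 40 wrenches received, 15 issued, so 25 should be on hand.
wrench = ToolRecord("5120-00-000-0000", "Combination wrench", receipts=40, issues=15)
print(check_against_count(wrench, physical_count=21))  # reports a shortage of 4
```

Without such a record, as at many of the units reviewed, there is no expected quantity to compare a count against, and losses or unnecessary purchases cannot be detected.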
The United States, along with its coalition partners and various international organizations and donors, has embarked on a significant effort to rebuild Iraq. As of October 2006, the United States had obligated about $29 billion for reconstruction and stabilization efforts in Iraq. The United States has relied heavily on private sector contractors to provide the goods and services needed to support the reconstruction efforts in Iraq. Congress has appropriated substantial amounts to support rebuilding efforts such as restoring Iraq’s oil and electric infrastructures, assisting in developing a market-based economy, and improving the country’s health, education, and medical services. With regard to Iraq’s oil sector, U.S. support has included efforts to (1) restore Iraq’s oil infrastructure to sustainable prewar crude oil production and export capacity, and (2) deliver and distribute refined fuels for domestic consumption. Specific U.S. activities and projects for the restoration of Iraq’s oil production and export capacity include repairing the Al-Fathah oil pipeline crossing, restoring several gas and/or oil separation plants near Kirkuk and Basrah, and repairing natural gas and liquefied petroleum gas plant facilities in southern Iraq. U.S. activities also include the restoration of wells, pump stations, compressor stations, export terminals, and refineries, and providing electrical power to many of these oil facilities. In addition to infrastructure restoration activities, from late May 2003 through August 2004, the United States facilitated and oversaw the purchase, delivery, and distribution of refined fuels throughout Iraq, primarily funded using the Development Fund for Iraq (DFI). These imports—used for cooking, heating, personal transportation, and private power generation—were required to supplement domestic production due to increased demand and Iraq’s limited refining capacity. In early 2003, DOD assigned the Corps responsibility for the oil restoration activities known as Restore Iraqi Oil. In March 2003, the Corps awarded a cost-plus-award-fee contract, referred to as the RIO I contract, to support the oil restoration mission. Under this contract, the Corps awarded 10 task orders to the contractor worth a total of $2.5 billion. Two task orders were for oil restoration planning and extinguishing oil fires; two were for the construction and repair of the oil infrastructure; one was for life support activities, such as lodging and dining services; and five were for the importation, delivery, and distribution of refined fuels throughout Iraq. At the request of the Corps, DCAA audited the contractor’s proposals for the RIO I contract. DCAA performs many types of audits for DOD, including audits of contractor proposals, audits of estimating and accounting systems, and incurred cost audits. Generally, the results of DCAA audits of contractor proposals are intended to assist contracting officials in negotiating reasonable contract prices. Typically, DCAA audits contractors’ proposals and provides contracting officials advice on the reasonableness of contractor costs prior to negotiations. DCAA also conducts audits of cost-type contracts after they are negotiated to ensure costs incurred on these contracts are acceptable. Relying on cost information provided by the contractor and assessing whether the costs comply with government regulations, DCAA may identify certain costs as questioned.
DCAA defines questioned costs as costs considered to be not acceptable for negotiating a fair and reasonable contract price. DCAA reports its findings to contracting officers for consideration in negotiating fair and reasonable contract prices. DCAA audit reports represent one way DCAA can assist contracting officials as they negotiate government contracts. Also, contracting officials may invite DCAA to participate in contract negotiations to explain audit findings and recommendations. DCAA’s role is advisory, and the contracting officer is responsible for ensuring that the contractor’s proposed price is fair and reasonable. While DCAA audit recommendations are nonbinding, federal regulations specify that when significant audit recommendations are not adopted, the contracting officer should provide rationale that supports the negotiation result in the price negotiation documentation. In its final 11 audits of the 10 task orders, DCAA identified $221 million in questioned costs on the RIO I contract. In total, DCAA issued 22 proposal audits of the RIO I contract because DCAA audited multiple proposals for some of the task orders. The final 11 audits included one audit of each task order and an audit of a contractor claim on the life support task order. Nearly 80 percent of the questioned costs related to the costs paid for fuel and fuel delivery. For example, DCAA questioned $139 million of the costs the contractor paid for fuel and fuel transportation in Kuwait based on a comparison of the price paid by the contractor and the price paid by the Defense Energy Support Center (DESC) when it took over the mission for the contractor in April 2004. Figure 1 outlines the reasons for DCAA’s questioned costs on the RIO I contract. The RIO I contract provided for payment of a fixed fee of 2 percent of the negotiated estimated contract cost plus an award fee amount of up to 5 percent, based on the government’s evaluation of the contractor’s performance. Award fee contracts allow an agency to adjust the amount of fee paid based on contractor performance. The award fee is intended to motivate excellence in contractor performance, and can also serve as a tool to control program risk and cost. However, the monitoring and evaluation of contractor performance necessary under an award fee contract requires additional administrative effort and cost, and federal regulations provide that the use of such a contract is suitable when the expected benefits of an award fee contract are sufficient to warrant this additional effort and cost. In general, for award fee contracts, DOD personnel (usually members of an award fee evaluation board) conduct periodic evaluations of the contractor’s performance against specified criteria in an award fee plan and recommend the amount of fee to be paid. These evaluations are informed by input provided by government personnel who directly observe the contractor’s performance. Typically, award fee contracts emphasize multiple aspects of contractor performance, such as quality, timeliness, technical ingenuity, and cost-effective management. Because award fees are intended to motivate contractor performance in areas that are susceptible to judgmental and qualitative measurement and evaluation, these criteria and evaluations tend to be subjective. After receiving the recommendation of the award fee evaluation board, a fee-determining official makes the final decision on the amount of fee the contractor will receive. 
In certain cases the fee-determining official may also decide to move unearned award fee from one evaluation period to a subsequent evaluation period or periods, thus providing the contractor an additional opportunity to earn previously unearned fee—a practice called rollover. DOD considered DCAA’s audit findings and conducted additional analysis before deciding to pay the RIO I contractor nearly all of the $221 million in costs that DCAA questioned, and to remove $112 million from the amount used to establish the contractor’s fixed and award fees. The reduction in the amount used to establish the fee pool resulted in an effective reduction of the contractor’s fee by about $5.8 million. DOD’s decision to pay most questioned costs was shaped by the fact that negotiations did not begin until most of the work was complete and the costs had already been incurred. The delay in negotiations was influenced by factors such as changing requirements, funding challenges, and problems with the contractor’s business systems. DCAA considers $26 million of the costs questioned on the RIO I contract to be sustained, which DCAA defines as cost reductions directly attributable to its questioned cost findings. We compared the sustention rates on DCAA’s 11 RIO I contract audits to the sustention rates for 100 DCAA audits of other Iraq contract actions, and found that the sustention rates varied widely for both groups. To address the $221 million in costs questioned by DCAA, DOD collected additional information and conducted additional analysis. For example, after DCAA issued its final audits, DOD collected additional information related to the difference in costs paid by the contractor and those paid by DESC for fuel and fuel delivery from Kuwait, as well as price adjustments the contractor paid to the subcontractor for fuel from Turkey, the two largest reasons for questioned costs. The DOD contracting officer also convened a meeting with contractor representatives, DCAA officials, and other Corps officials to discuss the additional information. As a result of the additional information and analysis presented in the meeting, the DOD contracting officer asked DCAA to conduct financial analyses to quantify options—referred to as financial positions—that he could use in developing the government’s objectives for negotiations with the contractor. The financial positions differed from DCAA’s final audit reports in some areas, for example, reflecting a narrower gap between the costs paid by the contractor and the costs paid by DESC for the fuel and fuel delivery from Kuwait. DOD decided to address the $221 million in questioned costs in the following ways:

Pay both the costs and fees. The DOD contracting officer decided to pay the contractor costs and associated fees for nearly half of the costs questioned by DCAA. In general, these costs reflected the financial positions prepared for negotiations by DCAA after DOD collected additional information about some of the questioned costs. For example, although DCAA’s final audits questioned the costs paid for fuel from Turkey, the financial positions did not include reductions for these costs. The contracting officer used the financial positions as a basis for deciding to pay the contractor for the costs for fuel from Turkey.

Not pay the contractor costs or fees. For less than $10 million of the questioned costs, DOD decided not to pay the contractor for its costs and the associated fees.
For example, the Corps decided not to reimburse approximately $4 million the contractor spent on leasing diesel trucks that were not used.

Pay the costs but not the fees. For almost half of the questioned costs, DOD decided to pay the contractor but removed those costs from the amount used to calculate the contractor’s fee. These costs were composed primarily of the difference that remained between the prices paid by the contractor and by DESC for fuel and fuel delivery from Kuwait after the contracting officer took into account the financial positions.

When asked about the reason for paying for questioned costs but removing those same costs from the amount used to establish the contractor’s fee, the DOD contracting officer told us that this outcome was a result of negotiations. He stated that while the contractor probably did not do everything it could have to lower prices, it took reasonable actions to do so. For example, Corps officials stated that the contractor attempted to obtain lower prices for the fuel and fuel delivery from Kuwait through competition on several occasions. Also, the officials told us that DOD decided to pay for these questioned costs because it felt that it would have been unlikely to prevail in an attempt not to pay costs that had already been incurred by the contractor. Specifically, Corps officials told us they believed that in the event of litigation, they would have been ordered to pay the contractor for incurred costs because, for example, the Corps continually directed the contractor to perform work under the contract. However, these officials told us they believed there was adequate justification to negotiate the exclusion of some questioned costs from fee eligibility. The DOD contracting officer also believed there were several limitations to the primary reason for DCAA’s questioned costs—the comparison of the price paid by the contractor for fuel and fuel delivery from Kuwait to the price paid by DESC, which took over the fuel importation mission in 2004. Specifically, the contracting officer attributed the contractor’s higher price to factors such as the Kuwaiti subcontractor’s perception of the risk of working in Iraq, short-term subcontracts for the fuel and fuel delivery because of the incremental funding provided, and differences in overhead costs. DESC officials also told us there were several factors that limited the usefulness of the comparison between the prices paid by DESC and the prices paid by the contractor for fuel and fuel delivery from Kuwait, such as the fact that DESC could commit to longer contracts with the Kuwaiti subcontractor and the fact that by contracting with the same subcontractor, DESC could use the fuel transportation infrastructure established under the prior contract (i.e., the start-up costs faced by DESC were lower). In total, $112 million of the questioned costs were removed from the amount used to establish the contractor’s fee pool. The contractor’s fixed and award fees were calculated as a percentage of the costs included in the fee pool. Consequently, removing $112 million from the amount used to establish the fee pool resulted in an effective lowering of the fees the contractor received by about $5.8 million (see table 1 for details). DCAA officials said they believed the DOD contracting officer followed the standard process for addressing questioned costs.
For example, the Director of DCAA testified before Congress that the process worked as it is defined, and that in making its decision of how to address the costs, the Corps “rightly considered other evidence other than the audit reports and considered extenuating circumstances that might have affected the contractor’s actions.” When asked if he was satisfied with the resolution of the questioned costs, a DCAA official involved in the process told us he thought the DOD contracting officer did the best job he could, given the circumstances. All 10 RIO I task orders were negotiated more than 180 days after the work commenced, and all were negotiated after the work had been completed. The RIO I task orders were considered undefinitized contracting actions because DOD and the contractor had not reached agreement on the terms, specifications, and price of the task orders before performance began. Undefinitized contract actions are used when government interests demand that the contractor be given a binding commitment so that work can begin immediately, and negotiating a definitive contract is not possible in time to meet the requirement. DOD requires that contract actions be definitized within 180 days after issuance of the action or when the amount of funds obligated under the action is over 50 percent of the not-to-exceed price, whichever occurs first. The head of an agency may waive these limitations in certain circumstances that likely would have applied for this contract, including for a contingency operation, but Corps officials told us that waivers were not requested for these task orders. Figure 2 shows the time it took for DOD and the contractor to reach agreement on the terms and conditions for the task orders. Because of the delays in negotiations, virtually all of the costs had been incurred by the contractor at the time of negotiations. The contracting officer determined that the questioned costs he decided to pay were reasonable and in accordance with the FAR, and his decision to pay nearly all of the questioned costs was influenced by (1) the fact that nearly all of the costs had been incurred at the time of negotiations and (2) his belief that payment of incurred costs was required, absent unusual circumstances. The contracting officer stated in final negotiation documentation that unusual circumstances did not exist for most of the questioned costs. For example, the DOD contracting officer indicated that because DCAA chose not to suspend or disallow the funds, which DCAA can do by issuing a Form 1, unusual circumstances did not exist. Several factors contributed to the delay in negotiations, including DOD’s changing requirements, DOD’s funding challenges, and inadequacies in several of the contractor’s business systems. Based on contract documentation as well as interviews with DOD officials and contractor representatives, these factors made it difficult for the contractor to submit proposals in a timely fashion. Without a qualifying contractor proposal, the government and the contractor are not able to reach agreement on the terms and conditions of a task order. For many of the task orders, the contractor did not submit qualifying proposals until late in the period of performance or after the work had been completed. For example, for 6 of the 10 task orders, the contractor did not submit a qualifying proposal that was audited by DCAA until after the period of performance was complete.
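The definitization rule described above has two triggers, whichever occurs first. A minimal sketch of that logic follows; the function name, dates, and amounts are illustrative, not drawn from the RIO I contract files:

```python
from datetime import date, timedelta

def definitization_overdue(issued: date, obligated: float,
                           not_to_exceed: float, as_of: date) -> bool:
    """Apply DOD's rule for undefinitized contract actions: definitization is
    required 180 days after issuance of the action, or once obligations exceed
    50 percent of the not-to-exceed price, whichever occurs first. (The head
    of an agency may waive these limits in certain circumstances, such as a
    contingency operation.)"""
    past_deadline = as_of >= issued + timedelta(days=180)
    over_half_obligated = obligated > 0.5 * not_to_exceed
    return past_deadline or over_half_obligated

# Hypothetical task order with a $24 million not-to-exceed price:
issued = date(2003, 3, 24)
print(definitization_overdue(issued, obligated=15e6, not_to_exceed=24e6,
                             as_of=date(2003, 6, 1)))   # True: over 50 percent obligated
print(definitization_overdue(issued, obligated=5e6, not_to_exceed=24e6,
                             as_of=date(2003, 10, 1)))  # True: more than 180 days elapsed
```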
Corps officials told us that changing requirements made it difficult for the contractor to submit a proposal. In particular, the requirements for the fuel mission were not well defined and changed over time, particularly in terms of the quantity of fuel needed and the period of performance for the work. According to Corps officials, the fuel mission was initially envisioned as a 21-day requirement, but ultimately extended into many months. The extension of the requirements is reflected in modifications to task order 5, the initial fuel mission task order, where the period of performance was extended. Additionally, numerous pieces of correspondence between DOD officials demonstrate the uncertainty as to how much fuel was required and the time frame during which fuel importation would be needed. For example, one piece of correspondence indicates that as of April 21, 2003, there was no immediate need for the importation of fuel products because Iraq was able to provide sufficient refined products to satisfy the domestic need, and one DOD official considered it unlikely that the need would arise. Less than 2 weeks later, on May 2, 2003, DOD correspondence indicates that fuel shortages were anticipated, and DOD officials began preparations to execute the fuel importation mission. At that time, officials anticipated the need for 10- to 30-day supplies of fuel, not a mission that would expand into many months. In addition, the statements of work for the fuel mission did not outline the quantities needed to fulfill the mission. The quantities of fuel required changed numerous times. For example, between July 16, 2003, and August 3, 2003, the Corps issued four separate letters to the contractor, each one increasing the quantities of fuel required to fulfill the mission. Overall, through numerous modifications, the Corps increased the funding on task order 5 from $24 million to $871 million, more than 36 times the initial allocation. The Corps also experienced challenges in establishing and maintaining a consistent, reliable, and sufficient source of funding for the RIO I contract, which exacerbated the problem of fully defining the requirements. The RIO I task orders were funded using several sources, including the Army’s Operation and Maintenance Appropriation, Iraqi vested assets, and the Development Fund for Iraq. For the fuel mission, a high-level Corps official involved in the funding aspect of the contract told us that the Corps had a difficult time finding enough funding to support the mission, a fact that contributed to short-term requirements. For example, this official told us that the Corps received funding on a short-term basis rather than the longer-term funding it requested, which affected the quantity of fuel the Corps could direct the contractor to purchase. Additionally, to support the fuel mission when funding was tight, the Corps began using funds from the infrastructure repair and restoration task orders to fund the fuel mission task orders, resulting in the delay of work the Corps believed was critical to the repair of the oil infrastructure. DOD officials and contractor representatives also told us that the contractor’s business systems were not fully prepared to handle the growth in work the company experienced as a result of the war in Iraq, and this contributed to the delays in proposal submission. From 2002 to 2004, the contractor’s revenues grew from $5.7 billion to $11.9 billion.
Subsequent to the issuance of the RIO I contract, and after the war in Iraq began, DCAA identified deficiencies in several of the contractor’s business systems. For example, DCAA considered the contractor’s estimating system—a system important for proposal development—adequate prior to the issuance of the RIO I contract. However, subsequent to the issuance of the RIO I contract, DCAA issued an audit that found the contractor’s estimating system to be inadequate for providing verifiable, supportable, and documented cost estimates that are acceptable for negotiating a fair and reasonable price. We have shown through our previous work the link between delays in definitization and challenges with requirements, funding, and proposal submission. For example, in a review of 77 undefinitized contract actions issued by various DOD agencies, we found that contracting officers cited timeliness of a qualifying proposal, changing or complex requirements, and changes in funding availability as three of the top four reasons for delays in definitization. In a previous review of Iraq reconstruction contracts, agency officials told us that delays in reaching agreement on the terms and conditions of a contract resulted from the growth in requirements and from concerns over the adequacy of contractor proposals. Delays in definitization can increase the risk to the government because when contracts remain undefinitized, the government bears most of the risk. For example, in a prior review of how DOD addressed DCAA’s audit findings on 18 audits of Iraq contract actions, we found that DOD contracting officials were less likely to remove questioned costs from a contract proposal if the contractor had incurred these costs before reaching agreement on the work’s scope and price. In a previous review of Iraq reconstruction contracts, as well as a review of DOD’s logistics support contracts, we found that delays in definitizing contract actions can increase the risk to the government by reducing cost control incentives, particularly for cost-reimbursement type contracts like the RIO I contract. In total, DCAA considers $26 million of the costs questioned on the RIO I contract to be sustained. DCAA defines questioned costs sustained as the negotiated cost reductions directly attributable to questioned cost findings reported by the DCAA auditor. DCAA’s calculation of questioned costs sustained includes costs DOD decided not to pay to the contractor and other types of cost reductions. Specifically, the $26 million of questioned costs sustained includes (1) $9 million, composed primarily of costs DOD decided not to pay the contractor, along with some costs DOD decided to pay but moved from one task order to another because of improper allocation, and (2) $17 million in costs removed from the contractor’s final proposals but questioned by DCAA in prior audits of previous contractor proposals. For example, in an early version of a proposal for one of the fuel mission task orders, the contractor proposed demobilization costs that DCAA questioned. The contractor removed these costs in a subsequent proposal, and DCAA considered the removal of these costs attributable to its audit findings, and therefore counted that amount as sustained.
For purposes of calculating a sustention rate, which is a calculation of questioned costs sustained divided by questioned costs, in its internal management system DCAA increased its questioned costs on the final audits from $221 million to $237 million to reflect the $17 million sustained from prior audits. Table 2 shows the questioned costs sustained and the sustention rate for each of the audits of the RIO I contract. The sustention rates ranged from 0 to 20 percent for the fuel mission task orders, which represented a large portion of the questioned costs. As discussed earlier in the report, the DOD contracting officer collected additional information and conducted additional analysis to address some of these questioned costs. Additionally, he identified limitations to the comparison between the prices paid by the contractor and DESC used by DCAA to question some of the fuel costs, such as differences in the length of contract terms for purchase of fuel and fuel delivery from Kuwait, and referred to these limitations in his rationale for his decision on these costs. We compared the sustention rates for the 11 RIO I audit reports to the sustention rates for 100 DCAA audits of other Iraq contract actions, and found a similar pattern in the distribution and range of sustention rates for both groups. Specifically, as shown in figure 3, the sustention rates for both groups of audit reports varied widely. DCAA officials told us that it is not unusual to have a sustention rate of 0 percent or of 100 percent on an individual audit, and these were common values in the two groups we looked at. DCAA officials told us that they do not expect every questioned cost to be sustained, because reasonable people can disagree about how some of these costs should be resolved. Additionally, as discussed earlier, contracting officers may consider other information provided subsequent to DCAA’s issued audit as part of the process of resolving DCAA audit findings. DOD paid approximately $57 million in award fees, or 52 percent of the maximum possible award fee, for the RIO I contract. However, DOD missed potential opportunities to motivate contractor performance by not following steps outlined in its award fee plan to provide performance feedback to the contractor. Further, DOD was unable to provide sufficient documentation to enable us to fully evaluate its adherence to its award fee plan. In comparing the RIO I award fee to award fees earned on other selected Iraq reconstruction contracts, we found that the percentage of award fee earned on the RIO I contract fell within the lower range of fees earned on these other contracts. The overall award fee paid to the contractor on the RIO I contract totaled about $57 million, just over half of the maximum possible award fee. The contract provided for a fixed fee of 2 percent of the negotiated estimated contract cost and an award fee of up to an additional 5 percent that could be earned based on the government’s evaluation of the contractor’s performance in areas including technical and cost performance and business management. The possible 5 percent award fee was based on a negotiated estimated contract cost of about $2.2 billion, translating into a maximum award fee of about $109 million. The award fee decision states that, overall, while the quality of the contractor’s work was generally rated highly, the contractor did not do as well in the areas of adherence to schedule and business management. 
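The fee and sustention figures reported here are internally consistent, as a back-of-the-envelope check using only the rounded numbers in this report shows. The results are approximate, and the blended fee rate on the costs removed from the fee pool is implied by the reported figures rather than stated in them:

```python
# Rounded figures from this report (in dollars).
sustained = 26e6             # questioned costs DCAA considers sustained
questioned_adjusted = 237e6  # final-audit questioned costs ($221 million) adjusted
                             # for the $17 million sustained from prior audits

# Sustention rate: questioned costs sustained divided by questioned costs.
print(f"overall sustention rate: {sustained / questioned_adjusted:.0%}")  # about 11%

# Award fee: up to 5 percent of the roughly $2.2 billion negotiated estimated cost.
max_award_fee = 0.05 * 2.2e9
print(f"maximum award fee: about ${max_award_fee / 1e6:.0f} million")     # report: about $109 million
print(f"share of award fee earned: {57e6 / 109e6:.0%}")                   # about 52%

# Removing $112 million from the fee pool reduced fees by about $5.8 million,
# implying a blended fixed-plus-award fee rate on those costs of roughly
# 5 percent (the fixed fee alone was 2 percent of cost).
print(f"implied blended fee rate: {5.8e6 / 112e6:.1%}")                   # about 5.2%
```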
As shown in table 3, the award fee varied by task order, ranging from 4 percent to 72 percent of the possible award fee. DOD’s award fee plan for the RIO I contract included several steps related to providing the contractor with ongoing performance feedback. For example, the plan called for award fee evaluations to be conducted on a regular basis during the period of performance. These evaluations were to include a meeting of the award fee board to determine a recommended award fee for the contractor and a final decision by the award fee determining official. After each award fee evaluation, the contractor was to be notified of the percentage and amount of award fee earned. In addition to these formal award fee evaluations, the plan also called for monthly interim evaluations to be conducted in which award fee board members would consider performance evaluation reports submitted by DOD staff designated as performance monitors, reach an interim evaluation decision, and then notify the contractor of the strengths and weaknesses for the evaluation period. However, despite its plans to conduct formal award fee evaluations during the period of performance, DOD did not convene an award fee board for the RIO I contract until contract performance was almost entirely completed. DOD officials told us that they were unable to hold boards due to the heavy workload of RIO staff and logistical challenges such as difficulties with communications, travel, and security conditions. DOD officials and contractor representatives also indicated that holding an award fee board was not a high priority because their focus was on making sure that the work under the contract was accomplished. Ultimately only one award fee board was held, in July 2004, after fieldwork on all but one task order had been completed. The contractor was notified of its award fee scores in January 2005, after completion of all work on the contract. This process was in contrast to the rationale for award fee evaluations explained in federal regulations: Evaluation at stated intervals during performance, accompanied by partial payment of the fee generally corresponding to the evaluation periods, can induce the contractor to improve poor performance or to continue good performance. In addition to not holding formal evaluations as planned during the period of performance, DOD did not meet the rigor called for in the award fee plan when providing interim performance feedback to the contractor. DOD did provide some interim feedback to the contractor on its performance during the period of performance. For example, DOD officials and contractor representatives told us that DOD contracting staff and contractor staff had daily informal discussions about contractor performance. In addition, RIO administrative contracting officers sent the contractor letters on a semiannual basis that provided feedback on the contractor’s performance. However, as discussed previously, the award fee plan states that the award fee board should hold monthly interim evaluations of the contractor’s performance and provide the contractor with feedback from the evaluations. DOD officials were only able to provide us with information about one interim evaluation board, and the contractor was not provided with results from this evaluation. Contracting staff and others providing feedback to the contractor expressed to us views ranging from very negative to very positive on the contractor’s performance during the same time period. 
Thus, without feedback reflecting consensus judgment, the contractor may not have been fully aware of the government’s views on the strengths and weaknesses of its performance. Given that the award fee is intended to motivate excellence in contractor performance, providing the contractor with this type of feedback is an important step in achieving this aim. The lack of adherence to the award fee plan also made it difficult to ensure that all aspects of the contractor’s performance were considered in the final award fee decision. Performance monitors were supposed to complete reports monthly and at the end of each evaluation period in order to provide the award fee board with information about the contractor’s performance; at that pace, hundreds of reports should have been completed during the course of the contract. However, a DOD official told us that fewer than 10 performance monitor reports were ever provided to the award fee board. The board received so few reports because (1) written reports were not prepared on a regular basis, as required by the award fee plan, and (2) reports that were prepared were not submitted to the award fee board. Specifically, DOD officials and correspondence indicated that performance monitor reports did not begin to be completed until several months into the contract period of performance and even then were not completed on a monthly basis. In addition, DOD officials provided us with more than 25 reports that they told us had been completed but not provided to the award fee board members. DOD officials told us that board members were not provided with these documents because the Corps had received a large number of documents related to the RIO I contract from Iraq that had not been sorted through by the time the award fee board was held in July 2004. Because the DOD officials had not sorted through all of the documents, the award fee board was also not provided with full information about an interim evaluation board held in May 2003. Specifically, award fee board members were provided with only one task order score from the interim evaluation board, despite the fact that documentation of consensus scores and contractor strengths and weaknesses was prepared for four task orders. DOD officials responsible for selecting the award fee board members told us that they selected board members to ensure that they included individuals who had directly observed the contractor in different time periods and locations. However, because the award fee board meeting was near the end of fieldwork on the contract and because RIO staff rotated during the period of performance, written observations of contractor performance would have been important in ensuring that the board had full knowledge of all aspects of the contractor’s performance. We have previously reported on problems with DOD’s adherence to its award fee process in contingency situations. In our review of DOD’s use of logistics support contracts, for one large contract we found that the Army was not holding award fee boards according to the terms of the contract. We also found that Army officials were not evaluating and documenting the contractor’s performance on that contract. To evaluate the extent to which DOD followed its planned process for making the RIO I award fee decision, we attempted to review DOD’s adherence to the process outlined in its award fee plan, but were not able to fully do so because DOD could not provide us with documentation of some elements of the process.
For example, according to DOD officials and the award fee board minutes, the board determined its recommended score for each task order by first reaching a consensus on individual criteria outlined in the contract, and then computing the overall score based on the weighting included in the contract for those criteria. However, according to DOD officials, they could not provide us with the consensus scores on the individual criteria because records of those scores were destroyed after the final award fee decision was reached. Without these scores, we could not determine whether the award fee board adhered to the weighting of the criteria outlined in the contract in reaching its recommendation. We also had limited insight into any additional factors the award fee determining official considered in making his initial decision, which included upward adjustments to the award fee board’s recommendation, because DOD officials could not provide us with documentation of the reasons for the difference and told us they did not believe such documentation had ever been developed. This apparent lack of documentation was not in accordance with the award fee plan, which states that reasons for any differences between the award fee determining official’s decision and the award fee board’s recommendation must be fully documented. DOD officials also could not provide us with complete information regarding the monitoring of the contractor’s performance during the period of performance. For example, we could not obtain full documentation of interim boards referred to in the award fee board minutes, including documentation of the number of boards held, the dates of the boards, or the results from the boards. Without such information, we could not determine how results from interim evaluations were figured into the award fee board’s recommendation, as the award fee plan indicates they should be. To put the RIO I award fee into context, we also analyzed the award fees earned on other selected Iraq reconstruction contracts and found that the percentage of award fee earned on the RIO I contract was within the range of award fees earned on these other contracts. More specifically, we reviewed 11 contracts that DOD awarded in 2004 to conduct reconstruction activities in Iraq, which, like the RIO I contract, were large-scale cost-plus-award-fee contracts. During the period we looked at, January 2004 through June 2006, a total of 37 award fee evaluation periods were conducted for the 11 contracts. As illustrated in figure 4, the percentage of award fee earned during the period varied by contract, ranging from 20 percent to nearly 100 percent. To meet the urgent operational needs of reestablishing Iraq’s oil infrastructure and importing fuel, the Corps authorized the contractor to begin work before task orders had been definitized. Factors such as changing requirements, funding challenges, and problems with contractor proposals delayed negotiations until well past the timing required by DOD for definitization. For all 10 RIO I task orders, the work was completed before negotiations were finalized. Delays in definitizing contract actions can increase the risk to the government by reducing cost control incentives, particularly for cost-reimbursement type contracts.
In addition, our findings on the agreement reached between DOD and the contractor on the RIO I contract build on other significant evidence in our prior work that the value of DCAA’s audits of contractor proposals is limited when negotiations take place too long after work has begun. Award fees can serve as a valuable tool to help control program risk and encourage excellence in contract performance. To reap the advantages that cost-plus-award-fee contracts offer, the government must implement an effective award fee process, which requires additional administrative effort and cost to monitor and evaluate performance. The FAR requires that the expected benefits of using a cost-plus-award-fee contract be sufficient to warrant this additional effort and cost, but in the case of the RIO I contract, even if this condition had been met, DOD’s Army Corps of Engineers did not carry out its planned award fee process. According to DOD officials, efforts to hold award fee boards during the period of performance were stymied in part by the logistical conditions in Iraq. We have previously identified problems with DOD’s award fee process in contingency environments. Given that the award fee is intended to motivate excellence in contractor performance, providing the contractor with regular feedback that reflects the consensus of the government about its strengths and weaknesses is important to enable the contractor to put forth its best effort to excel in the areas deemed important to the government. While contingency situations may pose additional challenges for adhering to an award fee process, without an effective process, the government risks incurring the additional cost and administrative effort of an award fee contract without receiving the expected benefits. To ensure that cost-plus-award-fee contracts provide the intended benefits, we recommend that the Secretary of the Army take the following action: In contingency situations, as a part of weighing the costs and benefits of using a cost-plus-award-fee contract, ensure that an analysis of the administrative feasibility of following a rigorous award fee process is conducted before the contract is awarded. We provided a draft of this report to DOD for comment. In written comments, DOD concurred with our recommendation. The department’s comments are reproduced in appendix II. In concurring with the recommendation, DOD noted a number of factors that exist in this contingency operation that it believed demonstrated the difficulty of conducting an analysis of the administrative feasibility of using an award fee contract in future contingency situations. These factors included urgent contracting time frames, uncertain requirements, and difficulties in identifying appropriate oversight personnel. As specified in federal regulations, the use of an award fee contract is suitable when the expected benefits of such a contract are sufficient to warrant the additional effort and cost required to monitor and evaluate contractor performance. It is precisely factors such as those outlined by DOD that we believe are important for consideration when determining the administrative feasibility of a cost-plus-award-fee contract in a contingency environment. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Defense and other interested parties. We will make copies of this report available on request.
In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report were Marie Ahearn, Penny Berrier Augustine, Greg Campbell, Arthur James Jr., Eric Lesonsky, Stephen Lord, Anne McDonough-Hughes, Janet McKelvey, and Kenneth Patton. To determine how the Department of Defense (DOD) addressed the Defense Contract Audit Agency’s (DCAA) audit findings on the Restore Iraqi Oil (RIO I) contract and the factors that contributed to DOD’s decision of how to address those findings, we reviewed negotiation memorandums and 22 DCAA audit reports, including 11 final audit reports, for the 10 RIO I task orders. Additionally, we reviewed other documents related to the negotiation process and resolution of DCAA’s findings. We also interviewed Corps, DCAA, and other government officials as well as contractor representatives. Because a contracting officer has the discretion to determine whether or not to pay questioned costs when reaching agreement with a contractor, our review does not include a determination of whether the DOD contracting officer should have approved payment for the questioned costs. Additionally, to put DOD’s decisions of how to address DCAA’s RIO I contract audit findings into context, we compared the resolution of DCAA’s questioned cost findings on 100 audits of other Iraq-related contract actions to the resolution of the questioned cost findings on the RIO I task order audits. We selected the 100 audits for comparison because they represented all audits of Iraq-related contract actions other than the RIO I contract for which DCAA had calculated the questioned costs sustained as of the end of fiscal year 2006, excluding those calculated automatically. To ensure we used a consistent unit of measurement, we used the audit report as the unit of analysis for comparison. To develop an understanding and assess the reliability of the information included in the database that contained the results for these 100 audits, we held discussions with and obtained documentation from DCAA officials located at Fort Belvoir and we conducted electronic and manual testing for obvious inconsistencies and completeness. We determined the data used in our review to be sufficiently reliable for our purposes. To determine the extent to which DOD paid award fees for the RIO I contract and followed its planned process for making that decision, we collected and reviewed key documents related to the award fee process, including the award fee provisions of the RIO I contract, the award fee determining official’s decision, the award fee plan, and minutes from the award fee board meeting. We also interviewed Corps officials, including the award fee determining official and members of the award fee board, to develop an understanding of the process and outcome for the award fees, and contractor representatives to obtain their perspective on award fees. Additionally, to put the award fee for the RIO I contract into context, we gathered and analyzed award fee documentation provided by the Joint Contracting Command-Iraq/Afghanistan for 11 contracts that DOD awarded in 2004 to conduct reconstruction activities in Iraq.
We selected these contracts because, like the RIO I contract, they were large-scale, cost-plus-award-fee contracts. During the period we looked at, January 2004 through June 2006, a total of 37 award fee evaluation periods were conducted for the 11 contracts. In 11 of the 37 award fee evaluation periods we analyzed, the award fee determining official chose to roll over the unearned award fee from one evaluation period to a subsequent evaluation period or periods. In these cases we excluded rolled-over fees from the available fee pool. Because an award fee determination is a unilateral decision made solely at the discretion of the government based upon judgmental evaluations of contractor performance, our review does not include an assessment of whether DOD reached the appropriate award fee decision for the RIO I contract. We conducted our work from October 2006 through July 2007 in accordance with generally accepted government auditing standards.
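Where fee was rolled over, the percentage of award fee earned was computed against an available fee pool that excludes the rolled-over amount, as described above. A minimal sketch of that adjustment, with hypothetical fee amounts (the function and figures are illustrative, not taken from DOD records):

```python
def percent_fee_earned(earned: float, available: float, rolled_in: float = 0.0) -> float:
    """Percentage of award fee earned in one evaluation period, excluding any
    fee rolled into the period from an earlier one, consistent with the
    treatment described in this report's methodology."""
    pool = available - rolled_in
    if pool <= 0:
        raise ValueError("available fee pool must exceed the rolled-over amount")
    return 100.0 * earned / pool

# Hypothetical period: $2.0 million available, of which $0.5 million was rolled
# over from a prior period; $1.2 million earned is measured against $1.5 million.
print(f"{percent_fee_earned(1.2e6, 2.0e6, rolled_in=0.5e6):.0f}%")  # 80%
```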
The Department of Defense's (DOD) U.S. Army Corps of Engineers (Corps) awarded the $2.5 billion Restore Iraqi Oil (RIO I) contract to Kellogg Brown & Root in March 2003 in an effort to reestablish Iraq's oil infrastructure. The contract was also used to ensure adequate fuel supplies inside Iraq. RIO I was a cost-plus-award-fee type contract that provided for payment of the contractor's costs, a fixed fee determined at inception of the contract, and a potential award fee. The Defense Contract Audit Agency (DCAA) reviewed the 10 RIO I task orders and questioned $221 million in contractor costs. We were asked to determine (1) how DOD addressed DCAA's RIO I audit findings and what factors contributed to DOD's decision and (2) the extent to which DOD paid award fees for RIO I and followed the planned process for making that decision. To accomplish this, we reviewed DOD and DCAA documents related to RIO I and interviewed Corps, DCAA, and other officials. DOD considered DCAA's audit findings on the RIO I contract and performed additional analysis before deciding to pay the contractor nearly all of the $221 million in costs that DCAA questioned. DOD did, however, remove about $112 million of the questioned costs from the amount used to establish the contractor's fee pool, which resulted in an effective lowering of the fee received by the contractor by approximately $5.8 million. Lack of timely negotiations contributed significantly to DOD's decision on how to address the questioned costs--all 10 task orders were negotiated more than 180 days after the work commenced. As a result, the contractor had incurred almost all its costs at the time of negotiations, which influenced DOD's decision to pay nearly all of the questioned costs. The negotiation delays were in part caused by changing requirements, funding challenges, and inadequate contractor proposals. In our previous work, we have found that negotiation delays can increase risk to the government. Overall, DCAA considers $26 million of the costs questioned on the RIO I contract to be sustained, which DCAA defines as cost reductions attributable to its audit findings. We compared the sustention rates on DCAA's 11 RIO I contract audits to the sustention rates for 100 DCAA audits of other Iraq contract actions, and found that the sustention rates varied widely for both groups. DOD's Army Corps of Engineers paid $57 million in award fees on the RIO I contract, or 52 percent of the maximum possible, and on individual task orders the fee awarded ranged from 4 to 72 percent of the fee available. While the award fee plan required regular award fee boards during the life of the contract, DOD did not conduct a formal board until nearly all work on the contract was complete. As a result, DOD was not able to provide the contractor with formal award fee feedback while work was ongoing, which federal regulations state should be done in order to motivate a contractor to either improve poor performance or continue good performance. DOD officials told us the workload of RIO staff members and logistical difficulties stemming from the challenging conditions in Iraq hindered efforts to hold evaluation boards during the period of performance. 
DOD also was unable to give us enough documentation for a full assessment of its compliance with other parts of its plan--it did not, for example, provide the scores the award fee board assigned to the contractor on the individual award fee criteria, so we could not see if the award fee board had followed contract criteria and weighting in evaluating performance. We compared the percentage of award fees earned on the RIO I contract to the fees earned on a group of other selected Iraq reconstruction contracts and found that the percentage of award fees earned on RIO I fell within the lower range of fees earned on the other contracts.
DOE’s December 2008 contract with SRR to empty, clean, and close the Savannah River Site’s underground tanks is a cost-plus-award-fee contract. Under this type of contract, SRR’s costs to conduct cleanup work are reimbursed by DOE. Such costs include, among other things, workers’ salaries and fringe benefits such as employer-provided health insurance and defined-benefit pension plans. In addition, to encourage innovative, efficient, and effective performance, this type of contract gives SRR the opportunity to earn a monetary incentive known as an award fee. The amount of award fee SRR is able to earn is determined by its accomplishment of goals mutually agreed upon by the contractor and DOE. The contractor’s cost, schedule, performance, and scope commitments for successfully delivering the contract-defined requirements are specified in a document known as the contract performance baseline that is developed by the contractor and agreed to by DOE. Many site activities not related to tank closure—such as management of spent nuclear fuel and soil and groundwater cleanup—are conducted under a separate management and operations contract currently held by Savannah River Nuclear Solutions, LLC. DOE Order 413.3A establishes a process for managing the department’s major projects, including contractor-run projects that build large complexes housing unique equipment and technologies, such as facilities that process waste or other radioactive material, as well as environmental cleanup projects. The order covers activities from identification of need through project completion. Specifically, the order establishes five major milestones—or critical decision points—that span the life of a project. Order 413.3A specifies the requirements that must be met, along with the documentation necessary, to move a project past each milestone. In addition, the order requires that DOE senior management review the supporting documentation and approve the project at each milestone. DOE also provides suggested approaches for meeting the requirements contained in Order 413.3A through additional guidance. For years, DOE has had difficulty managing its contractor-run projects. Despite repeated recommendations from us and others to improve project management, DOE continues to struggle to keep its projects within their cost, scope, and schedule estimates. For example, we reported in September 2008 that 9 of 10 major cleanup projects managed by DOE’s Office of Environmental Management—which manages cleanup projects such as tank closure at the Savannah River Site—had experienced cost increases, and that DOE estimated it needed $25 billion to $42 billion more than the projects’ initial cost estimates to complete these projects. Because of DOE’s history of inadequate management and oversight of its contractors, we have included contract and project management in DOE’s National Nuclear Security Administration and Office of Environmental Management on our list of government programs at high risk for fraud, waste, abuse, and mismanagement since 1990. In response to its continued presence on our high-risk list, DOE analyzed the root causes of its contract and project management problems in 2007 and identified several major findings.
Specifically, DOE found that the department (1) often does not complete front-end planning to an appropriate level before establishing project performance baselines; (2) does not objectively identify, assess, communicate, and manage risks through all phases of project planning and execution; (3) fails to request and obtain full project funding; (4) does not ensure that its project management requirements are consistently followed; and (5) often awards contracts for projects prior to the development of an adequate independent government cost estimate. To address these issues and improve its project and contract management, DOE has prepared a corrective action plan with various corrective measures to track its progress. Among the measures being implemented are making greater use of third-party reviews prior to project approval, establishing objective and uniform methods of managing project risks, better aligning cost estimates with anticipated budgets, and establishing a federal independent government cost-estimating capability. Emptying, cleaning, and closing the 22 tanks without secondary containment involves a number of steps. The radioactive waste generally comes in a variety of physical forms and layers inside the tanks, depending on the physical and chemical properties of the waste components. The waste in the tanks takes the following three main forms, which are illustrated in figure 1:

Sludge. The denser, water-insoluble components of the waste generally settle to the bottom of the tank to form a thick layer known as sludge, which has the consistency of peanut butter. Although sludge is only 8 percent of the total volume of the tank waste at the Savannah River Site, it has about 49 percent of the tanks’ total radioactivity.

Saltcake. Above the sludge may be water-soluble components, such as sodium salts, that crystallize or solidify out of the waste solution to form a moist, sandlike material called saltcake.

Salt supernate. Above or between the denser layers may be liquids comprised of water and dissolved salts that are called supernate.

Most of the waste in a tank is removed by using pumps and high-pressure wash systems. Various methods are then used to immobilize the waste and prepare it for permanent disposal. In the case of sludge, the material is immobilized through vitrification—a process that stabilizes waste by mixing it with molten glass and then pouring it into large metal canisters where it hardens—at the Savannah River Site’s Defense Waste Processing Facility (DWPF), which has operated since March 1996. Canisters produced by DWPF are currently stored on site pending the availability of a geologic repository where they will be permanently disposed of. DOE’s original plans were to locate a permanent geologic repository for these canisters, as well as other nuclear waste generated across the United States, at Yucca Mountain in Nevada. The department had submitted a license application to the U.S. Nuclear Regulatory Commission for authorization to construct a repository at Yucca Mountain. However, DOE moved to withdraw the license application in March 2010 and has declared its intention not to proceed with the Yucca Mountain project. While the U.S. Nuclear Regulatory Commission’s Atomic Safety and Licensing Board denied DOE’s motion to withdraw the application in June 2010, the final permanent disposal location for vitrified high-level waste at the Savannah River Site, as well as hundreds of thousands of tons of additional radioactive waste across the country, remains in question.
In the case of the larger volumes of saltcake and salt supernate (known collectively as salt waste) that are stored at the Savannah River Site, glass vitrification of all of this waste without reducing its volume would produce a very large number of metal canisters that would need to be permanently disposed of. DOE is using several interim processes to separate higher-radioactivity waste from the remainder of the lower-activity waste and, consequently, reduce the number of canisters that will be generated and require disposal. One of these interim processes—the Actinide Removal Process/Modular Caustic-Side Solvent Extraction Unit (ARP/MCU)—began operations in May 2008 with a 3-year operational expectancy. DOE is also constructing permanent facilities at the Savannah River Site to separate the higher-activity waste from the remainder of the lower-activity waste. A key facility, the Salt Waste Processing Facility (SWPF), uses the same technology as ARP/MCU, but on a larger scale. SWPF is currently being constructed by Parsons Corporation under a separate contract with DOE. SWPF is estimated to cost more than $1.3 billion; however, we reported in January 2010 that DOE's cost estimate for the facility only somewhat or partially met the four characteristics of high-quality cost estimates—that they be accurate, comprehensive, credible, and well documented. DOE expects SWPF to begin separating higher- and lower-radioactivity waste sometime between fiscal year 2013 and the beginning of fiscal year 2016. Once separated, higher-radioactivity waste will then be mixed with sludge for vitrification at DWPF. The low-radioactivity waste that is currently separated out by ARP/MCU and is to be separated out by SWPF is stabilized by combining it with a grout-like substance at another Savannah River Site facility, called the Saltstone Facility, where it will be permanently disposed of in a series of on-site vaults. Removal and treatment of liquid radioactive waste from the tanks do not, however, complete the tank closure process. Any residual radioactive waste that pumping and high-pressure washing cannot remove from the tank surfaces must be mechanically scrubbed and may also be treated with chemicals for removal. This cleaning process generates additional radioactive waste that must also be removed from the tanks and eventually treated for permanent disposal. Even with chemical cleaning, it is impossible with current technology to remove 100 percent of the radioactive and hazardous waste from every tank; a small quantity of waste will remain in each tank. Following the removal of most of the waste and chemical cleaning, DOE must demonstrate that the department has cleaned each tank to the maximum extent practicable. DOE, EPA, and South Carolina must agree upon the concentration of wastes that are allowed to remain in the tanks and the criteria for permanently closing them. The current plan calls for DOE to permanently close the tanks by filling the now-substantially empty tanks with a cementlike substance to prevent their collapse and the release of any residual radioactive or hazardous material into the environment. Emptying, cleaning, and permanently closing the 22 underground liquid radioactive waste tanks at the Savannah River Site that lack secondary containment is likely to cost significantly more and take longer than estimated in the December 2008 contract between DOE and SRR. 
Specifically, SRR notified DOE in June 2009 that the total cost to close the 22 tanks had increased by slightly more than $1.4 billion, from $3.2 billion as estimated in the December 2008 contract to about $4.6 billion. In addition, closing the tanks may take longer than originally estimated because of persistent delays constructing SWPF—a facility vital to successful tank closure because it will treat a large portion of the waste removed from the tanks. Our review also found that the SWPF construction schedule does not fully meet GAO-identified best scheduling practices. Although DOE is exploring ways to mitigate the effects of SWPF construction delays by deploying new technologies to treat additional quantities of waste, DOE officials told us that additional research and development on these technologies is still required and that it would be several years before these new technologies could be deployed. One day before beginning work under the contract it signed with DOE in December 2008, SRR reported that the estimated cost to empty, clean, and permanently close the 22 tanks had increased by slightly more than $1.4 billion. The estimated cost increase was discovered during a due-diligence review SRR conducted during the transition period from the previous contractor that managed liquid high-level radioactive waste operations at the Savannah River Site. The purpose of this review was to identify, among other things, any physical site conditions that were different from those portrayed in DOE's September 2007 request for proposals—which formed the basis of SRR's proposal and the December 2008 contract—or that could give rise to other liabilities or noncompliance with the contract. In a June 30, 2009, letter—one day prior to the end of the contract transition period—SRR reported to DOE that its review had identified more than 300 differences in such conditions, 22 of which SRR considered to be material. Material differences are changes in conditions that will have a significant impact, positive or negative, on the performance of work in terms of time or cost, or on contract milestones, among other things. SRR's June 2009 letter stated that these 22 material differences would result in a contract cost increase from roughly $3.2 billion to about $4.6 billion—a 44 percent increase. Our review indicates that much of this increase occurred because the cost estimate in DOE's 2007 request for proposals that formed the basis of the December 2008 contract was not accurate or comprehensive. For example, DOE underestimated fringe benefit rates by 27 to 62 percent, depending upon an employee's job classification, and underestimated labor rates by 5 to 70 percent for certain job classifications. DOE's cost estimate was based on historical data that underestimated future costs. As a result, SRR reported that costs would increase by $279 million. DOE assumed in the September 2007 request for proposals that certain costs—including retiree health care and essential site services such as computer and telecommunications equipment and water service—would be paid under the Savannah River Site's management and operations contract rather than the SRR contract. Subsequently, DOE reversed its decision and instead assigned these costs to the SRR contract. 
Although this action resulted in no net increase in costs to the taxpayer because these costs will be subtracted from the Savannah River Site's management and operations contract, it resulted in a $270 million increase in the costs associated with the tank closure contract. In addition, DOE did not account for the more than $600 million in pension costs that were needed to make up for significant losses suffered by the Savannah River Site workers' defined-benefit pension plans as a result of the economic crisis that began in 2007. DOE contractors generally provide their employees with pension plans, health care benefit plans, and other postretirement benefits. DOE reimburses these contractors for the costs of providing pension and postretirement benefits to current and former employees and their beneficiaries and is ultimately responsible for reimbursing the allowable costs of these plans. DOE's September 2007 request for proposals estimated that funding the defined-benefit pension plans for current and retired Savannah River Site workers and their beneficiaries covered under the contract would cost $146 million. However, the economic crisis that began in 2007 caused significant losses to the assets in which the Savannah River Site workers' pension plans had invested. This, combined with other factors, caused DOE to face a significant shortfall between the amount of pension funding originally estimated in the 2007 request for proposals and what is now estimated to be required. Despite having 17 months between the start of the economic crisis and the signing of the contract with SRR, DOE did not update the September 2007 pension cost estimate because, according to DOE officials, the amount of the shortfall was still fluctuating. In its June 2009 letter to DOE, SRR estimated that pension costs had increased by more than $600 million, from $146 million to $762 million (the arithmetic is cross-checked in the brief sketch below). DOE's difficulty producing an accurate and comprehensive cost estimate to empty, clean, and permanently close the 22 tanks is consistent with the department's own findings in its April 2008 root cause analysis of its contract and project management problems. Similarly, we reported in January 2010 that DOE's inability to produce high-quality cost estimates limits the department's ability to effectively manage its projects and provide good estimates to Congress of the amount of money needed to complete projects, and we recommended that the department update its cost-estimating guidance to address these concerns. DOE took no action in response to SRR's June 2009 letter reporting the 22 material differences and $1.4 billion cost increase. Lacking a response from DOE, SRR prepared and, in September 2009, submitted a contract performance baseline to DOE that included these additional costs. SRR and DOE officials told us that DOE did not inform SRR of department guidance stating that a contractor should not be allowed to change estimated contract costs by simply including a higher cost in the contract performance baseline. As a result, SRR received no information on DOE's assessment of the cost increases proposed in June 2009 until DOE rejected SRR's September 2009 contract performance baseline in November 2009. DOE rejected the baseline for several reasons, including that the department needed additional cost and scheduling documentation to validate SRR's cost and schedule estimates. 
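The percentage figures cited in this discussion follow directly from the reported dollar amounts. The sketch below is a minimal cross-check of that arithmetic; the dollar figures come from this report, and the calculation is ours.

```python
# Cross-check of the cost-increase figures cited in this report
# (the dollar amounts are from the report; the arithmetic is ours).
def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

# Overall contract cost: $3.2 billion -> about $4.6 billion.
print(f"Contract cost: {pct_increase(3.2, 4.6):.0f}% increase")  # ~44%

# Pension costs: $146 million -> $762 million.
print(f"Pension costs: up ${762 - 146} million, "
      f"a {pct_increase(146, 762):.0f}% increase")  # +$616 million
```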
Since rejecting SRR’s contract performance baseline in November 2009, DOE and SRR officials have discussed SRR’s proposed cost increases as part of revising the contract performance baseline. Despite about 7 months of discussions, the revised contract performance baseline SRR submitted on June 30, 2010, contained a cost increase of slightly less than $1.4 billion—only $50 million less than the June 2009 cost increase. Therefore, the current estimated cost to close the 22 Savannah River Site tanks is about $4.6 billion—a 44 percent increase from the roughly $3.2 billion in the December 2008 contract. DOE approved SRR’s proposed cost increases and its revised contract performance baseline in August 2010, more than a year after SRR first identified proposed cost increases. DOE’s primary guidance on contract performance baseline development contains limited information detailing the process and time frames by which baselines are to be reviewed and approved. As such, there is no DOE-wide guidance that establishes milestones for reviewing and approving contract performance baselines. Oversight of contractor performance may also be complicated because DOE has exempted many tank closure activities at the Savannah River Site—as well as many other ongoing environmental cleanup projects— from the full requirements of DOE Order 413.3A. In general, Order 413.3A applies to capital asset acquisition projects, including environmental cleanup projects, having a total cost of $20 million or more. Accordingly, DOE’s contract with SRR originally required that the project be managed in accordance with Order 413.3A. In addition, when DOE rejected SRR’s initial contract performance baseline, the department found multiple instances in which SRR had not fully satisfied project management provisions contained in Order 413.3A. However, following the completion of the contract with SRR, DOE’s Office of Environmental Management evaluated the scope of its contracts to determine how much of the activity actually constituted capital asset acquisition activity. As a result of this evaluation, DOE determined that some of the activities covered by the contract with SRR included both capital asset projects and operating activities. DOE exempted these operating activities from Order 413.3A. DOE officials explained that Order 413.3A is more focused on managing the process by which the department constructs new facilities rather than the process by which it operates existing facilities, such as to complete environmental cleanup efforts. While DOE issued a contract modification removing references to Order 413.3A for exempted activities, the modification does not specify which DOE project management policies, if any, apply to the exempted SRR activities. We have previously reported that it is critically important that DOE develop and implement a rigorous, disciplined approach for managing projects, because major cleanup projects, such as tank closure activities at the Savannah River Site, take years to complete, and often involve unique challenges and a high degree of complexity. Such an approach includes planning and managing work activities, cost, and schedule to achieve project goals in a stable, controlled manner. Because salt waste makes up more than 90 percent of the volume of liquid radioactive waste at the Savannah River Site, successful construction of the SWPF is vital to DOE’s efforts to empty, clean, and permanently close the site’s underground tanks. 
However, SWPF has experienced multiple delays since design of the facility began in 2004. Originally expected to begin operating in 2009, the facility has seen its startup date repeatedly delayed. At the time the contract between DOE and SRR was signed in December 2008, SWPF was expected to begin operations in September 2012. However, DOE subsequently delayed SWPF's expected startup date to May 2013 at the earliest. DOE also added more than 2 years of contingency time to the SWPF construction schedule, meaning that SWPF operations may start as late as October 2015. If SWPF starts up in May 2013, SRR estimated that 2 fewer tanks could be closed by the end of the contract in 2017 than originally estimated. If SWPF does not begin operations until 2015, SRR estimated that a total of 7 fewer tanks would be closed by the contract's end in 2017 than originally called for—that is, 15 of the 22 underground tanks originally agreed to in the contract. In addition, on-time completion of the SWPF may be in question because the facility's construction schedule does not fully meet GAO-identified best scheduling practices. Using industry-standard scheduling practices, we previously identified nine key practices necessary for developing a reliable schedule. These practices are (1) capturing key activities, (2) sequencing key activities, (3) assigning resources to key activities, (4) establishing the duration of key activities, (5) integrating key activities horizontally and vertically, (6) establishing the critical path for key activities, (7) identifying float time—the time that activities can slip before the delay affects the completion date, (8) performing a risk analysis of the schedule, and (9) updating the schedule using logic and durations to determine dates. We initially assessed SWPF's construction schedule in March 2010 and found that it did not fully adhere to these best practices. We discussed these findings with DOE and Parsons officials, and DOE made changes to the schedule. Subsequently, we reassessed the schedule in May 2010 and found that SWPF project officials had taken steps to address some of the problems identified in our initial review but that the schedule still had some shortcomings. Specifically, both of our assessments found that the schedule had problems with excess float time between activities. Float that exceeds a year is unrealistic and should be minimized because excess float usually indicates that the schedule's activities are not sequenced logically, which reduces confidence that the schedule will be able to meet its completion date (a simplified example of how float is computed follows this discussion). Our March 2010 review found that the schedule had 272 activities with more than 500 days of float time and that two construction activities involving fabrication of piping—usually critical in construction projects—had more than 1,000 days of float time. Our May 2010 assessment found that this problem had become worse, with 433 activities having more than 500 days of float time—an increase of 59 percent. Twenty-two of these activities had more than 1,250 days of float time. Table 1 summarizes the results of our March and May 2010 schedule assessments, and appendix II discusses the best practices and our assessments in detail. DOE is exploring ways of mitigating the effects of SWPF delays. For example, although it was originally planned to operate only until 2011, DOE plans to extend the operations of ARP/MCU—one of the interim processes DOE is using to treat salt waste prior to SWPF operation. 
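As referenced above, the sketch below illustrates how total float is computed under the critical path method and why an activity with a missing successor link (incomplete logic) shows inflated float. The activities, durations, and links are invented for illustration; they are not taken from the SWPF schedule.

```python
# Minimal critical-path-method sketch (illustrative only; the activities,
# durations, and links are invented, not drawn from the SWPF schedule).
# Total float = latest allowable start - earliest start. An activity with
# no successor link (incomplete logic) accumulates float equal to all the
# time remaining to the project end.
activities = {  # name: (duration in days, successors)
    "design": (100, ["fabricate_piping", "pour_concrete"]),
    "fabricate_piping": (60, []),           # missing successor link!
    "pour_concrete": (200, ["install_equipment"]),
    "install_equipment": (150, []),
}

early = {}  # earliest start of each activity

def early_finish(name):
    """Forward pass: earliest finish = earliest start + duration."""
    duration, _ = activities[name]
    predecessors = [p for p, (_, succs) in activities.items() if name in succs]
    earliest_start = max((early_finish(p) for p in predecessors), default=0)
    early[name] = earliest_start
    return earliest_start + duration

project_end = max(early_finish(name) for name in activities)

def late_start(name):
    """Backward pass: latest start that does not delay the project end."""
    duration, successors = activities[name]
    latest_finish = min((late_start(s) for s in successors), default=project_end)
    return latest_finish - duration

for name in activities:
    print(f"{name}: total float = {late_start(name) - early[name]} days")
# 'fabricate_piping' shows 290 days of float only because it has no
# successor link -- the schedule cannot tell what its slip would delay.
```

In this toy network, the unlinked piping activity shows 290 days of float not because it can truly slip that far without consequence, but because the schedule has no information about what its slip would delay; that is the sense in which excess float signals missing logic.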
In addition, DOE is in the early stages of developing new technologies that will allow it to treat additional quantities of salt waste beyond what is treated by ARP/MCU and SWPF; however, department officials told us that these initiatives will likely not be ready for deployment until 2013. Specifically, DOE is conducting research and development on two technologies—called rotary microfiltration and small-column ion exchange—that will treat salt waste directly in the tank, rather than pumping the waste to a separate facility like ARP/MCU or SWPF. DOE estimates that developing and deploying these two technologies will cost $130 million. DOE officials are hopeful that successful deployment of these new technologies will allow SRR to close all 22 tanks in the December 2008 contract by 2017, as agreed. Moreover, a DOE official said that these technologies could allow closure of the remaining 27 tanks that have secondary containment by 2024—4 years earlier than the 2028 goal committed to by DOE. However, DOE faces hurdles to accomplishing these goals using the new technologies. For example, DOE officials told us that, although these technologies are proven, they have never been used to treat liquid radioactive waste at the Savannah River Site and that additional research and development were necessary. DOE officials with whom we spoke identified three primary challenges to closing the liquid radioactive waste tanks at the Savannah River Site—on-time construction and successful operation of SWPF, increasing the amount and speed at which high-level radioactive waste is vitrified at DWPF, and successfully implementing an enhanced chemical cleaning process for the underground tanks. Although these officials also identified steps the department is taking to ensure these challenges are met, several factors raise concerns about whether DOE will be able to resolve them. Moreover, although most experts we spoke with were generally confident of DOE's ability to successfully overcome these challenges, some of them identified additional concerns about DOE's ability to successfully close the underground tanks. According to DOE officials, there are three primary challenges to successfully closing the liquid radioactive waste tanks at the Savannah River Site: (1) on-time construction and successful operation of SWPF; (2) increasing the amount and speed at which high-level radioactive waste is vitrified at DWPF; and (3) successfully implementing an enhanced chemical cleaning process for the underground tanks. On-time construction and successful operation of SWPF. As discussed previously, successful construction of SWPF is vital to DOE's efforts to empty, clean, and permanently close the site's underground tanks because salt waste makes up more than 90 percent of the waste in the tanks. However, in addition to the construction delays that have already occurred and the potential for additional delays in the future, which was discussed earlier, concerns have been raised about SWPF's ability, once constructed, to process waste at a high-enough rate to meet tank closure goals. Specifically, a review by DOE's Office of Cost Analysis that was conducted between September and November 2008 found that ARP/MCU—which is a small-scale version of SWPF and uses essentially the same technology—had only achieved 50 percent of its designed processing rate after about 5 months of operation. 
The review raised concerns that officials responsible for SWPF may not have planned to fully utilize lessons learned from ARP/MCU operations in the design for SWPF. Because of this, the review found that DOE may be missing opportunities to mitigate SWPF operational risks. However, DOE officials told us that it was not unusual that ARP/MCU had only achieved 50 percent of its processing rate after only 5 months of operation. These officials said that, because the technology represented a first-of-a-kind nuclear operation, they operated ARP/MCU at a deliberately slow pace during startup. Operating ARP/MCU more slowly also allowed them to collect additional information to inform future operations. According to these officials, ARP/MCU may achieve its optimal processing capacity of 2 million gallons of salt waste per year in fiscal year 2011—3 years after beginning operations—and they said that lessons learned from the ARP/MCU project are being used in SWPF design. DOE officials said that they hope the more than 2 years of contingency time in the SWPF construction schedule will give them time to ensure the facility will operate as planned. In addition, DOE officials told us that they are working to ensure SWPF is properly integrated with other Savannah River Site radioactive waste storage and treatment facilities to reduce the time needed to ramp up to full operational levels once construction is completed. Increasing DWPF throughput. As discussed earlier, DWPF produces large metal canisters filled with vitrified high-radioactivity sludge waste that are currently stored at the Savannah River Site pending the availability of a geologic repository for permanent disposal. To meet the December 2008 contract's accelerated schedule for emptying, cleaning, and closing the 22 underground tanks by 2017, SRR must increase production at DWPF from approximately 215 canisters annually to about 400 canisters annually—roughly doubling historical production. In addition, SRR plans to increase the concentration of radioactive waste in each canister. To achieve these improvements, SRR plans to install additional equipment and improve the performance of the melter technology that vitrifies the waste. Although DOE and SRR officials told us that they have confidence in each of the individual improvements planned for DWPF, they are less certain whether the improvements as a group will increase overall DWPF performance. In addition, DWPF has never achieved the levels of efficiency and production that DOE and SRR officials have said will be necessary to achieve tank closure goals. For example, although parts of the system were designed to produce more than 400 canisters per year, DWPF has only achieved an average of about 215 canisters per year over its operating history. Enhanced chemical cleaning. SRR is relying on a new chemical cleaning process to accelerate tank cleaning with minimal creation of additional waste that must be treated. The current chemical cleaning process to remove residual waste adds oxalic acid with large volumes of water to the tanks. The tank contents are then agitated by mixers that cause the oxalic acid to bind with the waste, and the mixture is then pumped from the tanks to be prepared for vitrification at DWPF. However, the existing process produces large amounts of radioactive water as a byproduct that must be stored in the Savannah River Site tanks and, eventually, treated. In addition, the oxalate in the acid can negatively affect the vitrification process at DWPF. 
According to SRR, enhanced chemical cleaning is an improvement on the existing process to remove residual waste because the cleaning solution is recirculated and does not increase the volume of waste in the tanks. In addition, enhanced chemical cleaning eliminates oxalates, reducing the impact on DWPF. DOE and SRR officials told us that enhanced chemical cleaning is the cornerstone of their ability to close tanks on schedule and that there will be cascading negative effects on the entire liquid waste system and the rate at which tanks can be closed if the process does not work as planned. To address the challenge of successfully implementing the enhanced chemical cleaning process, DOE and SRR officials told us that the process will be phased into operation. However, this new process is, to date, unproven for use in liquid radioactive waste tanks. In addition, notwithstanding the importance of enhanced chemical cleaning to successful tank closure, DOE did not provide sufficient funding to continue research and development on the process until December 2009, and SRR officials told us that research efforts have been limited due to this lack of funding. As a result, deployment of enhanced chemical cleaning has been delayed from its originally planned date of January 2011 until sometime in 2013. Nearly all of the experts with whom we spoke agreed that the three challenges DOE officials identified to closing the underground tanks at the Savannah River Site—on-time construction and successful operation of SWPF, increasing the amount and speed at which high-level radioactive waste is vitrified at DWPF, and successfully implementing an enhanced chemical cleaning process—are, in fact, the primary challenges the department faces. Many of these experts stated that they were generally confident of DOE's ability to overcome these challenges. Those who did not express such confidence told us that, in their view, they lacked sufficient knowledge of the specific conditions DOE faces at the Savannah River Site to assess whether DOE was capable of overcoming these challenges. More than half of the experts we spoke with expressed additional concerns. For example, some of the experts we interviewed told us that they believed that DOE may not be sufficiently considering alternative tank cleaning or waste processing technologies. Three experts expressed concern that DOE was disproportionately relying on enhanced chemical cleaning technologies when, in their view, additional mechanical cleaning technologies may be necessary as well. In addition, one expert recalled a previous situation in which DOE relied too heavily on one waste processing technology—called In-Tank Precipitation—to treat waste in the underground tanks. As we reported in 1999, after nearly a decade of delays and nearly $500 million in spending, DOE determined the technology would not work as planned. Other experts expressed concern that DOE does not have adequate knowledge of the specific characteristics and chemistry of the waste in the tanks. According to these experts, having complete knowledge of the exact characteristics of the waste is important to successfully processing it. In response, DOE officials told us they believe the department is sufficiently considering alternative waste processing technologies, as evidenced by their continued research and development on the rotary microfiltration and small-column ion exchange technologies discussed earlier. 
In addition, regarding their knowledge of the characteristics of the waste in the tanks, DOE officials told us that it is always possible that there are unknown chemicals in the tanks. However, because of the extensive historic and current sampling of the waste in the tanks, DOE officials expressed confidence in their knowledge of the characteristics and chemistry of the tank waste. The potential for significant cost increases and the possibility of accomplishing one-third fewer tank closures by 2017 than agreed to under the December 2008 contract between DOE and SRR raise concerns about the department's ability to successfully close the 22 underground liquid radioactive waste tanks that lack secondary containment at the Savannah River Site within DOE's cost and schedule goals. This concern is based in part on DOE's inability to produce high-quality cost estimates, an issue we have addressed since 2007 in several reports that contained numerous recommendations. In addition, DOE's difficulties planning for and mitigating risks in the Savannah River Site's tank closure project appear to be a continuation of the department's history of difficulties in contract and project management, as well as the findings of its own root cause analysis of this issue. DOE took nearly 6 months to respond to SRR's initial report of a $1.4 billion cost increase in June 2009, and SRR had been operating at the Savannah River Site for more than a year by the time the cost estimates were finalized and a contract performance baseline was approved in August 2010. We recognize that much of the potential cost increase is the result of pension plan losses due to economic conditions beyond DOE's control, and that the department is obligated to pay those benefits under the terms of its contracts. However, DOE lacked adequate guidance to ensure that the contract signed in December 2008—nearly a year and a half after the onset of the economic conditions that led to those losses—accurately reflected increased pension funding requirements. The department also lacked adequate guidance to ensure that the contract included known costs such as labor, fringe benefits, retiree health care, and essential site services costs incurred under the tank closure contract, rather than under other contracts DOE manages at the Savannah River Site. In addition, DOE failed to inform SRR about existing guidance regarding how a contractor can request contract cost increases. Moreover, the exemption of the tank closure project from the requirements of DOE Order 413.3A means that the specific policies and procedures DOE will use to oversee the implementation of the tank closure contract and other Office of Environmental Management operations activities are also uncertain. Without certainty as to the policies and procedures that apply, there is no clear approach for management oversight of tank closure at the Savannah River Site, as well as of other DOE operations activities. The challenges DOE faces to successfully remove highly radioactive liquid waste from the Savannah River Site's underground tanks and to then treat the waste and permanently close those tanks are daunting, but experts we spoke with generally agreed that DOE is potentially up to the challenge. However, we share the experts' concerns that DOE has not engaged in sufficient planning in the event that the department's chosen waste removal, treatment, and tank closure strategies are unsuccessful. 
For example, on-time completion of SWPF and its successful operation are vital to DOE's tank closure plans. Although DOE has made some improvements to the SWPF construction schedule, several shortcomings remain that need to be corrected for the schedule to comply with GAO-identified best practices and DOE's schedule development guidance. Furthermore, construction delays have already occurred, and SRR estimates that between 2 and 7 fewer tanks than originally planned will be closed by 2017. DOE is in the early stages of planning technologies to mitigate these delays, but it will be several years before these technologies are ready. As a result, we are uncertain whether DOE and SRR will be able to overcome SWPF construction delays soon enough to achieve the contract tank closure goals. In light of continuing uncertainty about the costs and schedule to close underground tanks at the Savannah River Site, we recommend that the Secretary of Energy take the following five actions:

- Revise department contract management guidance to ensure it includes provisions that detail how contract cost increases should be requested by a contractor and the specific process DOE should undertake to review and approve the increases, along with a timetable for such a review to take place.
- Revise department contract management guidance to ensure it includes a detailed process by which contract performance baselines are to be reviewed and approved, including appropriate milestones to help ensure that review and approval occur in a timely manner.
- In the absence of the requirements of Order 413.3A, specify policies and procedures that DOE will use to oversee Office of Environmental Management activities that have been exempted from Order 413.3A, including Savannah River Site tank closure activities.
- Develop guidance for DOE contracting officers to ensure that known costs incurred by contractors, such as retiree health care and essential site services, are assigned to the proper contract for sites whose operations are divided into multiple contracts.
- Direct the contractor for the construction of SWPF to revise its construction schedule to ensure conformance with DOE's schedule development guidance and the scheduling best practices found in GAO's Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs.

We provided a draft of this report to DOE for its review and comment. In its written comments, DOE agreed with our recommendation that specific policies and procedures are needed to oversee Office of Environmental Management activities that have been exempted from Order 413.3A. DOE partially agreed with our recommendation that department guidance should be revised to ensure it includes a detailed process by which contract performance baselines are to be reviewed and approved, including milestones to help ensure that review and approval occur in a timely manner. However, DOE disagreed with our recommendation to revise DOE contract management guidance to ensure it includes provisions that detail how contract cost increases should be requested by a contractor. In addition, the department disagreed with our recommendation to develop guidance for contracting officers to ensure that known costs incurred by contractors are assigned to the proper contract. Finally, DOE disagreed with our recommendation that the department direct the contractor for the construction of SWPF to revise its construction schedule to ensure conformance with scheduling best practices. 
Overall, DOE commented that it had significant concerns with the manner in which we framed our discussion and presented our findings. Specifically, DOE stated that our focus on cost and schedule increases associated with the tank closure contract did not take into consideration the cost and schedule improvements the contract represents over the department's prior tank closure strategy. We disagree. Our draft report noted that the December 2008 contract represented an accelerated schedule, with the 22 tanks without secondary containment to be emptied, cleaned, and closed 5 years sooner than the date in the agreement between DOE, EPA, and South Carolina. Nevertheless, the contract performance baseline that was approved in August 2010 contained costs 44 percent greater than those in the contract. In addition, as our draft report noted, between 2 and 7 fewer tanks may be closed by 2017 than originally called for in the contract. In our view, it is not unreasonable to expect contracts entered into by DOE, or indeed any federal agency, to accurately reflect the costs and schedule to accomplish the goals outlined in the contract. In this case, however, DOE's contractor identified a $1.4 billion cost increase before performing any work under the contract, DOE ultimately approved a contract performance baseline that contained a $1.4 billion cost increase, and the department took more than a year to approve the baseline once the contractor began work. Even though more than $600 million of this cost increase is due to pension cost increases caused by economic conditions outside of DOE's control, we believe DOE's failure to ensure the December 2008 contract accurately reflected increased costs and DOE's delays in approving a contract performance baseline are examples of continued contract mismanagement by the department. DOE agreed with our recommendation that specific policies and procedures are needed for operating activities, including Savannah River Site tank closure activities. DOE commented that a framework for managing and reporting progress for operating activities has been established and that DOE's sites have been directed that the project management principles contained in DOE Order 413.3A will still apply in a tailored manner. In addition, DOE partially agreed with our recommendation that department guidance should be revised to ensure it includes a detailed process by which contract performance baselines are to be reviewed and approved and includes milestones to help ensure that review and approval occur in a timely manner. Specifically, DOE stated that while the department agrees that a timeline is needed to add discipline and rigor to the process for review and approval of contract performance baselines, it already has a rigorous and detailed process established under DOE Order 413.3A. However, as our draft report noted, DOE has exempted many tank closure activities at the Savannah River Site from the full requirements of DOE Order 413.3A. DOE stated that it will expedite the issuance of guidance for contract performance baseline review for operating activities exempted from DOE Order 413.3A. 
With regard to our recommendation that DOE revise its contract management guidance to ensure it includes provisions that detail how contract cost increases should be requested by contractors and the specific process DOE should undertake to review and approve the increases, DOE commented that such guidance would be inappropriate to include in departmental policy and redundant with contract clauses required by the Federal Acquisition Regulation. While we acknowledge that the contract incorporates certain contract clauses on this subject mandated by the Federal Acquisition Regulation, we continue to believe departmental contract management guidance should be revised to ensure it includes provisions that detail how contract cost increases should be requested by a contractor. For example, the contract clause titled "Notification of Changes," which was incorporated by reference into the contract, says that changes must be requested in writing, but does not specify procedures for submitting the written request. As we noted in the report, the department already has guidance that states that a contractor should not be allowed to change estimated contract costs by simply including a higher cost in the contract performance baseline, but the guidance was not followed by either SRR or DOE. In addition, the guidance DOE cited in its comments, which established a 180-day contract administrative lead time requirement for resolving contract change requests, does not explain how contractors are to submit changes in order to trigger the 180-day review period. Moreover, DOE noted that SRR required additional contract clarification guidance to comply with the contract provisions at issue. Furthermore, SRR officials told us that there was a miscommunication between DOE and SRR regarding the process to request a contract cost increase. As a result, we continue to believe more clarity in DOE guidance is necessary. DOE did not agree with our recommendation to develop guidance for contracting officers to ensure that known costs incurred by contractors are assigned to the proper contract. The department noted that the majority of the cost increases we identified in the report are associated with fluctuating indirect costs mainly due to economic conditions beyond either the department's or the contractor's control, and that these cost fluctuations are not related to project performance. As our draft report noted, we agree that a significant amount of the cost increase—more than $600 million in pension costs—is due to economic conditions beyond DOE's control. Nevertheless, DOE had nearly a year and a half after the onset of the economic conditions to ensure that the contract accurately reflected increased pension costs. We have also modified our draft report to acknowledge that the $270 million contract cost increase associated with retiree health care and essential site services does not represent an increased cost to the taxpayer because these costs would be eliminated from the management and operations contract at the Savannah River Site. However, this $270 million still represents an unplanned increase in the costs associated with the tank closure contract. As discussed previously, we believe contracts entered into by DOE should accurately reflect the costs to accomplish the goals outlined in the contract. Therefore, we maintain that guidance to ensure appropriate allocation of costs between contracts at sites whose operations are divided into multiple contracts is necessary. 
Regarding our recommendation that DOE should direct the contractor for the construction of SWPF to revise its construction schedule to ensure conformance with scheduling best practices, the department commented that the contractor has developed and maintains a schedule that exhibits best practices included in industry standards such as GAO's Cost Estimating and Assessment Guide. We disagree. As we noted in our draft report, based upon our analysis of the SWPF construction schedule in both March and May 2010, DOE has made some improvements to the SWPF schedule, but shortcomings remain. In particular, both of our assessments found that the schedule had problems with excess float time, which indicates that the schedule's activities are not sequenced logically. DOE believes that having long float times is appropriate for the schedule's current level of maturity. We disagree. In our view, a schedule that includes activities that could slip by up to 3 years without affecting the project's overall end date is not realistic and does not meet scheduling best practices. However, we are encouraged that DOE and the contractor will continuously assess the schedule against best practices to ensure that float time is appropriately managed. DOE also provided technical comments that we incorporated in the report as appropriate. DOE's written comments are presented in appendix III. We are sending copies of this report to the appropriate congressional committees; the Secretary of Energy; the Director, Office of Management and Budget; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the current costs and schedule for closing the tanks at the Savannah River Site and the extent to which the Department of Energy (DOE) established them using best practices, we analyzed cost and schedule documents such as DOE's June 2007 tank closure cost estimate, the December 2008 tank closure contract between DOE and Savannah River Remediation, LLC (SRR), SRR's contract performance baseline, and tank closure cost increase proposals. To determine the extent to which DOE established tank closure costs using best practices, we compared both the process by which DOE developed these documents and evidence collected through interviews with DOE officials (at the Savannah River Site and in Washington, D.C.) and SRR officials to GAO-identified best practices for cost estimating. We also analyzed DOE's contract management plan and interviewed DOE officials responsible for administering the tank closure contract to determine the process the department is employing to review and approve SRR's proposed cost increases, and we compared this process to the steps contained in DOE guidance on how proposed contract cost increases should be prepared, reviewed, and approved. To determine the extent to which DOE established the tank closure schedule using best practices, we reviewed the construction schedule for the Salt Waste Processing Facility (SWPF), a facility that will be used to treat a majority of the tank waste. 
Specifically, with the assistance of scheduling experts, we evaluated the reliability of the SWPF construction schedule to determine the extent to which it captures key activities, is correctly sequenced, establishes the duration of key activities, is integrated, and has a reliably established critical path, among other things. We conducted an initial assessment in March 2010 and shared the results of this assessment with DOE and contractor officials. We based our assessment on GAO-identified best practices associated with effective schedule estimating, many of which are also identified by DOE in its guidance on establishing performance baselines. We then interviewed DOE and contractor officials to obtain information on how the SWPF construction schedule is developed and maintained. We then conducted a second assessment in May 2010 to evaluate the extent to which the schedule had improved in its adherence to GAO-identified best scheduling practices. To determine the primary challenges DOE faces to close the Savannah River Site's liquid radioactive waste tanks and the steps the department has taken to address them, we interviewed DOE officials and asked them to identify the primary challenges the department faces and the steps DOE has taken to address them. We also reviewed past and current tank closure plans and risk management documents, toured Savannah River Site facilities relevant to tank closure, and attended DOE and SRR briefings on components of the Savannah River Site's liquid radioactive waste system and the proposed modifications to the system to gain an understanding of these challenges and how DOE planned to address them. To corroborate that DOE had identified the primary challenges to tank closure, we interviewed 11 experts—all of whom have extensive knowledge of tank closure-related activities—and solicited their views on the primary tank closure challenges. We identified these experts in consultation with various sources, including the National Academy of Sciences and the South Carolina Governor's Nuclear Advisory Council, and using GAO's prior work on tank closure activities at DOE's Hanford Site in Washington State. We then contacted these individuals and asked for additional referrals. We continued this iterative process until additional interviews did not lead us to any new names or we determined that the pool of qualified experts in this field had been exhausted. We then asked these individuals questions to determine the nature and extent of their expertise and to ensure that they were not currently or recently employed by DOE or SRR. The final list of experts included primarily university professors and consultants. We developed a semistructured interview guide, containing both closed- and open-ended questions, to solicit responses about the primary challenges DOE identified to close the Savannah River Site's tanks and the steps DOE proposed to address the identified challenges. Using the guide, we interviewed each expert by telephone. Because some of the questions were open-ended, and experts were knowledgeable about varied—but not all—aspects of the issues covered, we did not attempt to quantify their responses to these questions for reporting purposes. In addition, we interviewed multiple entities that are stakeholders in the tank closure process, including the U.S. 
Nuclear Regulatory Commission, the Environmental Protection Agency, the Defense Nuclear Facilities Safety Board, and the South Carolina Department of Health and Environmental Control, to obtain their views on the challenges DOE faces to close the Savannah River Site's tanks and the steps the department is taking to address these challenges. We conducted this performance audit from June 2009 to September 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Our assessments of the SWPF construction schedule against the nine best scheduling practices are summarized below. For each practice, we describe the practice itself, then our March 2010 assessment, then our May 2010 assessment.

Capturing all activities. The schedule should reflect all activities as defined in the program's work breakdown structure, including activities to be performed by both the government and its contractors. March 2010 assessment: The schedule lists activities in a careful and complete manner using consistent language. Also, the schedule contains a wide scope of activities, such as activities related to design, procurement, fabrication, and installation. The schedule comprises 9,177 activities. May 2010 assessment: No change from the previous assessment, except that the schedule now has 11,291 activities, an increase of 23 percent.

Sequencing all activities. The schedule should be planned so that it can meet critical program dates. To meet this objective, activities need to be logically sequenced in the order in which they are to be carried out. In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities), as well as activities that cannot begin until other activities are completed (i.e., successor activities), should be identified. By doing so, interdependencies among activities that collectively lead to the accomplishment of events or milestones can be established and used as a basis for guiding work and measuring progress. The schedule should avoid logic overrides and artificial constraint dates that are chosen to create a certain result. March 2010 assessment: The dates of milestones and activities are mostly determined by the durations and predecessor-successor logic. However, we identified multiple problems with the schedule's logic, use of constraints, and use of lags that keep the schedule from meeting this best practice. The schedule contains multiple instances of incomplete logic, also called open ends, in which predecessor and successor activities are not properly linked. For example, we found 450 instances of incomplete logic, of which 409 were instances where activities were not linked to predecessor activities; this reduces confidence in the schedule's ability to meet its completion date.
In addition, the schedule makes excessive use of constraints, which are used instead of logically linked predecessor activities to start activities. We identified 831 activities that are constrained to start as late as possible—meaning that even if the activity's duration takes one day longer than estimated, its successor activity will be delayed. According to best scheduling practices, the schedule should use logic and durations to reflect realistic start and completion dates for project activities. The schedule also makes extensive use of lags, which are durations between activities that delay successor activities. Lags should be used to represent fixed, physical gaps between activities, such as the time needed for concrete to cure. The lags used in the schedule are both too many in number and too long in duration to represent such physical gaps; specifically, we found 60 instances where the lag was more than 100 days in duration. May 2010 assessment: The schedule still has a number of instances of incomplete logic and constraints. The schedule now contains more instances of incomplete logic than the one we assessed in March: 539 instances, an increase of almost 20 percent. The schedule's use of constraints has been reduced, but not eliminated; the number of constrained tasks decreased from more than 800 to 158. The schedule now contains 101 activities with lags of more than 100 days, an increase of 68 percent.

Assigning resources to key activities. The schedule should reflect what resources (e.g., labor, material, and overhead) are needed to do the work, whether all required resources will be available when needed, and whether any funding or time constraints exist. March 2010 assessment: The schedule contains multiple resources, and their application to various activities was carefully done. The schedule contains the project's total cost. May 2010 assessment: No change from the previous assessment.

Establishing the duration of key activities. The schedule should realistically reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, historical data, and assumptions used for cost estimating should be used. Durations should be as short as possible and have specific start and end dates; in particular, durations of longer than 200 days should be minimized. March 2010 assessment: The schedule contains a significant number of activities with long durations, especially when the durations are compared to the remaining duration of the entire project. We found 627 activities whose duration is greater than 5 percent of the schedule's total remaining duration. Any activity whose duration is greater than 5 percent of the schedule's total remaining duration should be examined closely to see if it is possible to schedule the activity in smaller increments to improve its management. May 2010 assessment: The schedule continues to contain many items with long durations that appear as if they could be shortened. We identified 827 activities whose duration is greater than 5 percent of the schedule's total remaining duration, an increase of 32 percent, which outpaced the growth in the overall number of activities.

Integrating key activities horizontally and vertically. The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with other sequenced activities. These links are commonly referred to as handoffs and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels enables different groups to work to the same master schedule. March 2010 assessment: The schedule is partially horizontally integrated. This is the result of the problems identified in the "sequencing all activities" practice related to incomplete logic, as well as the use of constraints and lags. The schedule is vertically integrated, as it includes filters that allow summary or milestone schedules to be developed from the master schedule. May 2010 assessment: The schedule is partially horizontally integrated due to the continued instances of incomplete logic and the use of constraints; it continues to be vertically integrated.
Establishing the critical path for key activities. Using scheduling software, the critical path—the longest duration path through the sequenced list of key activities—should be identified. The establishment of a program's critical path is necessary for examining the effects of any activity slipping along this path. Potential problems that might occur along or near the critical path should also be identified and reflected in the scheduling of the time for high-risk activities. March 2010 assessment: The schedule contains a distinct critical path, but we identified problems with it. Specifically, the critical path's initial activity has an excessively long duration, which makes it difficult to accurately measure the progress being made to complete the activity. Further, there is one instance of incomplete logic on the critical path. May 2010 assessment: The schedule contains a distinct critical path, and it is different from the one presented in March 2010. We identified no problems with the revised critical path.

Identifying float time. The schedule should identify float time—the time that a predecessor activity can slip before the delay affects successor activities—so that schedule flexibility can be determined. As a general rule, activities along the critical path typically have the least amount of float time. Total float time is the amount of time flexibility an activity has that will not delay the project's completion (if everything else goes according to plan). Total float that exceeds a year is unrealistic and should be minimized. March 2010 assessment: The schedule contains an excessive number of activities with too much float, which indicates that activities are not linked using logic. Specifically, the schedule contains 272 activities with more than 500 days of float, as well as two construction activities involving fabrication of piping—usually critical in construction projects—that had more than 1,000 days of float time; neither of these activities was linked to a successor activity. May 2010 assessment: The schedule continues to have several activities with excessive amounts of float. We identified 433 activities with more than 500 days of float, an increase of 59 percent, greater than can be explained by the overall increase in activities. There are 22 activities with more than 1,250 days of float. As a result, the department is unable to realistically determine how much an activity can slip before it impacts the end date; in this case, those activities could slip by up to 3 years and not impact the overall end date.

Performing a risk analysis of the schedule. A schedule risk analysis should be performed using statistical techniques to predict the level of confidence in meeting a program's completion date. This analysis focuses not only on critical path activities but also on activities near the critical path, since they can potentially affect program status. March 2010 assessment: The schedule contains reserve time—a buffer for the schedule baseline—but there is no evidence that this reserve time was based on a risk analysis using data about project schedule risk or statistical techniques, as required by best practices. Reserve time was established largely using empirical methods, but those methods lacked the rigorous statistical technique required by best practices. May 2010 assessment: DOE and contractor officials presented evidence that they conducted a schedule risk analysis based on data about the project schedule risk, specifically a risk management plan. However, DOE used a less rigorous statistical technique than the one specified in best practices. Specifically, DOE's statistical technique does not fully account for potential changes to the order in which activities are sequenced. Given that our May 2010 assessment found continued problems with how activities are sequenced—the schedule remained only partially compliant with that best practice—changes to the order in which activities occur remain possible, and it is unclear whether DOE's current schedule risk analysis would remain valid should such changes materialize.
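For context, the statistical technique this practice calls for is usually a Monte Carlo simulation over uncertain activity durations. The sketch below is ours, with invented durations and an invented target date; it illustrates the general method and is not a representation of DOE's analysis.

```python
# Illustrative Monte Carlo schedule risk analysis (our sketch; durations
# and the target date are invented). Each activity's duration is sampled
# from a triangular distribution (optimistic, most likely, pessimistic);
# the fraction of trials finishing by the target approximates the
# confidence level of meeting the completion date.
import random

random.seed(1)
activities = [  # (optimistic, most likely, pessimistic) durations in days
    (90, 120, 200),   # e.g., fabricate and install piping
    (60, 80, 150),    # e.g., system testing
]
target = 230  # hypothetical target completion, in days

trials = 100_000
hits = sum(
    sum(random.triangular(lo, hi, mode) for lo, mode, hi in activities) <= target
    for _ in range(trials)
)
print(f"Estimated confidence of finishing within {target} days: {hits / trials:.0%}")
```

Note that even this toy version holds the activity sequence fixed; a more rigorous analysis would also account for possible changes in sequencing, which is the limitation described above in DOE's technique.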
However, DOE used a less rigorous statistical technique than the one specified in best practices. Specifically, DOE's technique does not fully account for potential changes to the order in which activities are sequenced. Given that our May 2010 assessment found continued problems with how activities are sequenced—the schedule remained only partially compliant with this best practice—changes to the order in which activities occur remain possible, and it is unclear whether DOE's current schedule risk analysis would remain valid should such changes materialize.

The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates, which can be used to determine whether schedule variances will affect downstream work. Individuals trained in critical path method scheduling should be responsible for ensuring that the schedule is properly updated. Maintaining the integrity of the schedule logic is not only necessary to reflect true status but is also required before conducting a schedule risk analysis. The techniques used to measure progress on the schedule, such as incorporating current budget information, are consistent with standard scheduling best practices, but the multiple instances of incomplete logic mean the schedule only partially meets best practices; a schedule must have all of its activities logically sequenced in the order in which they are to be carried out to provide reasonable and accurate forecasts. Discussions with DOE's scheduler indicated that the schedule is updated monthly, in accordance with best practices, but continued instances of incomplete logic call into question the schedule's overall accuracy.

During our initial assessment of the SWPF construction schedule in March 2010, we inadvertently incorporated activities unrelated to SWPF into the assessment, which produced inaccurate statistics. To correct this, we obtained a new copy of the SWPF schedule, and DOE officials agreed that the general conclusions of our March 2010 assessment remain valid.

In addition to the individual named above, Ryan T. Coles, Assistant Director; Patrick Bernard; Robert Campbell; Antoinette Capaccio; Kathryn Edelman; Jennifer Echard; Tim Persons; John Smale Jr.; and Michelle K. Treistman made key contributions to this report.
Decades of nuclear materials production at the Department of Energy's (DOE) Savannah River Site in South Carolina have left 37 million gallons of radioactive liquid waste in 49 underground storage tanks. In December 2008, DOE entered into a contract with Savannah River Remediation, LLC (SRR) to close, by 2017, 22 of the highest-risk tanks at a cost of $3.2 billion. GAO was asked to assess (1) DOE's cost estimates and schedule for closing the tanks at the Savannah River Site, and (2) the primary challenges, if any, to closing the tanks and the steps DOE has taken to address them. GAO visited the Savannah River Site, reviewed tank closure documents, and conducted an analysis of the construction schedule of the Salt Waste Processing Facility (SWPF), a facility vital to successful tank closure because it will treat a large portion of the waste removed from the tanks.

Emptying, cleaning, and permanently closing the 22 underground liquid radioactive waste tanks at the Savannah River Site is likely to cost significantly more and take longer than estimated in the December 2008 contract between DOE and SRR. SRR notified DOE in June 2010 that the total cost to close the 22 tanks—originally estimated at $3.2 billion—had increased by more than $1.4 billion, or 44 percent. Much of this increase is because DOE's cost estimate in the September 2007 request for proposals that formed the basis of the December 2008 contract between DOE and SRR was not accurate or comprehensive. For example, DOE underestimated the costs of labor and fringe benefits. DOE also omitted certain other costs related to equipment and services needed to support tank closure activities. Moreover, more than $600 million of this increase is due to increased funding needed to make up for significant losses suffered by Savannah River Site workers' pension plans as a result of the recent economic crisis. Closing the tanks may also take longer than originally estimated because of persistent delays and shortcomings in the construction schedule for SWPF. According to SRR, construction delays that have already occurred will result in between 2 and 7 fewer tanks being closed by 2017 than agreed to in the contract. Furthermore, the SWPF construction schedule does not fully meet GAO-identified best scheduling practices. For example, the schedule had problems with excess float time between activities, indicating that the schedule's activities may not be sequenced logically. DOE is exploring ways to mitigate the effects of construction delays by deploying new technologies to treat radioactive waste. However, additional research and development on these new technologies is still required and, therefore, it will be several years before they are deployed.

DOE officials identified three primary challenges to closing the liquid radioactive waste tanks at the Savannah River Site: (1) on-time construction and successful operation of SWPF; (2) increasing the amount and speed at which radioactive waste is processed at the Savannah River Site's Defense Waste Processing Facility, which prepares the waste for permanent disposal by mixing it with molten glass and then pouring it into large metal canisters where it hardens; and (3) successful implementation of an enhanced chemical cleaning process that will remove residual waste from the tanks with minimal creation of additional waste that must be treated. DOE officials identified steps the department is taking to ensure these challenges are met.
However, several factors raise concerns about whether DOE will be able to resolve them. For example, the enhanced chemical cleaning process that is a cornerstone of SRR's ability to close tanks on time has never been used in liquid radioactive waste tanks and, according to SRR officials, DOE has not consistently funded additional research and development on the technology. Most experts GAO spoke with were generally confident in DOE's ability to successfully overcome these challenges, although some of them identified additional concerns. For example, some experts suggested that DOE has not engaged in sufficient contingency planning in the event that the department's chosen waste removal, treatment, and tank closure strategies are unsuccessful.

GAO is making five recommendations to DOE to, among other things, clarify how cost increases should be requested by a contractor and reviewed and approved by DOE, and to ensure that the SWPF construction schedule conforms to best practices. Although DOE generally agreed with two of our recommendations, it disagreed about the need for additional clarity on how cost increases should be requested by a contractor and disputed that the SWPF construction schedule did not conform to best practices. We continue to believe our recommendations are valid.
Distribution of materiel, such as supplies and equipment, into and around Afghanistan is a complex process involving many DOD organizations and utilizing both surface and air modes of transportation over various routes. DOD's ability to provide timely logistics support to units deploying to Afghanistan or already in theater depends on its ability to synchronize these activities into one seamless process. According to joint doctrine, distribution is the operational process of synchronizing all elements of the logistic system to deliver the "right things" to the "right place" at the "right time" to support the joint force. As the list below indicates, numerous organizations play an integral role in ensuring the delivery of materiel to support operations in Afghanistan:

- U.S. Transportation Command is designated as the distribution process owner for DOD. As such, it coordinates transportation programs for all organizations involved in moving supplies and equipment into Afghanistan for DOD. It relies on its military service components—Air Mobility Command (Air Force), Military Sealift Command (Navy), and Surface Deployment and Distribution Command (Army)—to provide mobility assets, such as aircraft, ships, and trucks, and to execute the movement of materiel. In addition, U.S. Transportation Command collaborates with the combatant commanders, military services, defense agencies, Office of the Secretary of Defense, and Joint Staff to develop and implement distribution process improvements.
- U.S. Forces-Afghanistan establishes priorities for movement of materiel for the Afghanistan theater.
- Joint Sustainment Command-Afghanistan provides command and control of logistics efforts within Afghanistan to execute U.S. Forces-Afghanistan priorities, including assisting with materiel reception and movement and with asset visibility.
- Army Central Command's 1st Theater Sustainment Command provides command and control of logistics efforts within the U.S. Central Command area of operations by monitoring strategic movements of materiel and directly influencing movements into theater.
- Air Force Central Command's Air Mobility Division plans, coordinates, tasks, and executes the movement of materiel using air assets within theater.
- The Central Command Deployment and Distribution Operations Center bridges the gap between strategic and theater distribution by validating and directing air movements and monitoring and directing surface movements within theater.

A combination of surface and air transportation modes is used to move supplies and equipment into and around Afghanistan. According to U.S. Transportation Command officials, most supplies and equipment bound for Afghanistan are transported along surface modes, with the remaining supplies and equipment transported using airlift. The main surface route uses commercial ships to transport cargo to the seaport of Karachi, Pakistan, from which it is trucked by contractors into Afghanistan. Typically, materiel that crosses the northern border at Torkham is destined for the logistics hub at Bagram, while materiel that crosses the southern border at Chaman is destined for the Kandahar logistics hub. The distances from the port of Karachi to Bagram and Kandahar are approximately 1,210 miles and 690 miles, respectively.
Unit equipment—such as specific vehicles and materiel owned by the unit and brought from home stations—and sustainment materiel—such as food, water, construction materials, parts, and fuel that are requisitioned by units already deployed—are transported through Pakistan. In May 2009, DOD began using an alternative surface route, known as the Northern Distribution Network, which relies on contracted ships, railways, and trucks to transport nonlethal sustainment items like construction materiel through western European and central Asian countries into Afghanistan. The cargo, originating in the United States and northern Europe, falls in with the normal flow of commerce that travels along several routes within the Northern Distribution Network. There are two main routes within this network: one starts at the Latvian port of Riga or the Estonian port of Tallinn and connects with Afghanistan via Russia, Kazakhstan, and Uzbekistan; the second starts at the Georgian port of Poti, bypasses Russia, and reaches Afghanistan through Azerbaijan, Kazakhstan, and Uzbekistan. U.S. Transportation Command is currently considering the development of additional Northern Distribution Network routes to transport materiel into Afghanistan.

Currently, the surface routes through Pakistan are used to a greater extent than those of the Northern Distribution Network because the latter is a less mature surface route and the Pakistani ground routes entail fewer limitations on the types of cargo that can be transported. For example, U.S. Transportation Command reported that from May through November 2009, more than 4,700 20-foot equivalent units were transported into Afghanistan by way of the Northern Distribution Network, but more than 21,500 20-foot equivalent units were transported using the Pakistani surface routes. The Northern Distribution Network could, however, support the movement of significantly more cargo, with a maximum capacity estimated at around 4,000 20-foot equivalent units per month.

Military and commercial airlift are used to transport high-priority supplies and equipment, as well as sensitive items, such as weapon systems and ammunition, into and around Afghanistan. According to U.S. Forces-Afghanistan, as of December 2009, there were 24 airfields in Afghanistan, 4 of which could support C-5 aircraft and 6 of which could support C-17 aircraft. These aircraft are used to move large quantities of supplies and equipment. Cargo flown into Afghanistan is typically flown to a logistics hub, such as Bagram or Kandahar, that is capable of supporting most types of aircraft. According to Air Mobility Command data, during fiscal years 2008 and 2009, approximately 81,600 and 170,000 short tons of cargo, respectively, were flown into Afghanistan. Supplies and equipment shipped to the logistics hubs may subsequently be transported to units operating at other forward operating bases or combat outposts using a combination of surface and air transportation modes. Within Afghanistan, cargo is moved to forward operating bases primarily by means of contractor-operated trucks, though military trucking assets are used in some instances. High-priority and sensitive materiel, such as ammunition, that needs to be transported by air is loaded onto smaller aircraft and flown to a forward operating base or air-dropped to units throughout the country.
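A quick calculation makes the unused Northern Distribution Network capacity concrete. The sketch below, in Python, averages the container counts reported above over the May through November 2009 period (treated here as roughly seven months, an assumption made only for averaging) and compares the result with the estimated 4,000-unit monthly capacity.

# Rough utilization arithmetic from the figures reported above; the
# seven-month averaging window is an assumption.
ndn_teus, pak_teus, months = 4_700, 21_500, 7
ndn_capacity_per_month = 4_000  # estimated maximum monthly NDN capacity

ndn_monthly = ndn_teus / months  # ~670 20-foot equivalent units per month
pak_monthly = pak_teus / months  # ~3,070 units per month via Pakistan
print(f"NDN: {ndn_monthly:,.0f}/month "
      f"({ndn_monthly / ndn_capacity_per_month:.0%} of estimated capacity); "
      f"Pakistani routes: {pak_monthly:,.0f}/month")

By this rough measure, the Northern Distribution Network was running at well under a fifth of its estimated monthly capacity during the period.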
DOD has taken some steps to improve its processes for distributing materiel to deployed forces based on lessons learned from prior operations, such as Operation Iraqi Freedom. We reported in August 2005 that two DOD initiatives for improving supply distribution operations—the establishment of the Central Command Deployment and Distribution Operations Center and the use of pure packing (that is, consolidation of cargo for shipment to a single user) for air shipments—were successful enough to warrant application to future operations. In conducting our ongoing work reviewing DOD’s logistics efforts supporting operations in Afghanistan, we found that these initiatives continue to benefit supply distribution efforts in support of operations in Afghanistan. According to officials, both these initiatives have helped improve the flow of supplies into and around the Afghanistan theater of operations. During Operation Iraqi Freedom, senior commanders were unable to prioritize their needs and make decisions in the early stages of the distribution process because they did not know what materiel was being shipped to them, resulting in an overburdened transportation and distribution system. To address these issues, in January 2004, U.S. Transportation Command established the Central Command Deployment and Distribution Operations Center, in part to help coordinate the movement of materiel and forces into the theater of operations, including both Iraq and Afghanistan, by confirming the combatant commander’s deployment and distribution priorities and by synchronizing the forces, equipment, and supplies arriving in theater with critical theater lift and theater infrastructure limitations. Based on the success of the Central Command Deployment and Distribution Operations Center, DOD created similar deployment and distribution operations centers within each of the geographic combatant commands. Pure packing has similarly improved DOD’s efficiency. During the early stages of Operation Iraqi Freedom, the use of mixed pallets of cargo created inefficiencies because they had to be unpacked, sorted, and repacked in the theater of operations before they were shipped forward, thus lengthening the time it took to deliver supplies to troops. To avoid these extra processes, in January 2004, U.S. Central Command requested that all air shipments entering its area of responsibility be pure packed, meaning that all cargo in a pallet is addressed to the same customer location. To maximize pallet and aircraft utilization, cargo awaiting shipment can be held for up to 5 days for the Army and up to 3 days for the Marine Corps. Cargo is palletized either when it reaches 120 hours of port hold time or when enough cargo is available to fill a pallet, based on size or weight limits. As we reported in April 2005, the use of pure packing potentially leads to longer processing times at the originating aerial ports, but it reduces customer wait time in theater, thus providing a significant advantage. DOD has also established policies and procedures to increase the use of RFID tags to improve in-transit visibility over cargo. In December 2003, we reported that DOD did not have adequate visibility over all supplies and equipment transported to, within, and from the theater of operations for Operation Iraqi Freedom, in part because RFID tags were not being used in a uniform and consistent manner. 
In July 2004, DOD issued policy directing all DOD components to use RFID tags on all cargo shipments moving to, from, or between overseas locations. Additionally, U.S. Central Command policy states that RFID tags must be attached to all unit and sustainment cargo transported to, within, and from U.S. Central Command's theater of operations. U.S. Central Command issued further guidance requiring RFID tags with intrusion-detection capabilities to be affixed to containers carrying unit equipment along the Pakistani ground routes. Some interrogators have been installed within Pakistan to obtain electronic information from RFID tags as privately contracted trucks transporting DOD cargo pass by. Officials told us that as a result of these policies and procedures, the use of RFID tags and DOD's visibility over cargo have increased significantly since early operations began in Iraq. However, we have found that DOD's visibility over surface movements of supplies and equipment into and around Afghanistan remains limited, as is discussed below.

Based on our preliminary observations, we note several challenges that hinder DOD's ability to distribute needed supplies and equipment to U.S. forces operating in Afghanistan. These challenges include difficulties with transporting cargo through neighboring countries and around Afghanistan; limited airfield infrastructure within Afghanistan; lack of full visibility over supply and equipment movements into and around Afghanistan; limited storage capacity at logistics hubs in Afghanistan; difficulties in synchronizing the arrival of units and equipment in Afghanistan; lack of coordination, as well as competing logistics priorities, in a coalition environment; and uncertain requirements and low transportation priority for contractors. DOD has ongoing or planned efforts to help mitigate some of these challenges. In addition, DOD is working to address these challenges through planning conferences to synchronize the flow of forces into Afghanistan. While some of DOD's efforts will soon improve its ability to efficiently distribute supplies and equipment to U.S. forces in Afghanistan, other efforts involve long-term plans that will not be completed in time to support the ongoing troop increase that is scheduled to occur by August 2010.

The supply routes through Pakistan, along the Northern Distribution Network, and around Afghanistan each present unique difficulties in transporting supplies and equipment. DOD's ability to support both current operations and the ongoing troop increase in Afghanistan is challenged by restrictions on the number of trucks allowed to cross into Afghanistan daily. Because no U.S. military transportation units operate in Pakistan, DOD must rely solely on private contractors to transport supplies and equipment along ground routes through the country and to provide security for the cargo while in transit. Privately contracted trucks can transport cargo through Pakistan via two routes: the northern, which crosses into Afghanistan at the border town of Torkham, and the southern, which crosses at the border town of Chaman. While Pakistan does not limit the number of trucks that cross the border at Torkham, it does limit the number allowed to cross at Chaman to 100 total per day. U.S. Forces-Afghanistan and Surface Deployment and Distribution Command officials told us that they requested greater security at the Chaman border crossing after insurgent attacks occurred near the border crossing in 2009.
In response, restrictions were placed on the number of trucks allowed to cross per day at Chaman, including trucks transporting cargo in support of U.S. forces operating in Afghanistan. Officials added that there is often a backlog of trucks waiting to cross at the Chaman border because of the restrictions. As a result, these backlogged trucks may sometimes be unable to deliver their cargo and subsequently return to the port of Karachi to pick up additional supplies and equipment in a timely manner. The U.S. government is currently negotiating with the Pakistani government to increase the flow of trucks through the Chaman border crossing.

The restrictions at the Chaman border crossing and the resulting impact on the number of available trucks in Pakistan contribute to a regular backlog of cargo at the port of Karachi. According to Army Central Command, nearly half of the cargo waiting to be picked up at Karachi resides there for several weeks. Officials stated that unit equipment arriving at Karachi often receives the highest transportation priority. While unit equipment is essential for U.S. forces to conduct their mission, sustainment items are also necessary, as they enable forces to maintain and prolong their operations. If sustainment and other types of cargo become backlogged at Karachi, U.S. forces may not receive the supplies and equipment they need in a timely manner to complete or sustain their mission. According to U.S. Transportation Command, two methods for mitigating the effects of backlogs at the port of Karachi are prioritizing cargo flow and increasing the amount of supplies kept on hand in Afghanistan.

Limitations on what items can be transported through Pakistan and the amount of damage sustained by cargo transiting through Pakistan also can delay the delivery of necessary supplies and equipment to U.S. forces in Afghanistan. Private trucking contractors do not transport sensitive equipment on the Pakistani ground routes. Instead, such equipment must be flown into Afghanistan and then be installed onto the vehicles in Regional Command–East. Additionally, according to Army Central Command, approximately 80 percent of cargo transiting through Pakistan arrives in Afghanistan with some level of damage, which, officials noted, can occur because of a number of factors, including poor roads, rough terrain, extreme weather, or insurgent and other individual attacks. For example, U.S. military vehicles may arrive with missing or damaged engines, slashed fuel lines and empty fuel tanks, broken mirrors or windows, and deflated tires, according to Army officials. The additional time needed to repair equipment arriving in Afghanistan further delays delivery to U.S. forces.

A small percentage of cargo transported along the Pakistani ground routes is pilfered by insurgents and other individuals, but the exact amount of pilferage is difficult to determine because of limitations in the way it is reported. According to DOD officials, approximately 1 percent of cargo transported on the Pakistani ground routes is pilfered. While the percentage may be relatively small, officials stated that it represents a significant loss of money to DOD and a potential risk to the warfighter until replacements for the pilfered items can be requisitioned and delivered. Because of the lack of U.S. military transportation units operating in Pakistan, DOD cannot immediately address pilferage when and where it occurs in Pakistan.
In cases where active RFID tags are damaged or removed when the cargo is pilfered, officials stated that DOD can attempt to determine the approximate area where the pilferage took place based on the last RFID tag signal obtained by an interrogator inside Pakistan. Additionally, some RFID tags have intrusion-detection capabilities that provide information on when and where the cargo has been broken into. With this information, DOD can negotiate with the private trucking contractors to avoid transporting cargo through locations inside Pakistan where equipment may be more susceptible to pilfering.

The Northern Distribution Network is an important alternative to the surface routes through Pakistan, but several logistical and cargo clearance challenges exist that could limit the amount of cargo transported on its routes. For example, Northern Distribution Network route transit times, on average, exceed the Pakistani surface route transit times. Cargo transiting along the northern route takes approximately 86 days to travel from the source of supply in the United States or northern Europe to its destination in Afghanistan, and the southern route takes approximately 92 days. Comparatively, it takes only about 72 days to transport cargo along the Pakistani surface routes. Additionally, DOD and its contractors must request and obtain clearance before cargo can transit through Uzbekistan, a process that takes 20 days to complete, shortened from the original 30 days; according to U.S. Transportation Command officials, they are working to shorten this delay further. Given the long lead times to deliver cargo and the 20-day notice needed to ship cargo through Uzbekistan, DOD must plan well in advance to ensure that the necessary supplies and equipment arrive in Afghanistan when they are needed to support the warfighter. Furthermore, there are restrictions on the types of cargo that can be transported through the countries along the Northern Distribution Network. Specifically, only nonlethal supplies and equipment can be shipped on the Northern Distribution Network, and DOD primarily transports nonlethal sustainment supplies on the route. These restrictions constrain DOD's ability to transport certain classes of supply or types of equipment on the Northern Distribution Network as an alternative to the more expensive airlift or the limited capacity of the Pakistani surface routes.

Private trucking contractors operating under the Afghan Host Nation Trucking Contract carry the majority of U.S. supplies and equipment within Afghanistan, but officials told us that limitations on the available number of contractors and reliable trucks may impede DOD's ability to support the ongoing troop increase. Officials stated that approximately 90 percent of cargo is transported within Afghanistan by private contractors, and the remaining 10 percent by U.S. military trucks. In addition to affecting the time it takes to transport cargo to the warfighter, officials believe that limited contractor availability affects the quality of service. Contractors in Afghanistan may have little incentive to offer superior performance when they can expect to continue receiving contracts because of the high demand and limited supply of host nation trucking contractors. Additionally, officials told us that some privately contracted trucks may be unable to safely transport cargo because they are either in too poor a condition to operate or do not have the capability to transport the type or size of cargo.
In cases where the contracted trucks are unable to provide adequate transportation, DOD must find an alternative method to deliver the cargo to its destination—for example, by using a different private contractor or by transporting the cargo on a U.S. military truck. Identifying an alternate mode of transportation could delay the delivery of needed supplies and equipment to U.S. forces. According to Army logistics officials in Afghanistan, DOD is in the process of increasing the number of contractors performing under the Afghan Host Nation Trucking Contract operating in southern and western Afghanistan.

Attacks on cargo being transported through Pakistan and Afghanistan can also hinder DOD's ability to provide supplies and equipment to U.S. forces in Afghanistan. As noted above, DOD relies on private contractors to transport all cargo through Pakistan and most of the cargo transported through Afghanistan. There is no U.S. military-provided security for the transport of the cargo; shipping contractors provide their own security. Trucks moving along the ground routes through Pakistan and Afghanistan, as well as those stopped at terminals and border crossings, can be targets for attack. For example, for 2 consecutive days in March 2009, militants attacked two truck terminals in Peshawar, Pakistan, damaging or destroying 31 vehicles and trailers. Our previous work found that DOD reported that in June 2008 alone, 44 trucks and 220,000 gallons of fuel were lost because of attacks or other events.

Limited airfield infrastructure and capability within Afghanistan constitute one of the most difficult challenges DOD faces as it deploys and sustains the increasing number of U.S. forces in the country, according to numerous DOD officials we interviewed. DOD airlifts into Afghanistan a significant amount of cargo, including high-priority items as well as sensitive equipment that cannot be transported through Pakistan or on the Northern Distribution Network. However, the small number of airfields in Afghanistan and the limited types of aircraft that can land at these airfields may constrain DOD's ability to deliver certain supplies and equipment within expected time frames. Bagram Airfield, Kandahar Airfield, and Bastion Airfield are the three primary airfield hubs in Afghanistan capable of handling large volumes of cargo and a variety of different types of aircraft. Bagram and Kandahar have the capability to land large C-5 and C-17 aircraft as well as the smaller C-130 aircraft, while Bastion can land C-17s and C-130s. DOD often relies on large aircraft, such as the C-17, to fly supplies and equipment directly from the United States, Kuwait, Qatar, and other major distribution points into Afghanistan, but it is limited to the small number of airfields where these aircraft can land. Instead of flying directly to a smaller airfield, a large aircraft must first land at an airfield hub, where its cargo is unloaded, reloaded onto a smaller aircraft, such as the C-130, and then flown to the smaller airfield. This process takes considerably more time than flying directly to the final destination and, as a result, may delay the delivery of supplies and equipment to the warfighter. Officials stated that the situation will likely grow more challenging as the demand for cargo increases along with the additional U.S. forces arriving in Afghanistan. According to U.S. Transportation Command, there are projects under way or completed to expand airfield capacity in Afghanistan.
For example, officials at Kandahar Airfield are planning to build ramp space that can park an additional two C-5 and eight C-130 aircraft. However, other planned or ongoing projects to expand airfield capacity will not be completed in time to support the ongoing troop increase, according to Air Force officials. Airfields also have only limited space available for aircraft to park after landing, and sometimes reach capacity. For example, Bagram has the capacity to park up to one C-5 equivalent and four C-17 equivalents at the same time. Additionally, officials stated that the current number of aerial port workers and quantity of materiel-handling equipment at the airfields in Afghanistan may be insufficient to keep pace with the increased amounts of cargo being flown into the country to support the ongoing troop increase. The number of aerial port workers and quantity of materiel-handling equipment at the airfield determine how quickly parked aircraft can be unloaded, have their cargo processed, and be serviced and refueled in order to depart the airfield and allow additional incoming aircraft to land. Ideally, airfields would have the capability to unload, process, and service and refuel all of the aircraft parked at the airfield at the same time, but this is not always the case. For example, Bagram has the capability to work on up to one C-5 equivalent and three C-17 equivalents at a time, even though it has capacity to park one additional C-17. Consequently, aircraft that land and park at an airfield with limited aerial port worker and materiel-handling equipment availability may not have their cargo unloaded immediately upon arrival, resulting in delayed delivery of the airlifted supplies and equipment. Furthermore, aircraft waiting to be unloaded are unable to depart the airfield and pick up cargo elsewhere, thus potentially delaying the delivery of that cargo as well. According to DOD, it has sent additional aerial port workers and materiel-handling equipment to Bastion and Mazar-e-Sharif, and additional port workers have been requested for Bagram, Farah, Shindand, and Kabul. However, we have not been able to evaluate the impact on cargo processing and aircraft servicing times at these locations.

Restrictions at airfields outside Afghanistan and competing demands for available landing times in Afghanistan may also affect the delivery of supplies and equipment to U.S. forces. Because of their limited capability to park and unload aircraft, airfields must closely manage the number of aircraft that land each day in order to avoid exceeding capacity on the ground, and aircraft bound for Afghanistan must ensure that they have available time and space to land at the airfield prior to departing from their originating locations. In some cases, aircraft may not be able to land in Afghanistan during an available time because they are restricted from departing their original locations. For example, officials stated that aircraft departing from Ramstein Air Base in Germany cannot fly during certain hours of the day because of host nation policy—even though, in order to arrive at Bagram during certain available landing-time windows, it would be necessary for aircraft to depart Ramstein during prohibited flying hours. As a result, aircraft must postpone their departure from Ramstein and coordinate another available landing time at Bagram that can be reached by departing Ramstein during normal flying hours.
Consequently, delivery of an aircraft’s cargo to the warfighter may be delayed, and the aircraft is not being fully utilized while it forfeits an available landing window and waits on the ground for a new departure time. An additional difficulty is the competition for available landing times in Afghanistan among U.S. and coalition airlift, passenger and cargo airlift, and inter- and intra-theater airlift. These numerous competing priorities cannot all be met simultaneously, which may result in delaying the delivery of U.S. or coalition cargo or personnel to Afghanistan. According to U.S. Central Command, to mitigate the effects of competing priorities, DOD is coordinating with coalition forces to establish a regional airspace control management organization that will manage landing slot times at airfields in Afghanistan. DOD’s visibility over surface movements of supplies and equipment into and around Afghanistan is limited, and this limitation may hinder its ability to effectively manage the flow of supplies and equipment into the logistics hubs and forward operating bases. Although requirements are in place and methods are being used to maintain some visibility over the contractors and shipments while in transit, DOD lacks full visibility over surface movements of cargo because of a lack of timely and accurate information on the location and status of materiel and transportation assets in transit. According to DOD policies, components must ensure that all shipments moving to, from, or between overseas locations, which would include shipping transit points and theater, are tagged to provide global in-transit visibility. In-transit visibility is provided using various methods, including active RFID tags attached to cargo containers or pallets, satellite tracking devices on trucks, and contractor reports. While visibility has been more consistently maintained on cargo transported via airlift, challenges remain with meeting requirements for visibility of surface-moved cargo. Because there are no U.S. military transportation units operating in the countries along the surface routes to Afghanistan, DOD must rely solely on in-transit visibility tools like RFID tags. However, these tools are not always effective in providing adequate visibility. For example, visibility over cargo being transported to Afghanistan along the Northern Distribution Network is limited because agreements with some countries, such as Russia and Uzbekistan, prevent the use of in-transit visibility systems like RFID technology along the routes, according to officials. Therefore, DOD must rely on reports provided by the contracted carriers to track and obtain information about cargo location. According to Central Command Deployment and Distribution Operations Center officials, there are challenges with getting carriers to submit accurate shipment reports in a timely manner. If carriers do not submit their shipment data to DOD, or if there is a delay in report receipt, DOD’s visibility of cargo as it moves along the Northern Distribution Network may be limited. With regard to cargo transported through Pakistan, visibility exists at the seaport of Karachi, where cargo is unloaded from ships and loaded onto contractors’ trucks for surface movement through Pakistan and into Afghanistan. While satellite technology is used to track unit equipment, RFID technology is used to maintain visibility over both unit and sustainment cargo. 
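In practice, this kind of RFID-based visibility amounts to inferring a shipment's position from the most recent interrogator that read its tag. The sketch below, in Python, shows that inference for a single tag; the interrogator sites, their ordering along the route, and the sample reads are hypothetical.

from datetime import datetime

# Fixed interrogator sites in route order from the port to the border.
ROUTE = ["Karachi port", "Checkpoint A", "Checkpoint B", "Torkham crossing"]

def last_known(reads):
    """Given (site, timestamp) reads for one RFID tag, return the latest read
    and the route segment where the cargo must now be (or where it was lost)."""
    if not reads:
        return None, "no reads: tag never interrogated"
    site, when = max(reads, key=lambda r: r[1])  # most recent read wins
    i = ROUTE.index(site)
    segment = (f"between {site} and {ROUTE[i + 1]}"
               if i + 1 < len(ROUTE) else "arrived at the border")
    return (site, when), segment

reads = [("Karachi port", datetime(2009, 11, 1, 8, 0)),
         ("Checkpoint A", datetime(2009, 11, 3, 14, 0))]
print(last_known(reads))
# -> last read at Checkpoint A; the cargo is, or was pilfered, somewhere
#    between Checkpoint A and Checkpoint B.

Because only a few interrogators sit along the routes, the inferred segment can span a long stretch of road, which is the sporadic visibility discussed next.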
However, visibility provided by RFID tags becomes more sporadic once cargo moves out of the port and along the ground routes. RFID interrogators throughout Pakistan can provide DOD with the cargo's RFID data and location if a truck passes within range of the interrogator. However, only a small number of these interrogators are along the ground routes between the port of Karachi and the borders with Afghanistan. Furthermore, since no requirements exist regarding the routes that drivers must take to the border crossings, a truck's route may not fall within range of an RFID interrogator until it arrives at one of the border crossings into Afghanistan. In addition, occasional errors in data downloaded onto the tags may cause erroneous information about the cargo to be reported to DOD. For example, data on a pallet's interim transit location may be incorrectly recorded as its final destination on the RFID tag. To mitigate these issues with electronic data tracking, DOD uses contract personnel to provide reports about shipments in transit through Pakistan. Contractors stationed at various points on the Pakistani routes provide real-time location information on trucks transporting U.S. cargo that pass them. Officials reported that this has helped DOD collect more accurate information about asset locations and incidents along the routes. However, depending on the route taken, drivers may not always pass contractors' stations, and information about a truck and its cargo may not be available until the truck arrives at the Afghan border crossing.

Visibility over shipments of supplies and equipment is also limited within Afghanistan. Although policies are in place to maintain visibility of materiel being transported, they have not been fully implemented. DOD's ability to track cargo locations using RFID technology is limited in Afghanistan because of a limited number of interrogators. Officials stated that to increase visibility over cargo transported within Afghanistan, all trucks that provide services under the Afghan Host Nation Trucking Contract are required to use satellite-based, location-tracking technology to track their movements over ground routes. However, officials told us that most host nation truck drivers in Afghanistan are deterred from using the required tracking system by concerns that insurgents may be able to track their locations and target their trucks. As a result, they disable the technology while transporting cargo. Officials noted that the percentage of truck drivers who comply with the requirement to use the tracking technology has increased over time, and they expect it will continue to rise as the drivers become more educated about the contract requirement and the system's benefits.

The lack of visibility over supplies and equipment transiting into and around Afghanistan causes inefficient management of the flow of incoming trucks to logistics hubs and forward operating bases. This may result in backlogs of trucks trying to access the bases and delays in customer receipt of cargo. Without adequate visibility, the arrival of trucks delivering cargo to bases cannot be effectively metered by DOD or contractors, resulting in long wait times at base entry control points. Because of space constraints, only a certain number of trucks can be allowed on a base at a time. If the available space is filled with incoming trucks, trucks awaiting entry onto the base must wait outside the base until space is available for them to enter.
Officials stated that backlogs at Kandahar have resulted in drivers waiting up to 20 days to access the base. Even when a truck accesses the base, the lack of visibility over materiel being transported may continue to cause delays in the delivery of supplies and equipment. Because of minimal visibility over cargo location, customers awaiting delivery of a shipment may not be aware that their cargo has arrived at a base, which may cause delays in pickup of the cargo. At the logistics hub in Kandahar, if the customer is unable to retrieve the cargo in a timely manner—usually within hours—the driver must exit the base and repeat the entry process until there is room to unload cargo and the customer is available to receive it. Storage capacity at the primary logistics hubs is limited, and at times it is insufficient to manage the volume of inbound and outbound supplies and equipment moving into and around Afghanistan. While some mitigation plans are being implemented or are already in place to alleviate challenges with storage capacity and improve the flow of cargo, officials anticipate that there may be an ongoing lack of storage capacity as the number of troops deployed to Afghanistan and operations tempo continue to increase. For example, the confined operating space within the storage area at Bagram Airfield slows down the speed at which cargo can be processed. According to officials, outbound cargo storage yards at the base were temporarily shut down approximately 20 times for about 24 hours each time during periods of high operations tempo in the past year, because they could not receive outbound cargo until existing cargo was shipped out. Additionally, officials noted that cargo storage space at the Bagram logistics hub has decreased because of competing needs of expanding operations—for example, there is a need for more mail storage, and more airlift operations have required additional parking for aircraft. The limited storage space must further be shared among multiple coalition forces at some logistics hubs, creating competition for storage capacity and materiel-handling equipment. For example, at Kandahar, officials estimated that multiple coalition nations, such as the United States, Germany, and Great Britain, are sharing approximately 2 acres of storage space for cargo transitioning into and out of the base via air, causing some strain at times. Much of the unused surface area at Kandahar is uncleared terrain, making it unfeasible for storing cargo. Additionally, officials said that many units lack the appropriate materiel-handling equipment needed to move and store pallets and containers in and around the unfinished surfaces of Kandahar. These officials reported that as a result, they must share equipment, such as all-terrain forklifts, with other units and contractors, thereby further diminishing timely materiel-handling capability. Consequently, the limited availability of storage space, infrastructure, and materiel-handling equipment at the logistics hubs may hinder DOD’s ability to manage the flow of supplies and equipment associated with the ongoing troop increase. DOD is developing plans to expand storage capacity at logistics hubs in order to better manage the flow of incoming supplies and equipment and to efficiently distribute cargo to support the warfighter. However, these plans will not be completed in time to support the ongoing troop increase because of the logistical challenges of base expansion. 
Officials told us that there are many time-consuming steps in the expansion process: they must determine the owners of the land around the base, acquire the neighboring real estate, clear away mines in the surrounding areas, and obtain the supplies needed to complete the expansion. While DOD has begun to implement plans to mitigate challenges, officials stated that there are no "perfect solutions" to recurrent storage problems at the supply hubs. They anticipate that storage issues will continue, and significant improvement may not be realized as troops continue to deploy to Afghanistan and military operations continue to expand. For example, at Bagram, aerial port personnel have built structures that enable them to double-stack pallets of incoming cargo, and they have stored their flatbed trucks on the flight line in order to make more room for storing supplies and equipment in the cargo receiving and shipping yards. However, officials told us that storage capacity for both inbound and outbound cargo in Bagram's storage yards remains limited. At Kandahar, officials said there are plans to establish a logistics base adjacent to the main base. In the first phase of the base's two-phase development, U.S. forces will use interim storage yards for incoming cargo containers and vehicles, and a transshipment yard for U.S. cargo flowing through Kandahar on its way to another forward operating base. At the transshipment yard, truck drivers will unload cargo so it can be readied for movement to its final destination, thus eliminating the in-gating and customer pickup process at Kandahar, which can take many days. According to officials, phase one of the logistics base development is scheduled to be operational in April 2010, and the construction of the entire forward operating base is scheduled for completion in summer 2010. Officials stated that this expansion will help alleviate storage issues at Kandahar, allowing the United States to better prioritize cargo shipments and improve DOD's ability to quickly issue supplies and equipment to the warfighter. These officials noted, however, that the logistics base will not yet be fully operational during the height of the troop increase.

DOD experienced difficulties in synchronizing the arrival of units and their equipment in Afghanistan during the previous troop increase in 2009, and the synchronization of units and equipment will likely continue to be a challenge during the ongoing troop increase. Units arriving in Afghanistan typically receive the equipment they need to perform their mission from three primary sources: unit-owned equipment, such as individual weaponry that is either brought with them or shipped separately from their home stations; theater-provided equipment, such as retrograde equipment from Iraq; and new production equipment, such as the Mine Resistant Ambush Protected All-Terrain Vehicle. DOD's complex task is to synchronize the arrival of units with the availability of their equipment, regardless of the source, to enable them to perform their mission as quickly as possible. However, according to Joint Sustainment Command-Afghanistan, the 2009 troop increase resulted in significant backlogs of equipment transported on the Pakistani surface routes and by airlift, leaving some units in southern Afghanistan waiting for as long as several months to receive the theater-provided equipment necessary to conduct their mission.
As of December 2009, no unit deployed to southern Afghanistan during the troop increase in the spring and summer of 2009 had yet received all of the theater-provided equipment it was supposed to be issued. Additionally, officials stated that DOD underestimated the amount of time required to outfit vehicles with sensitive items and to ensure that the vehicles received necessary maintenance before being delivered to the warfighter. As a result, some U.S. forces arrived at their forward operating base or combat outpost without the vehicles necessary to perform their mission. Given the numerous challenges we have identified in delivering supplies and equipment to U.S. forces in Afghanistan, we believe that DOD will likely face the same difficulties in synchronizing the arrival of units and equipment during the ongoing troop increase. For example, one unit deployed in Afghanistan reported in a January 2010 readiness report that it did not receive all of its equipment from its home station and had to perform an upcoming mission despite not having all military equipment available. Another reported that it lacked mission-essential equipment, such as bomb-disabling robots that were vital to protect soldiers from improvised explosive devices they encountered while conducting their mission. Another unit reported that it had arrived in theater in December 2009 and was still awaiting provision of theater-provided equipment as of January 2010. While DOD has taken steps to improve the synchronization of units and their equipment during the ongoing troop increase, at the time of our review, these steps were just being implemented and we were therefore unable to evaluate their effectiveness.

At bases throughout Afghanistan, a lack of centralized coordination coupled with different and competing demands and priorities between U.S. and coalition forces may delay the delivery of supplies and equipment to U.S. forces. Additionally, limited processing and cargo-receiving capabilities may delay the delivery of supplies and equipment to U.S. forces. As aircraft carrying supplies and equipment land at coalition airfields, or host nation trucks arrive at entry control points with shipments for multiple coalition forces, logistics personnel at those locations have a limited ability to manage and prioritize the flow of all troops' cargo. Specifically, officials at Kandahar told us that they had waited for days to receive shipments of priority materiel that were waiting outside the base to be processed for entry onto the base, along with other coalition forces' cargo, because the coalition commander of Kandahar would not allow the U.S. forces' cargo to be prioritized to enter first at the control point. However, the officials noted that the planned construction of a U.S. logistics base adjacent to the existing coalition-run base will improve DOD's ability to manage and prioritize the flow of supplies and equipment and store cargo at Kandahar. In addition, coalition forces compete for limited amounts of materiel-handling equipment and storage facilities. Officials stated that when materiel-handling equipment, such as forklifts, is unavailable or unserviceable, coalition forces have to share what limited equipment is available to conduct supply operations. Because units sometimes have to wait to use the available materiel-handling equipment, supply delivery to U.S. troops may be delayed.
Officials did note that efforts to share space have improved over the past year, indicating that coalition forces are better coordinating their operations to fulfill the mission in Afghanistan. However, there is the potential for a future increase in the number of coalition forces in Afghanistan, which could exacerbate the challenges we have identified.

DOD's reliance on contractors to support its operations in Afghanistan creates additional challenges with regard to the distribution of supplies and equipment, as well as movement of contractor personnel. Contractors have become an indispensable part of the force, performing a variety of functions in Afghanistan, such as communication services, provision of interpreters who accompany military patrols, base operations support (e.g., food and housing), weapons systems maintenance, and intelligence analysis. DOD estimated that about 104,000 contractor personnel were supporting operations in Afghanistan as of September 2009. Further, DOD anticipates that this number will grow as it increases troop presence in Afghanistan. As we have previously reported, troop increases typically include increases in contractor personnel to provide support. These contractors in Afghanistan rely on the same distribution routes and methods as do the military forces to deliver the supplies and equipment they need to perform their mission and sustain their operations. However, DOD's ability to manage the flow of materiel for contractors and military personnel into logistics hubs and forward operating bases, and balance the use of limited transportation assets and storage capacity between contractors and military personnel, may be hampered by its lack of good information on the number of current contractors and lack of good planning for the coming increase in both contractors and their requirements. These requirements include contractor access to materiel-handling equipment and storage space for the supplies and equipment contractors need to perform their mission as well as for life support, such as housing and food. Since 2003, we have reported that DOD lacked reliable data on the number of contractor personnel providing services in environments such as Afghanistan, and our work has found that DOD's current system for collecting data on contractor personnel in Afghanistan does not provide accurate data. Further, during our December 2009 trip to Afghanistan, we found that there was only limited planning being done with regard to contracts or contractors. Specifically, we found that with the exception of planning for the increased use of the Logistics Civil Augmentation Program, U.S. Forces-Afghanistan had not begun to consider the full range of contractor services that might be needed to support the planned increase of U.S. forces. More importantly, the command appeared to be unaware of its responsibility to determine contracted support requirements or develop the contract management and support plans required by guidance. However, we did find some planning being done by U.S. military officials at Regional Command–East. According to planners from Regional Command–East, the command had identified the types of units that were deploying to its operational area in Afghanistan and was coordinating with similar units already in Afghanistan to determine what types of contract support the units relied on.
Nonetheless, without a complete picture of the number of contractors in Afghanistan and their materiel requirements, DOD may not be in a position to effectively manage the flow of military and contractor cargo to ensure that all materiel is delivered to the right locations at the right time to enable both military units and contractors to perform their missions.

Another challenge with regard to contractors is the timely movement of their people and supplies around Afghanistan. When traveling around Afghanistan, contractor personnel and their equipment are given a low priority for air transportation as compared with military personnel and materiel, and that prioritization can affect the contractors' ability to perform their contracts. Contractor personnel have difficulty obtaining military airlift within Afghanistan, and they spend lengthy amounts of time in passenger terminals hoping to catch the first available flight. For example, according to contractor personnel we spoke with, they fly military airlift at the lowest priority for seats on flights. A letter from a military commander is needed in order to fly with a higher priority, and obtaining one takes considerable time and effort. According to these contractor personnel, the time they spend waiting in passenger terminals can cost the U.S. government both in money paid and lost productivity. Officials from several contractors told us that they factor additional personnel into their workforce structures because of the difficulties in getting people to and from their work sites. The difficulty in moving contractor personnel and equipment may be compounded when the troop increase begins. While some efforts are under way to improve key infrastructure, such as passenger terminals, it may still take time to complete these projects. Currently, the passenger terminals in key airlift hubs such as Kandahar and Bagram are very small, and passengers may experience long wait times between their arrival in the terminal and boarding their flights. Without a rapid expansion of these facilities, it is likely that this overcrowding will be compounded by the troop increase. During our visit we spoke with multiple people, including military and contractor personnel, who had waited anywhere from a few days to a week to board a flight.

In addition to the efforts described above to mitigate each of the challenges we have identified, DOD is also working to address them through planning conferences intended to synchronize the flow of forces into Afghanistan. For example, in December 2009 and January 2010, U.S. Central Command sponsored two conferences to (1) identify unit equipment available to deploy in support of the troop increase; (2) assess ways in which distribution challenges could be overcome in order to deploy the troops and their required supplies and equipment by August 2010; and (3) plan for the simultaneous drawdown of forces and equipment from Iraq. Officials from key organizations across DOD, including U.S. Transportation Command, U.S. Forces-Afghanistan, U.S. Forces-Iraq, and Army Central Command, attended both conferences. Throughout both conferences, DOD officials stressed the need to balance and closely coordinate multiple requirements in order to sustain current operations in Afghanistan and Iraq, draw down forces and equipment from Iraq, and increase forces and equipment in Afghanistan.

Because of the unique challenges of Afghanistan, the movement of supplies and equipment in support of operations there is likely to be one of the most complex logistics operations the U.S.
military has undergone in recent history. The challenges are daunting, and the transportation system is heavily strained in maintaining current operations. Now, with the addition of 30,000 more U.S. troops on the horizon, coupled with an increase in contractors and a potential increase in coalition forces, these challenges will only be magnified, and a system that is struggling to keep pace with current operations could be further strained. It will, therefore, be critical for DOD to develop adequate contingency plans to mitigate the effects of these and other unforeseen challenges, and to react quickly to overcome significant problems as they occur. Failure to effectively manage the flow of materiel could delay combat units' receipt of the critical items they need to perform their mission, and costly backlogs of cargo could accumulate throughout the supply system, risking loss of accountability and control over billions of dollars in assets. We expect to report more fully on these and other issues at a later date. For further information about this statement, please contact William M. Solis at (202) 512-8365 or solisw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Cary Russell, Assistant Director; Vincent Balloon; John Bumgarner; Carole Coffey; Melissa Hermes; Lisa McMillen; Geoffrey Peck; Bethann Ritter; Michael Shaughnessy; Sarah Simon; Angela Watson; Cheryl Weissman; Stephen Woods; and Delia Zee. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In fiscal year 2009, the Department of Defense (DOD) reported that it spent $4 billion to move troops and materiel into Afghanistan, a mountainous, arid, land-locked country with few roads, no railway, and only four airports with paved runways over 3,000 meters. The terrain and weather in Afghanistan and surrounding countries pose further challenges to transporting supplies and equipment. In December 2009, the President announced that an additional 30,000 U.S. troops will be sent to Afghanistan by August 2010. Today's testimony discusses GAO's preliminary observations drawn from ongoing work reviewing DOD's logistics efforts supporting operations in Afghanistan, including (1) the organizations involved and routes and methods used to transport supplies and equipment into and around Afghanistan; (2) steps DOD has taken to improve its distribution process, based on lessons learned from prior operations; and (3) challenges affecting DOD's ability to distribute supplies and equipment within Afghanistan, and its efforts to mitigate them. In conducting its audit work, GAO examined DOD guidance and other documentation relating to the processes of transporting supplies and equipment to Afghanistan and met with various cognizant officials and commanders in the United States, Afghanistan, Kuwait, and Qatar. Movement of supplies and equipment into and around Afghanistan is a complex process involving many DOD organizations and using air, sea, and ground modes of transportation. DOD's ability to provide timely logistics support to units deploying to Afghanistan or already in theater depends on its ability to synchronize all of these activities into one seamless process. For example, U.S. Transportation Command manages air and surface transportation from the United States to and around the U.S. Central Command area of operations; U.S. Central Command's Deployment and Distribution Operations Center validates and directs air movements and monitors and directs surface movements within theater; the Air Force's Air Mobility Division assigns and directs aircraft to carry materiel within the theater; and the Army's 1st Theater Sustainment Command monitors strategic movements of materiel and directly influences movements into theater. Most cargo in theater is transported commercially by ship to Pakistan and then by contractor-operated trucks to Afghanistan, but high-priority and sensitive items are transported by U.S. military and commercial aircraft directly from the United States and other countries to logistics hubs in Afghanistan. DOD has taken some steps to improve its processes for distributing materiel to deployed forces based on lessons learned from prior operations. For example, in response to lessons learned from problems with keeping commanders informed about incoming materiel in Operation Iraqi Freedom, U.S. Transportation Command established the Central Command Deployment and Distribution Operations Center, which now helps coordinate the movement of materiel and forces into the theater of operations. Also, since GAO reported in 2003 that radio frequency identification tags were not being effectively used to track materiel in transit to, within, and from Iraq, DOD developed policies and procedures to increase tag use on cargo traveling through the U.S. Central Command theater of operations, including Afghanistan. Challenges hindering DOD's ability to distribute needed supplies and equipment to U.S. 
forces operating in Afghanistan include difficulties with transporting cargo through neighboring countries and around Afghanistan, limited airfield infrastructure, lack of full visibility over cargo movements, limited storage capacity at logistics hubs, difficulties in synchronizing the arrival of units and equipment, lack of coordination between U.S. and other coalition forces for delivery of supplies and equipment, and uncertain requirements and low transportation priority for contractors. DOD recognizes these challenges and has ongoing or planned efforts to mitigate some of them; however, some efforts involve long-term plans that will not be complete in time to support the ongoing troop increase. DOD is also working to address these challenges through planning conferences to synchronize the flow of forces into Afghanistan. At these conferences, DOD officials stressed the need to balance and coordinate multiple requirements in order to sustain current operations in Afghanistan and Iraq, draw down forces and equipment in Iraq, and increase forces and equipment in Afghanistan.
Antipsychotic drugs are classified into two sub-groups. The first group, or generation, of antipsychotic drugs—also known as “conventional” or “typical” antipsychotic drugs—was developed in the mid-1950s. Examples include haloperidol (Haldol®) and loxapine (Loxitane®). The second generation of antipsychotic drugs, known as “atypical” antipsychotics, was developed in the 1980s. Examples include aripiprazole (Abilify®) and risperidone (Risperdal®). Atypical antipsychotics became more popular upon their entry into the market due to the initial belief that these drugs caused fewer side effects than the conventional antipsychotics. Each antipsychotic drug has its own set of FDA-approved indications. The vast majority of antipsychotic drugs are FDA-approved for the treatment of schizophrenia, and most atypical antipsychotic drugs are FDA-approved for the treatment of bipolar disorder. In addition, some antipsychotics are FDA-approved for the treatment of Tourette syndrome. CMS guidance to state nursing home surveyors also recognizes antipsychotics as an acceptable treatment for conditions for which the drugs have not been FDA-approved, such as for the treatment of Huntington’s disease. In 2005, FDA recognized the risks associated with atypical antipsychotic drugs and required those drugs to have a boxed warning, citing a higher risk of death related to use among those with dementia. In 2008, FDA recognized similar risks for conventional antipsychotic drugs and required the same boxed warning. Besides the risks described in the boxed warning, use of antipsychotic drugs carries risks of other side effects, such as sedation, hypotension, movement disorders, and metabolic syndrome issues. Clinical guidelines consistently suggest the use of antipsychotic drugs for the treatment of the behavioral symptoms of dementia only when other, non-pharmacological attempts to ameliorate the behaviors have failed and the individuals pose a threat to themselves or to others. For example, AMDA–The Society for Post-Acute and Long-Term Care Medicine suggests first assessing the scope and severity of the behavior and identifying any environmental triggers for the behavior. A medical evaluation may determine whether the behavioral symptoms are associated with another medical condition, such as under-treated arthritis pain or constipation. In its clinical guideline, AMDA cited conflicting evidence surrounding the effectiveness of antipsychotic drugs in treating the behavioral symptoms of dementia. It noted one evidence review that found significant improvement in symptoms with the treatment of certain atypical antipsychotic drugs, but also noted that other reviews signaled there were no significant differences attributable to atypical antipsychotic drugs. Other non-pharmacological interventions that can be attempted prior to the use of antipsychotic drugs may focus on emotions, sensory stimulation, behavior management, or other psychosocial factors. An example of an emotion-oriented approach is Reminiscence Therapy, which involves the recollection of past experiences through old materials with the intention of enhancing group interaction and reducing depression. An example of a sensory stimulation approach is Snoezelen Therapy, which typically involves introducing the individual to a room full of objects designed to stimulate multiple senses, including sight, hearing, touch, taste, and smell. This intervention is based on the theory that behavioral symptoms may stem from sensory deprivation.
A 2012 white paper published by the Alliance for Aging Research and the Administration on Aging, a part of the ACL, noted that advancements have been made with regard to the evidence base supporting some non-pharmacological interventions, but that evidence-based interventions are not widely implemented. Experts referenced in the white paper identified the need for clearer information about the interventions, such as a system to classify what interventions exist and who might benefit from those interventions. Experts also noted that additional research is needed to develop effective interventions. Federal law requires nursing homes to meet federal quality and safety standards, set by CMS, to participate in the Medicare and Medicaid programs. CMS regulations require nursing homes to ensure that residents’ drug therapy regimens are free from unnecessary drugs, such as medications provided in excessive doses, for excessive durations, or without adequate indications for use. Nursing facility staff must assess each resident’s functional capacity upon admission to the facility and periodically thereafter, and provide each resident a written care plan. Based on these assessments, nursing homes must ensure that antipsychotics are prescribed only when necessary to treat a specific condition diagnosed and documented in the patient’s record, and that residents who use antipsychotic drugs receive gradual dose reductions and behavioral interventions, unless clinically contraindicated. Part of the nursing home survey process, otherwise known as nursing home inspections, involves audits of these care plans and assessments. About one-third of older adult Medicare Part D enrollees with dementia who spent over 100 days in a nursing home were prescribed an antipsychotic drug in 2012. Among those Medicare Part D enrollees with dementia who spent no time in a nursing home in 2012, we found that about 14 percent were prescribed an antipsychotic. In total, Medicare Part D plans paid roughly $363 million in 2012 for antipsychotic drugs prescribed for older adult Medicare Part D enrollees with dementia. We found that about 33 percent of Medicare beneficiaries with dementia who were enrolled in a Part D plan and had a long stay in a nursing home—defined as over 100 cumulative days—were prescribed an antipsychotic in 2012. (See table 1.) We also found that prescribing rates for Medicare Part D enrollees with dementia who were nursing home residents varied somewhat by resident characteristic: Male enrollees were slightly more likely to have been prescribed an antipsychotic drug than female enrollees—about 36 percent and 32 percent, respectively. The prescribing rate declined as Medicare Part D enrollee age increased. For example, about 41 percent of those Medicare Part D enrollees aged 66 to 74 received an antipsychotic prescription, compared to 29 percent of those enrollees aged 85 and older. The prescribing rate for antipsychotic drugs was highest for enrollees in the South, and lowest for enrollees in the West. We found slightly lower rates of antipsychotic drug prescribing when we restricted our analysis to those enrollees with three or more 30-day supply prescriptions during 2012. Specifically, about 28 percent of long-stay Medicare Part D enrollees with dementia were given three or more 30-day supply prescriptions for an antipsychotic drug over the course of 2012.
We also found that the majority of prescriptions given to those long-stay Medicare Part D enrollees with dementia—about 68 percent—were for seven or more 30-day supplies of the drug, while only 3 percent were for less than one 30-day supply. Consistent with the findings for Medicare Part D enrollees, our analysis of MDS data showed that approximately 30 percent of all older adult nursing home residents—regardless of enrollment in Medicare Part D—with a dementia diagnosis were prescribed an antipsychotic drug at some point during their 2012 nursing home stay. (See fig. 1.) Residents with dementia accounted for a significant proportion of all nursing home residents. In 2012, about 38 percent, or almost 1.1 million, of the 2.8 million nursing home residents that year were diagnosed with dementia. Examining this more comprehensive database of nursing home residents also allowed us to compare the antipsychotic drug prescribing rates of long-stay residents and short-stay residents—those residents who spent 100 days or less in the nursing home. The proportion of residents diagnosed with dementia who were prescribed an antipsychotic drug was greater for long-stay residents than for short-stay residents (about 33 percent versus 23 percent, respectively). (See table 2.) Variation in prescribing rates across resident characteristics was similar to the variation found in the Medicare Part D enrollee long-stay nursing home population. Of those Medicare Part D enrollees with dementia in settings outside of the nursing home, about one in seven (14 percent) were prescribed an antipsychotic. (See fig. 2.) Roughly 1.2 million of the 20.2 million older adult Medicare Part D enrollees living outside of a nursing home in 2012 had a diagnosis of dementia—just above 6 percent. The rate of antipsychotic drug prescribing among older adult Medicare Part D enrollees with dementia was lower for those living outside of nursing homes than for those living in nursing homes, consistent with the fact that nursing home residents are generally sicker than those living outside of nursing homes. We also found that the pattern of variation in antipsychotic drug prescribing for Medicare Part D enrollees outside of a nursing home for certain characteristics was different from the pattern of variation found in the nursing home population. The proportion of Medicare Part D enrollees outside of nursing homes diagnosed with dementia who were prescribed an antipsychotic drug was higher for older enrollees—the opposite of the pattern found in the nursing home setting. (See table 3.) The prescribing rate was also higher for female enrollees outside of the nursing home than for male enrollees, whereas the opposite was true in the nursing home setting. The prescribing rate for enrollees with dementia outside of the nursing home varied less by enrollee location than the rate for those in nursing homes. We found slightly lower rates of antipsychotic drug prescribing for Medicare Part D enrollees outside of the nursing home when we restricted our analysis to those enrollees with three or more 30-day supply prescriptions. Specifically, about 11 percent of enrollees outside of the nursing home received three or more prescriptions for antipsychotic drugs over the course of 2012. About 58 percent of antipsychotic prescriptions for Medicare Part D enrollees with dementia living outside of a nursing home were for seven or more 30-day supplies of the drug, while only 3 percent were for less than a 30-day supply.
Medicare Part D plans paid roughly $363 million in 2012 for antipsychotic drugs used by Medicare Part D enrollees with dementia aged 66 and older. (See table 4.) Medicare Part D spending on antipsychotic drugs for Medicare Part D enrollees with a dementia diagnosis living outside of a nursing home totaled almost $171 million in 2012, about the same as spending for long-stay nursing home enrollees with dementia. Payments for short-stay nursing home enrollees may be low because Medicare Part A often covers drugs administered during short, post-acute stays in nursing homes. Medicare Part D plans consistently spent more than twice as much on antipsychotic prescriptions for female enrollees as for male enrollees; as reported in table 1, the number of female Medicare Part D enrollees using antipsychotic drugs was also over two times that of males. Internal medicine, family medicine, and psychiatry or neurology physicians prescribed the greatest proportion of antipsychotic drug prescriptions for older adult Medicare Part D enrollees with dementia—about 82 percent in total. Antipsychotic drugs prescribed by these specialties also made up about 82 percent of the Medicare Part D plan payments for antipsychotic drugs—almost $298 million in plan payments. Antipsychotic prescriptions from internal medicine physicians comprised 36 percent of Medicare Part D plan payments for antipsychotic drugs, while family medicine and psychiatry or neurology prescriptions comprised about 30 and 16 percent, respectively. Nurse practitioner and physician assistant prescriptions collectively accounted for almost 5 percent of antipsychotic drug claim payments, while the remaining 13 percent encompassed many specialties. Quetiapine fumarate, risperidone, and olanzapine were the most commonly prescribed antipsychotic drugs for older adult Medicare Part D enrollees with dementia in 2012, comprising approximately $246 million in plan payments. (See table 5.) Haloperidol and aripiprazole were also commonly prescribed; these two drugs were prescribed to almost 9 and 6 percent of Medicare Part D enrollees with dementia, respectively. Experts we spoke with and research we reviewed commonly identified certain factors that are specific to the patient that contribute to antipsychotic prescribing, such as patient agitation or delusions. Experts and research also identified certain contributing factors that are specific to settings, such as nursing homes or hospitals. The majority of experts we spoke with and some research articles we reviewed highlighted agitation, aggression, or exhibiting a risk to oneself or others as factors that contribute to the decision to prescribe antipsychotics. For example, in a study examining the MDS from 1999 to 2006 in eight states, 51 percent of aggressive nursing home residents diagnosed with dementia were prescribed antipsychotic drugs in 2006, as opposed to 39 percent of residents with behavioral symptoms but who were not aggressive during that same time period. The study suggested that aggressive residents may have been more likely to be prescribed antipsychotics because of the greater risk of injury associated with the aggressive behavior. This is consistent with findings from our analysis of nursing home assessment data; we found that, of residents diagnosed with dementia and documented as being a risk to themselves or others, 61 percent had an antipsychotic drug prescription in 2012.
Many experts we interviewed identified other situations that may warrant the use of antipsychotics despite their risk, such as patients experiencing frightening delusions or hallucinations that cause the patient to act out in ways that may be violent or harmful. Several experts noted that individuals experiencing these psychotic and other behaviors may be suffering from distress and are more likely to be prescribed antipsychotic drugs to ease their distress and improve their quality of life. For example, individuals may injure themselves or strike another resident or staff member because of delusions that these people intend to kill them. A few research articles identified psychotic behaviors as a contributing factor. For instance, one study that examined medical records of more than 200 nursing home residents with dementia found that 47 percent of residents who were on an antipsychotic also had a diagnosis of psychosis. The research we reviewed also cited other specific patient characteristics associated with higher antipsychotic use in dementia patients. Patient characteristics such as age, gender, race or ethnicity, and psychiatric diagnoses were associated with higher antipsychotic prescribing in several articles. For example, in one study of nursing home assessments and Medicaid drug claims from seven states, researchers found that nursing home residents with psychiatric co-morbidities, such as anxiety and depression without psychosis, were more likely to be prescribed antipsychotic drugs. Male gender was also mentioned as a patient characteristic associated with higher antipsychotic prescribing in three research articles. In our analyses of 2012 Medicare data, males had a higher prescribing rate in the nursing home, while females had a higher rate outside of the nursing home. Finally, one article found that black nursing home residents were more likely to be prescribed antipsychotic drugs, while another article found that black residents were less likely to receive them when compared to white residents. Experts and research identified factors within the setting that an individual visits or resides in, such as nursing homes or hospitals, as contributing to the decision to prescribe antipsychotic drugs to older adults. Among nursing homes, experts and research cited factors, including the culture of the facility, the level of staff training and education, and the number of staff at the nursing home, as contributing to the decision to prescribe antipsychotic drugs to older adults. Specifically, nursing home leadership—such as administrators and medical directors—and culture were cited by half of the experts and two of the research articles. An expert told us that when the leadership of the nursing home believes it is broadly acceptable to provide antipsychotic drugs to residents with dementia, this belief spreads throughout the facility. One study examining variation in antipsychotic use in nursing homes looked at the pharmacy claims and nursing home assessments of more than 16,000 residents in 1,257 nursing homes. The study found that new nursing home residents admitted to facilities with high antipsychotic prescribing rates were 1.4 times more likely to receive antipsychotics, even after controlling for patient-specific factors. In addition to nursing home culture and leadership, many experts and two research articles identified staff or prescriber education and training on antipsychotic prescribing for individuals with dementia as affecting antipsychotic drug prescribing.
One industry group we spoke with indicated that physician training specifically regarding older adults with dementia in nursing homes and knowledge of related federal regulations are often lacking. Similarly, a study in 68 nursing homes in Connecticut examining the knowledge of nursing home leaders and staff, who often set the tone for prescribing antipsychotic drugs and observing patients’ behavioral symptoms, found that most of the certified nursing assistants—96 percent—were not aware of the serious risks to residents that can result from antipsychotic use. The study also found that 56 percent of direct-care staff believed medications worked well to manage resident behavior. Another article reported that antipsychotic drug prescribing for individuals with dementia decreased from 20.3 to 15.4 percent in one nursing home after the implementation of an educational in-service training designed to reduce the inappropriate use of antipsychotic prescribing and increase documentation of non-pharmacological interventions. In expert interviews, education of staff was identified as a factor that can contribute to minimizing unnecessary antipsychotic prescribing. One provider group noted that, in order to reduce antipsychotic use, a facility would need to invest in professional training for staff in a way that provides information about adequate alternatives to antipsychotic drugs. Nursing home staffing levels, specifically low staff levels, were also cited as a contributing factor to antipsychotic drug use in one research article and by a few experts. For example, one study examined more than 5,000 nursing homes and 561,000 residents by linking 2009 and 2010 prescription drug claims to the Nursing Home Compare database to identify a nationwide pattern of antipsychotic drug use. The study found that the nursing homes in the highest quintile of antipsychotic drug use had significantly less staff than those in the lowest quintile. An expert group noted that nursing homes with less staff may not have enough activities and oversight for the patients, which in turn may make the nursing home residents susceptible to higher antipsychotic drug use. In addition, the majority of experts we spoke with told us that entering a nursing home from a hospital is a factor leading to higher antipsychotic prescribing in the nursing home. These experts agreed that antipsychotic drugs are often initiated in hospital settings and carried over to nursing home settings. One industry group we spoke with noted that individuals with dementia go to the hospital frequently and can be prescribed an antipsychotic drug if they exhibit disruptive behavior. Another industry group attributed the actual prescribing of antipsychotic drugs to hospital care culture and stated that the prescribing of antipsychotics is a common practice in hospitals for treating individuals with dementia. A research study that examined the medical charts of 73 residents in seven nursing homes found that 84 percent of the residents that had been admitted to the nursing home from the hospital were admitted on at least one psychoactive medication—including antipsychotics. Finally, experts we spoke with indicated that caregivers’ frustration with the behavior of individuals with dementia can lead to requests for antipsychotic drugs. For example, an advocacy group we spoke with mentioned that a caregiver may request an antipsychotic drug for an individual with dementia in an effort to keep them in the home.
The individual with dementia may not recognize their relative, which can cause them agitation. To keep the individual calm so that they can stay in the home and not be placed in a nursing home, an antipsychotic medication may be prescribed. Representatives from another provider group explained that when an individual with dementia has an unmet need, they may also appear to be in distress, which may cause the caregiver to become frustrated because they do not know how to relieve this distress. HHS agencies, including CMS, AHRQ, and NIH, have taken actions to address antipsychotic drug use by older adults with dementia in nursing homes. However, HHS has done little to address antipsychotic drug use among older adults with dementia living in settings outside of the nursing home. Under the National Plan to Address Alzheimer’s Disease, HHS has a goal to expand support for people with Alzheimer’s disease and their families, with emphasis on maintaining the dignity, safety, and rights of those suffering from this disease. To reach this goal, HHS outlined several actions, including monitoring, reporting, and reducing the use of antipsychotic drugs by older adults in nursing homes. CMS has taken the lead in carrying out this work. Other HHS agencies have also done work related to reducing antipsychotic drug use in nursing homes. In 2012, CMS launched the National Partnership to Improve Dementia Care in Nursing Homes with federal and state agencies, nursing homes, providers, and advocacy organizations. This was in response to several reports dating back to 2001 published by the HHS Inspector General and to advocate concerns about the persistently high rate of antipsychotic drug use and the quality of care provided to nursing home residents with dementia. The National Partnership began with an initial goal of reducing the national prevalence of antipsychotic drug use in long-stay nursing home residents by at least 15 percent by December 31, 2012. CMS used publicly reported measures from the Nursing Home Compare website to track the progress of the National Partnership and, according to officials, to reach out to those states and individual facilities with high prescribing rates. In the fourth quarter of 2011, which was deemed the baseline, 23.8 percent of long-stay nursing home residents nationwide were prescribed an antipsychotic drug. While the National Partnership did not reach its target reduction in 2012, by the end of 2013 the national use rate had decreased to 20.2 percent, a 15.1 percent reduction. The majority of states showed some improvement in their rates; however, some states showed much more improvement than others. For example, Delaware showed a 27 percent reduction—from 21.3 to 15.5 percent—in the prevalence of antipsychotic drug use from 2011 through 2013, while Nevada saw a smaller reduction of 2.7 percent—from 20.3 to 19.7 percent—during the same period. The National Partnership is working with state coalitions, as well as nursing homes, to reduce this rate even further. In September 2014, CMS established a new set of national goals to reduce the use of antipsychotic drugs in long-stay nursing home residents by 25 percent by the end of 2015 and 30 percent by the end of 2016, which, assuming a baseline of 23.8 percent, would lead to a prescribing rate of 16.7 percent. Beginning in January 2015, CMS’s Five-Star Quality Rating System for nursing homes will be based, in part, on this measure of the extent to which antipsychotic drugs are used in the nursing home.
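The goal and progress figures above are simple proportional reductions from the fourth-quarter 2011 baseline. A minimal Python sketch of that arithmetic, using only the percentages reported in this section (the function names are ours, for illustration):

# Percentage-reduction arithmetic behind the National Partnership goals,
# using the figures reported above (23.8 percent baseline, Q4 2011).
baseline = 23.8  # percent of long-stay residents prescribed an antipsychotic

def rate_after_reduction(baseline_pct, reduction_pct):
    """Prescribing rate remaining after a proportional reduction."""
    return baseline_pct * (1 - reduction_pct / 100)

def percent_reduction(start_pct, end_pct):
    """Proportional reduction between two prescribing rates."""
    return (start_pct - end_pct) / start_pct * 100

# 2016 goal: a 30 percent reduction from the baseline.
print(round(rate_after_reduction(baseline, 30), 1))  # 16.7

# Observed change from the baseline through the end of 2013.
print(round(percent_reduction(baseline, 20.2), 1))   # 15.1

# State example: Delaware, 2011 through 2013.
print(round(percent_reduction(21.3, 15.5)))          # 27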
The Five-Star Quality Rating System provides a way for consumers to compare nursing homes on the Medicare Web site. Previously, the antipsychotic use measure was displayed but not included in the calculation of each nursing home’s overall quality score. Person-centered care is an approach to care that focuses on residents as individuals and supports the caregivers working most closely with them. It involves a continual process of listening, testing new approaches, and changing routines and organizational approaches in an effort to individualize and de-institutionalize the care environment. Participants in the National Partnership also include the Advancing Excellence in America’s Nursing Homes Campaign, a major initiative of the Advancing Excellence in Long Term Care Collaborative. The National Partnership includes regular conference calls with states, regions, and advocates, and presentations by experts in the field, to share best practices and brainstorm ways to improve dementia care in their facilities. In addition, CMS has taken four additional actions that aim to reduce antipsychotic drug use among older adults in nursing homes. First, CMS provided additional guidance and mandatory training around behavioral health and dementia care from 2012 through 2013 to the state surveyors responsible for reviewing and assessing nursing homes. This was done in order to improve surveyors’ ability to identify the use of unnecessary drugs, including inappropriate use of antipsychotic drugs. Second, QIOs have focused some of their efforts on reducing antipsychotic drug use in nursing homes. For example, beginning in 2013, the QIOs provided training to nearly 5,000 nursing homes on the appropriate use of antipsychotic medications. Third, CMS recently concluded pilots of a new dementia-focused survey that examines the prescribing of antipsychotic drugs to older adults with dementia living in nursing homes. CMS reported that the focused survey pilot results will allow the agency to gain new insight about the current survey process, including how the process can be streamlined to more efficiently and accurately identify and cite deficient practices as well as to recognize successful dementia care programs. The pilot consisted of onsite, targeted surveys of dementia care practices in five nursing homes in each of five states. Fourth, CMS began reporting the rate of chronic use of atypical antipsychotic drugs by older adult Medicare beneficiaries living in nursing homes for Medicare Part D plans in 2013. This information is publicly available on the Medicare Part D Compare Website, which is used by Medicare beneficiaries comparing Medicare Part D plans. The measure used for Medicare Part D plans differs in a few respects from the measure used to assess nursing homes. First, the Medicare Part D measure examines chronic use, defined as having at least 3 months or more of a prescription for an atypical antipsychotic drug, whereas the nursing home measure includes any use. Additionally, the Medicare Part D measure only includes atypical antipsychotic drugs, whereas the nursing home measure includes all antipsychotic drugs. Of the 421 Medicare Part D plans reporting in 2012, the rate of use among Medicare Part D enrollees residing in nursing homes ranged from 0 to almost 64 percent. The average among all Medicare Part D plans in 2012 was approximately 22 percent of enrollees residing in nursing homes having at least 3 months or more of a prescription.
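The two measures differ mainly in the qualifying threshold and the drug set counted. A minimal Python sketch of that contrast, under assumed, simplified column names (enrollee_id, drug_class, days_supply), and reading "3 months" as 90 days of supply; this is illustrative, not CMS's actual specification:

# Illustrative sketch of the two measures contrasted above, not CMS's actual
# specifications. Column names are simplified placeholders.
import pandas as pd

def chronic_atypical_use(claims: pd.DataFrame) -> pd.Series:
    """Part D plan measure: 3 or more months of atypical antipsychotic supply.

    Returns a boolean Series indexed by enrollee; enrollees with no atypical
    antipsychotic claims do not appear.
    """
    atypical = claims[claims["drug_class"] == "atypical_antipsychotic"]
    total_supply = atypical.groupby("enrollee_id")["days_supply"].sum()
    return total_supply >= 90  # assumed reading of "3 months or more"

def any_antipsychotic_use(claims: pd.DataFrame) -> pd.Index:
    """Nursing home measure: any antipsychotic use, conventional or atypical."""
    mask = claims["drug_class"].str.contains("antipsychotic")
    return pd.Index(claims.loc[mask, "enrollee_id"].unique())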
CMS told us that variation in antipsychotic prescribing among Medicare Part D plans may be explained by the prescribing practices in the plan’s service area, nursing homes’ willingness to allow the use of antipsychotic drugs for the behavioral symptoms of dementia, resident need, and success in implementing interventions to reduce the inappropriate use of antipsychotic drugs. In addition to CMS actions, AHRQ and NIH have awarded research grants for work related to antipsychotic drug use by older adults with dementia in nursing homes. AHRQ has funded individual grants for work related to antipsychotic drug use in nursing homes through its Center for Evidence and Practice Improvement and the Centers for Education & Research on Therapeutics (CERT) program. For example, in 2011, CERT funded several project centers for a 5-year period to study a broad range of health care issues, including one at Rutgers University that studied patterns of antipsychotic drug use, along with the safety and effectiveness of antipsychotic drug use for individuals living in nursing homes. Within the NIH, the National Institute on Aging and the National Institute of Mental Health have also funded related research, including a number of studies examining the safety of antipsychotic drugs in older adults. Some stakeholders and other provider groups we spoke with expressed overall support of HHS’s efforts, while others cautioned that the emphasis should not curtail access to antipsychotic drugs for those individuals who need them. Specifically, stakeholders indicated that the collaboration between public and private organizations, as part of the National Partnership, along with the sharing of practices aimed at reducing antipsychotic drug use, contributed to the campaign’s success. Stakeholders also mentioned that the National Partnership prompted nursing homes to pay attention and start talking about issues related to antipsychotic drug use. Some stakeholders further indicated that HHS’s initiatives have brought focus to the issue of antipsychotic drug use among older adults in nursing homes. Conversely, other groups and individuals involved in HHS’s efforts expressed concern that the emphasis on reducing antipsychotic drug use in nursing homes could result in some individuals who need these medications not receiving them. One researcher we spoke with noted that because nursing homes’ antipsychotic drug use is measured and publicly reported, these facilities may worry about their antipsychotic drug rate and focus on the bottom-line number instead of what is good for the individual. CMS officials told us that they are careful in their messaging to acknowledge that antipsychotic drugs have a useful prescribing purpose and therefore will never be totally eliminated. They are working with providers to develop a comprehensive view of what a patient potentially needs, emphasizing that antipsychotic drugs should not be the first-line intervention. While the National Alzheimer’s Plan was established to improve care for all individuals with dementia regardless of the setting where they reside, HHS efforts related to reducing antipsychotic drug use among older adults have primarily focused on those living in nursing homes, with less activity geared toward those living outside of nursing homes.
HHS officials noted that the focus has been on reducing antipsychotic drug use rates in nursing homes for a variety of reasons, including the severity of dementia among nursing home residents and the agency’s responsibility to ensure appropriate training of nursing home staff. However, the risk of antipsychotic drugs to older adults is not specific to those in nursing homes. Furthermore, we found that 1 in 7 Medicare Part D enrollees with dementia outside of the nursing home were prescribed an antipsychotic drug in 2012. We identified one activity by HHS’s ACL that examined a topic related to the use of antipsychotic drugs, specifically the use of non-pharmacological interventions in the treatment of individuals with dementia. In 2012, ACL partnered with a research group to conduct a study on non-pharmacological treatments and care practices for individuals with dementia and their caregivers. The study results were presented in a white paper and disseminated on the ACL’s Web page. ACL also included the study results in a newsletter distributed to state organizations on aging. ACL officials also told us that they participate in the National Partnership as a stakeholder organization, including reviewing the training materials that were distributed to nursing homes. However, ACL officials told us that none of their other past activities have dealt specifically with reducing antipsychotic drug use among older adults outside of nursing homes. While ACL has not focused on reducing antipsychotic drug use among older adults outside of nursing homes, ACL is responsible for other parts of the National Alzheimer’s Plan related to improving dementia care in the community. ACL partners with national groups to share information on dementia-related issues such as caring for minority populations with dementia and preventing elder abuse and neglect. As part of this work, ACL works with organizations, such as the Alliance for Aging Research and the National Family Caregiver Alliance, to share research, host webinars and presentations, and promote issues through social media. ACL also funds grants for state long-term care ombudsmen who are responsible for advocating for older adults living in nursing homes, assisted living facilities, and other residential settings for older adults. Stakeholder groups we spoke to indicated that educational efforts similar to those provided under the National Partnership should be extended to those providing care to older adults in other settings, such as hospitals and assisted living facilities. Some stakeholders noted that some of the same material regarding non-pharmacological interventions could be shared with caregivers in these other care settings. Many experts we spoke with said that nursing home residents often arrive at the nursing home already on an antipsychotic drug. Extending educational efforts to caregivers and providers outside of the nursing home could help lower the use of antipsychotics among older adults with dementia living both inside and outside of nursing homes. The decision to prescribe an antipsychotic drug to an older adult with dementia is dependent on a number of factors, according to experts in the field, and must weigh the possible benefits of managing behavioral symptoms associated with dementia against potential adverse health risks. In some cases, the benefits of prescribing the drugs may outweigh the risks.
HHS has taken important steps to educate and inform nursing home providers and staff on the need to reduce unnecessary antipsychotic drug use and on ways to incorporate non-pharmacological practices into their care to address the behavioral symptoms associated with dementia. However, similar efforts have not been directed toward caregivers of older adults living outside of nursing homes, such as those in assisted living facilities and private residences. Targeting this segment of the population is equally important given that over 1.2 million Medicare Part D enrollees living outside of nursing homes were diagnosed with dementia in 2012 and Medicare Part D pays for antipsychotic drugs prescribed to these individuals. While the extent of unnecessary prescribing of antipsychotic drugs is unknown, older adults with dementia living outside of nursing homes are at risk of the same dangers associated with taking antipsychotic drugs as residents of nursing homes. In fact, the National Alzheimer’s Project Act was not limited to the nursing home setting, but calls upon HHS to develop and implement an integrated national plan to address dementia. HHS’s National Alzheimer’s Plan addresses antipsychotic drug prescribing in nursing homes only, however, and HHS activities to reduce such drug use have primarily focused on older adults residing in nursing homes. Given that HHS does not specifically target its outreach and education efforts relating to antipsychotic drug use to settings other than nursing homes, older adults living outside of nursing homes, their caregivers, and their clinicians in these settings may not have access to the same resources about alternative approaches to care. By expanding its outreach and educational efforts to settings outside nursing homes, HHS may be able to help reduce any unnecessary reliance on antipsychotic drugs for the treatment of behavioral symptoms of dementia for all older adults, regardless of their residential setting. We recommend that the Secretary of HHS expand the department’s outreach and educational efforts aimed at reducing antipsychotic drug use among older adults with dementia to include those residing outside of nursing homes by updating the National Alzheimer’s Plan. We provided a draft of this report to HHS for comment. In its written response, reproduced in appendix III, HHS concurred with our recommendation, stating that the agency will support efforts to update the National Alzheimer’s Plan through continued participation on the Federal National Alzheimer’s Project Act Advisory Council. HHS also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
This appendix describes our methodology for analyzing the 2012 prescribing of antipsychotic drugs for older adults with dementia in nursing homes and other settings, as well as for analyzing Medicare Part D plan payments for these antipsychotic drug prescriptions. It also describes our efforts to ensure the reliability of the data. We used two primary data sources to examine antipsychotic drug prescribing for older adults with dementia: the Medicare Part D Prescription Drug Event (PDE) data to identify antipsychotic drug prescribing for Medicare Part D enrollees in and outside of the nursing home, and the Long Term Care Minimum Data Set (MDS) to identify antipsychotic drug prescribing for all nursing home residents, regardless of Medicare Part D enrollment. To estimate the extent to which older adults residing inside and outside of nursing homes are prescribed antipsychotic drugs, we first analyzed 2012 PDE data for individuals with dementia. We used the Medicare Part D PDE data because Medicare is the primary source of insurance coverage for individuals over the age of 65 and approximately 63 percent of Medicare beneficiaries were enrolled in Medicare Part D in 2012. To identify individuals living in nursing homes, we combined the PDE claims data with data from the MDS, which includes nursing home assessments for all individuals living in nursing homes, regardless of insurance coverage. We also used data from the Medicare Master Beneficiary Summary File (MBSF), as well as the Medicare Part D Risk File, to identify diagnoses, including dementia diagnoses and diagnoses for certain conditions for which the Food and Drug Administration (FDA) has approved the use of antipsychotic drugs. We excluded from our estimates individuals with dementia also diagnosed with one of these FDA-approved conditions for antipsychotic drugs—schizophrenia and bipolar disorder. The Medicare Part D Risk File contains diagnoses based on claims from the previous year for each enrollee, so our diagnosis categories may be conservative estimates, as they did not take into account longer-standing or newer diagnoses. We also excluded enrollees with outlier data, enrollees with less than 12 months of Medicare Part D enrollment in 2012, and enrollees who died in 2012, because they did not have complete Medicare Part D data for the entire year. Finally, we excluded enrollees who resided outside of the 50 states and the District of Columbia. For these analyses, we define an individual as having been prescribed an antipsychotic drug if they received at least one prescription for an antipsychotic drug during the year, regardless of how many days’ supply the prescription covered. We identified relevant national drug codes (NDC) using a list of generic names for antipsychotic drugs, and, using those codes, we determined the number and percent of Medicare Part D enrollees who were prescribed an antipsychotic drug in 2012. The specific drugs included are listed in table 6. Within the nursing home population, our analysis of PDE data specifically identified those with a long stay in the nursing home—defined by the Centers for Medicare & Medicaid Services (CMS) as more than 100 days—because drugs for individuals with short stays—100 days or less—are generally covered under Medicare Part A, not Part D. We disaggregated the data to examine certain characteristics, such as gender, age, and geographic location.
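The cohort construction described above amounts to a join-and-filter over claims and enrollee data. A minimal pandas sketch of that logic under assumed, simplified column names (enrollee_id, ndc, dementia, schizophrenia, bipolar, months_enrolled, died_2012, nursing_home_days); the real PDE, MDS, and Risk File layouts differ, and the NDC values shown are hypothetical placeholders:

# Illustrative sketch of the cohort logic described above, not GAO's actual code.
import pandas as pd

ANTIPSYCHOTIC_NDCS = {"00093XXXX01", "00378XXXX02"}  # hypothetical NDC values

def build_cohort(enrollees: pd.DataFrame, pde_claims: pd.DataFrame) -> pd.DataFrame:
    """Apply the exclusions described above, then flag antipsychotic prescribing."""
    cohort = enrollees[
        (enrollees["dementia"] == 1)
        & (enrollees["schizophrenia"] == 0)  # FDA-approved indication: excluded
        & (enrollees["bipolar"] == 0)        # FDA-approved indication: excluded
        & (enrollees["months_enrolled"] == 12)
        & (enrollees["died_2012"] == 0)
    ].copy()

    # At least one antipsychotic claim during the year counts as "prescribed,"
    # regardless of days' supply.
    ap_claims = pde_claims[pde_claims["ndc"].isin(ANTIPSYCHOTIC_NDCS)]
    cohort["prescribed_ap"] = cohort["enrollee_id"].isin(ap_claims["enrollee_id"])

    # Long stay is defined by CMS as more than 100 cumulative nursing home days.
    cohort["long_stay"] = cohort["nursing_home_days"] > 100
    return cohort

# Prescribing rate by setting, mirroring the long-stay vs. outside split:
# cohort.groupby("long_stay")["prescribed_ap"].mean()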
To supplement our analysis of the Medicare Part D data for the nursing home population, we also analyzed 2012 data on antipsychotic prescribing and diagnoses among nursing home residents available in the MDS. This allowed us to look at a more comprehensive population of nursing home residents—all residents in a Medicare or Medicaid certified nursing home—and to examine prescribing rates by length of stay, using steps identified by CMS based on dates reported in the nursing home assessments. In addition to excluding residents with dementia also diagnosed with schizophrenia and bipolar disorder, we excluded residents with Tourette syndrome, a condition for which FDA has approved the use of certain antipsychotics, as well as Huntington’s disease, a condition for which CMS guidance has recognized antipsychotics as an acceptable treatment. Individuals with both dementia and at least one of these diagnoses accounted for about 7 percent of nursing home residents with dementia overall. We also excluded residents with outlier identification codes or other outlier data, residents under the age of 65, and residents in facilities outside of the 50 states and the District of Columbia. We included only those residents who lived through 2012, so that there was a complete year of data for each resident and because antipsychotic drugs can be used in a hospice setting to make residents more comfortable at the end of their lives. For this analysis, we determined that an individual was prescribed an antipsychotic drug if any nursing home assessment during 2012 indicated the resident took an antipsychotic drug during the previous 7 days, and we included any instance where antipsychotic use was documented. We disaggregated the data to examine certain characteristics, such as gender, age, and geographic location. To identify what Medicare Part D plans paid for antipsychotic drugs prescribed to older adults with dementia in 2012, we identified individuals with dementia using the Medicare Part D Risk File and calculated plan payments for those enrollees using the PDE claims data. We also calculated plan payments for the most commonly prescribed antipsychotic drugs, and we used the National Plan and Provider Enumeration System (NPPES) to identify the breakdown of prescriber specialties listed on antipsychotic drug claims under Medicare Part D in 2012 in order to calculate the share of plan payments for prescriptions from the specialties with the most antipsychotic prescribing for individuals with dementia (a brief illustrative sketch of this tabulation appears at the end of this appendix section). We ensured the reliability of the MDS data, Medicare PDE data, Medicare Part D Risk File data, MBSF data, Red Book data, and NPPES data used in this report by performing appropriate electronic data checks, reviewing relevant documentation, and interviewing officials and representatives knowledgeable about the data, where necessary. We found the data were sufficiently reliable for the purpose of our analyses. We conducted this performance audit from January 2014 through January 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
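The specialty tabulation referenced above is, in essence, a claim-to-prescriber join followed by a grouped sum. A minimal pandas sketch under the same caveats as before (column names npi, specialty, and plan_paid are simplified placeholders for the PDE and NPPES layouts):

# Illustrative sketch of the plan-payment tabulation described above, not GAO's
# actual code.
import pandas as pd

def payments_by_specialty(ap_claims: pd.DataFrame, nppes: pd.DataFrame) -> pd.Series:
    """Sum Part D plan payments on antipsychotic claims by prescriber specialty."""
    # Attach each claim's prescriber specialty via the NPI listed on the claim.
    merged = ap_claims.merge(nppes[["npi", "specialty"]], on="npi", how="left")
    totals = merged.groupby("specialty")["plan_paid"].sum()
    return totals.sort_values(ascending=False)

# Share of total plan payments per specialty (e.g., internal medicine ~36 percent):
# shares = payments_by_specialty(ap_claims, nppes) / ap_claims["plan_paid"].sum()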
To identify what is known from published research about factors contributing to the prescribing of antipsychotic drugs to older adults with dementia, we conducted a literature search among recently published articles; specifically, we searched for relevant articles published from January 1, 2009, through March 31, 2014. We conducted a structured search of various databases for relevant peer-reviewed and industry journals, including MEDLINE, BIOSIS Previews, and ProQuest. Key terms included various combinations of “antipsychotic,” “dementia,” “elderly,” “older adults,” “nursing homes,” “community,” “assisted living,” “home health,” “medication management,” and “medication monitoring.” From all database sources, we identified 386 articles. We first reviewed the abstracts of each of these articles for relevance in identifying contributing factors related to the use of antipsychotic drugs both inside and outside of nursing homes. For those articles we found relevant, we reviewed the full article and excluded those where the research (1) was conducted outside the United States; (2) included individuals less than 65 years of age; or (3) was an editorial submission. We added one article that could be linked to original research outside of the research cited in the article. After these exclusions and additions, 42 articles remained: 22 focused on nursing homes; 11 focused on settings outside of nursing homes; 7 focused on both settings; and in 2 articles, the settings were either unclear or undetermined. Articles were then coded by analysts according to whether they identified contributing factors for use of antipsychotic drugs. We found 18 that contained detailed discussion of factors that contribute to antipsychotic drug use among older adults: Bowblis, J. R., S. Crystal, O. Intrator, and J. A. Lucas. “Response to Regulatory Stringency: The Case of Antipsychotic Medication Use in Nursing Homes.” Health Economics, vol. 21 (2012). Briesacher, B. A., J. Tjia, T. Field, K. M. Mazor, J. L. Donovan, A. O. Kanaan, L. R. Harrold, C. A. Lemay, and J. H. Gurwitz. “Nationwide Variation in Nursing Home Antipsychotic Use, Staffing and Quality of Care.” Abstracts of the 28th ICPE 2012 (2012). Briesacher, B. A., J. Tjia, T. Field, D. Peterson, and J. H. Gurwitz. “Antipsychotic Use Among Nursing Home Residents.” The Journal of the American Medical Association, vol. 309, no. 5 (2013). Chen, Y., B. A. Briesacher, T. S. Field, J. Tjia, D. T. Lau, and J. H. Gurwitz. “Unexplained Variation across U.S. Nursing Homes in Antipsychotic Prescribing Rates.” Archives of Internal Medicine, vol. 170, no. 1 (2010). Crystal, S., M. Olfson, C. Huang, H. Pincus, and T. Gerhard. “Broadened Use of Atypical Antipsychotics: Safety, Effectiveness, and Policy Challenges: Expanded Use of these Medications, Frequently Off-label, Has Often Outstripped the Evidence Base for the Diverse Range of Patients Who Are Treated with Them.” Health Affairs, vol. 28, no. 5 (2009). Department of Health and Human Services, Office of Inspector General. “Medicare Atypical Antipsychotic Drug Claims for Elderly Nursing Home Residents.” OEI-07-08-00150, May 2011. Fung, V., M. Price, A. B. Busch, M. B. Landrum, B. Fireman, A. Nierenberg, W. H. Dow, R. Hui, R. Frank, J. P. Newhouse, and J. Hsu. “Adverse Clinical Events among Medicare Beneficiaries Using Antipsychotic Drugs: Linking Health Insurance Benefits and Clinical Needs.” Medical Care, vol. 51, no. 7 (2013).
Healthcare Management Solutions, LLC, and the Meyers Primary Care Institute at the University of Massachusetts Medical School. Antipsychotic Drug Use Project Final Report (Columbia, Md.: January 2013). Kamble, P., J. Sherer, H. Chen, and R. Aparasu. “Off-Label Use of Second-Generation Antipsychotic Agents among Elderly Nursing Home Residents.” Psychiatric Services, vol. 61, no. 2 (2010). Kamble, P., H. Chen, J. T. Sherer, and R. R. Aparasu. “Use of Antipsychotics among Elderly Nursing Home Residents with Dementia in the United States: An Analysis of National Survey Data.” Drugs & Aging, vol. 26, no. 6 (2009). Lemay, C. A., K. M. Mazor, T. S. Field, J. Donovan, A. Kanaan, B. A. Briesacher, S. Foy, L. R. Harrold, J. H. Gurwitz, and J. Tjia. “Knowledge of and Perceived Need for Evidence-Based Education about Antipsychotic Medications among Nursing Home Leadership and Staff.” Journal of the American Medical Directors Association, vol. 14, no. 12 (2013). Lucas, J. A., S. Chakravarty, J. R. Bowblis, T. Gerhard, E. Kalay, E. K. Paek, and S. Crystal. “Antipsychotic Medication Use in Nursing Homes: A Proposed Measure of Quality.” International Journal of Geriatric Psychiatry (2014). Molinari, V. A., D. A. Chiriboga, L. G. Branch, J. Schinka, L. Shonfeld, L. Kos, W. L. Mills, J. Krok, and K. Hyer. “Reasons for Psychiatric Medication Prescription for New Nursing Home Residents.” Aging & Mental Health, vol. 15, no. 7 (2011). Rhee, Y., J. G. Cernansky, L. L. Emanuel, C. G. Chang, and J. W. Shega. “Psychotropic Medication Burden and Factors Associated with Antipsychotic Use: An Analysis of a Population-Based Sample of Community-Dwelling Older Persons with Dementia.” Journal of the American Geriatrics Society, vol. 59 (2011). Saad, M., M. Cassagnol, and E. Ahmed. “The Impact of FDA’s Warning on the Use of Antipsychotics in Clinical Practice: A Survey.” The Consultant Pharmacist, vol. 25, no. 11 (2010). Sapra, M., A. Varma, R. Sethi, I. Vahia, M. Chowdhury, K. Kim, and R. Herbertson. “Utilization of Antipsychotics in Ambulatory Elderly with Dementia in an Outpatient Setting.” Federal Practitioner (2012). Tjia, J., T. Field, C. Lemay, K. Mazor, M. Pandolfi, A. Spenard, S. Ho, A. Kanaan, J. Donovan, J. H. Gurwitz, and B. Briesacher. “Antipsychotic Use in Nursing Homes Varies by Psychiatric Consultant.” Medical Care, vol. 52, no. 3 (2014). Watson-Wolfe, K., E. Galik, J. Klinedinst, and N. Brandt. “Application of the Antipsychotic Use in Dementia Assessment Audit Tool to Facilitate Appropriate Antipsychotic Use in Long Term Care Residents with Dementia.” Geriatric Nursing, vol. 35 (2014). In addition to the contact named above, Lori Achman, Assistant Director; Todd D. Anderson; Shaunessye D. Curry; Leia Dickerson; Sandra George; Kate Nast Jones; Ashley Nurhussein-Patterson; and Laurie Pachter made key contributions to this report.
Dementia affects millions of older adults, causing behavioral symptoms such as mood changes, loss of communication, and agitation. Concerns have been raised about the use of antipsychotic drugs to address the behavioral symptoms of the disease, primarily because of FDA's boxed warning that these drugs may cause an increased risk of death when used by older adults with dementia and because the drugs are not approved for this use. GAO was asked to examine psychotropic drug prescribing for older adult nursing home residents. In this report, GAO examined (1) to what extent antipsychotic drugs are prescribed for older adults with dementia living inside and outside nursing homes, (2) what is known from selected experts and published research about factors contributing to such prescribing, and (3) to what extent HHS has taken action to reduce the use of antipsychotic drugs by older adults with dementia. GAO analyzed multiple data sources, including 2012 Medicare Part D drug event claims and nursing home assessment data; reviewed research and relevant federal guidance and regulations; and interviewed experts and HHS officials. Antipsychotic drugs are frequently prescribed to older adults with dementia. GAO's analysis found that about one-third of older adults with dementia who spent more than 100 days in a nursing home in 2012 were prescribed an antipsychotic, according to data from Medicare's prescription drug program, also known as Medicare Part D. Among Medicare Part D enrollees with dementia living outside of a nursing home that same year, about 14 percent were prescribed an antipsychotic. Experts and research identified patient agitation or delusions, as well as certain setting-specific characteristics, as factors contributing to the prescribing of antipsychotics to older adults. For example, experts GAO spoke with noted that antipsychotic drugs are often initiated in hospital settings and carried over when older adults are admitted to a nursing home. In addition, experts and research have reported that nursing home staffing levels, particularly low staffing levels, contribute to higher antipsychotic drug use. Agencies within the Department of Health and Human Services (HHS) have taken several actions to address antipsychotic drug use by older adults in nursing homes, as described in HHS's National Alzheimer's Plan; however, none have been directed to settings outside of nursing homes, such as assisted living facilities or individuals' homes. While the National Alzheimer's Plan has a goal of improving dementia care for all individuals regardless of residence, HHS officials said that efforts to reduce antipsychotic use have not focused on care settings outside nursing homes, though HHS has done work to support family caregivers in general. Stakeholders GAO spoke to indicated that educational efforts similar to those provided for nursing homes should be extended to other settings. Extending educational efforts to caregivers and providers outside of the nursing home could help lower the use of antipsychotics among older adults with dementia living both inside and outside of nursing homes. GAO recommends that HHS expand its outreach and educational efforts aimed at reducing antipsychotic drug use among older adults with dementia to include those residing outside of nursing homes by updating the National Alzheimer's Plan. HHS concurred with this recommendation.
In fiscal year 1995, the Department of Defense (DOD) plans to spend over $79 billion for research, development, test, evaluation, and production of weapon systems. While DOD has acquired some of the most technologically advanced and effective weapon systems, it has often been criticized for not acquiring those systems in the most efficient manner. As weapon system programs progress through the phases of the acquisition process, they are subject to review at major decision points called milestones. The milestone review process is predicated on the principle that systems advance to higher acquisition phases by demonstrating that they meet prescribed technical specifications and performance thresholds. Figure 1.1 illustrates DOD’s weapon system acquisition process. At milestone 0, a determination is made about whether an identified mission need warrants a study of alternative concepts to satisfy the need. If warranted, the program is approved to begin the concept exploration and definition phase. At milestone I, a determination is made about whether a new acquisition program is warranted. If warranted, initial cost, schedule, and performance goals are established for the program, and authorization is given to start the demonstration and validation phase. At milestone II, a determination is made about whether continuation of development, testing, and preparation for production is warranted. If warranted, authorization is given to start the engineering and manufacturing development phase. Also, approval of this phase will often involve a commitment to low-rate initial production (LRIP). At milestone III, a determination is made about whether the program warrants a commitment to build, deploy, and support the system. DOD acquisition policy states that program risks shall be assessed at each milestone decision point before approval is granted for the next phase. The policy adds that test and evaluation shall be used to determine system maturity and identify areas of technical risk. Operational test and evaluation (OT&E) is a key internal control to ensure that decisionmakers have objective information available on a weapon system’s performance and to minimize the risks of procuring costly and ineffective systems. OT&E has been defined as (1) the field test, under realistic conditions, of any item of (or key component of) weapons, equipment, or munitions for the purpose of determining its effectiveness and suitability for use in combat by typical military users and (2) the evaluation of the results of such a test. Over a period of many years, the Congress has been concerned about the performance of weapon systems being acquired by DOD. As early as 1972, the Congress required DOD to provide it with information on the OT&E results of major weapon systems before committing them to production. However, the Congress continued to receive reports from the DOD Inspector General (DOD-IG), us, and others that (1) weapon systems were not being adequately tested before beginning production, (2) fielded systems were failing to meet their performance requirements, and (3) OT&E being conducted on weapon systems was of poor quality. In the late 1970s and early 1980s, the Congress enacted a series of laws to ensure that U.S. military personnel receive the best weapon systems possible and that the U.S. government receives the best value for the defense procurement dollar.
Among other things, these laws specified that independent OT&E be conducted; established the Office of the Director, Operational Test and Evaluation (DOT&E), and assigned it specific oversight duties and responsibilities; specified that OT&E of a major defense acquisition program may not be conducted until DOT&E approves the adequacy of the plans for that OT&E; required that a major system may not proceed beyond LRIP until its initial OT&E is completed; and required that DOT&E analyze the results of OT&E conducted for each major defense acquisition program and, prior to a final decision to proceed beyond LRIP, report on the adequacy of the testing and whether the results confirm that the items tested are operationally effective and suitable for combat. In the late 1980s, the Congress found that DOD was acquiring a large portion of the total program quantities, using the LRIP concept, without successfully completing OT&E. In the National Defense Authorization Act for Fiscal Years 1990 and 1991 (P.L. 101-189), the Congress addressed this situation by including a definition of LRIP and a requirement that the determination of the LRIP quantities to be procured be made when a decision is made to enter engineering and manufacturing development. According to the act, LRIP was defined as the minimum quantity needed to (a) provide production-representative articles for OT&E, (b) establish an initial production base, and (c) permit orderly ramp-up to full-rate production upon completion of OT&E. In the conference report for the act, the conferees indicated that they did not condone the continuous reapproval of LRIP quantities that eventually total a significant percentage of the total planned procurement. Also, the conferees granted an exception to the LRIP legislation for ship and satellite programs because of their inherent production complexity, small number, high unit cost, and long unit production periods. However, they directed the Secretary of Defense to develop regulations that capture the spirit of the LRIP legislation as it applies to these programs. This special consideration for ships and satellites carries with it additional reporting requirements to improve the oversight of these programs. Finally, in the National Defense Authorization Act for Fiscal Year 1994, the Congress required that the Secretary of Defense ensure that appropriate, rigorous, and structured testing be completed prior to LRIP of any electronic combat or command, control, and communications countermeasure system. Senators David Pryor and William V. Roth, Jr., requested that we review DOD’s use of LRIP in the acquisition of major defense programs. Specifically, the Senators asked that we determine whether (1) LRIP policies were resulting in the production of systems with adequate performance capabilities and (2) the legislation underlying the LRIP policies was adequate. We analyzed the legislation and DOD policies governing the production and testing of weapon systems, particularly those dealing with (1) the purposes of LRIP, (2) the criteria or requirements for entering LRIP and full-rate production, and (3) the testing requirements related to this process. We used the results of our extensive body of work from the past decade or so on defense acquisition programs and the acquisition process to determine whether the LRIP concept, as currently authorized and practiced by DOD, has resulted in a premature commitment to production of both major and nonmajor systems.
We reviewed the 1993 report of the DOD-IG on LRIP and held discussions with the DOD-IG staff. We gathered and summarized data on numerous ongoing system acquisition programs (both major and nonmajor programs) and supplemented that information with discussions with officials from the Office of the Secretary of Defense and the military services. In addition, we gathered and analyzed information on the advantages and disadvantages of conducting OT&E before LRIP (for both major and nonmajor systems). We also held discussions with those officials on DOD’s current acquisition strategies and OT&E policies and practices. This review was conducted from April 1993 to May 1994 in accordance with generally accepted government auditing standards. Our extensive body of work over the years has amply demonstrated that improper usage of LRIP has been widespread. Many major and nonmajor systems from each of the services have been prematurely committed to production, which has often resulted in problems being found after a substantial number of units have been produced and a significant commitment has been made to the entire procurement program. In addition, contrary to the statutory emphasis on minimum LRIP quantities and conferee statements, many programs continue in LRIP for prolonged periods. DOD’s continuing reluctance to employ the discipline of early OT&E is evident in each of the services and in many major and nonmajor programs. Adequate controls have not been established over the start and continuation of LRIP. A requirement to successfully complete enough independent testing in an operational environment to ensure that the item meets its key performance parameters before LRIP starts would be feasible in most cases and would be an effective management control over the premature start of production. Over the years, we have found numerous instances from all three services in which production of major and nonmajor systems was permitted to begin and continue based not on the systems’ technical maturity, but on schedule or other considerations. DOD has frequently committed programs to production without assurance that the systems would perform satisfactorily. Many of the weapon systems that started production prematurely later experienced significant operational effectiveness and/or suitability problems. As a result, major design changes were often needed to correct the problems, additional testing was needed to verify that the corrective action was effective, and costly retrofits were needed for any delivered units. A few of the many examples of premature and extensive commitments to production of major and nonmajor systems are shown in the following tables. Table 2.1 shows systems that entered LRIP before any operational tests were conducted and later experienced significant problems during the tests. Table 2.2 shows systems that were subjected to early operational tests but were allowed to enter LRIP even though the performance deficiencies were not corrected. Programs that enter production prematurely often require more time and resources than originally planned to correct problems and to meet the requirements for full-rate production. LRIP is often continued, despite the evidence of technical problems, well beyond that needed to provide test articles and to establish an initial production capability. As a result, major production commitments are often made during LRIP.
In the conference report for the LRIP legislation, the conferees stated that they did not intend to authorize the continuance of LRIP on an indefinite basis. Nevertheless, the existing LRIP legislation does not include any specific principles or guidelines on when and how programs should begin LRIP, on the type and amount of testing to be done before LRIP, on how much LRIP can or should be done, or under what circumstances LRIP should be curtailed or stopped. Instead, the emphasis has been placed almost entirely on the full-rate production decision, at which point the law requires, among other things, that a report be provided on the adequacy of the testing conducted and an assessment be made of the system’s operational effectiveness and suitability. Although programs may be delayed in getting approval for full-rate production, LRIP is rarely stopped or slowed significantly. As a result, the decision to start LRIP, in many cases, is also the de facto full-rate production decision. DOD’s written policies provide that acquisition strategies be event-driven and link major contractual commitments and milestone decisions to demonstrated accomplishments in development, test, and initial production. However, DOD policies state that a primary goal in developing an acquisition strategy shall be to minimize the time and cost of satisfying a need consistent with common sense and sound business practices. In addition, DOD’s policies state, but without detailed requirements, that OT&E should be conducted throughout the acquisition process. However, while DOD is statutorily required to conduct OT&E before full-rate production is approved, DOD’s policies permit LRIP to begin before any OT&E is conducted. The point at which LRIP begins is not a required milestone under DOD policy. As a result, for many major defense acquisition programs, the services do not plan to conduct any OT&E prior to the start of LRIP. It has been and continues to be the exception, rather than the rule, for programs to include OT&E before LRIP starts. In some instances, the services plan to start LRIP even though they plan to use developmental or prototype units, not LRIP units, for their initial OT&E. Although not required by written DOD or Navy policy, the Navy now performs a limited phase of OT&E before LRIP on some of its programs to prepare for later phases of OT&E. However, these programs are not required to meet specific testing-related criteria before entering LRIP. As shown in table 2.2, even when some OT&E was conducted prior to the start-up of production, identified problems were not verified as corrected, and significant performance problems emerged later in the program. Over the past several years, DOD has stated that it planned to reemphasize the need for OT&E as early as possible in the acquisition process. However, we have not detected any reemphasis on early OT&E, and DOD’s 1991 revision of its key acquisition directives did not address this issue. DOD acquisition and testing officials concede that there has not been any major reemphasis on early OT&E. In fact, DOD has recently supported legislative proposals that would reduce the current overall requirements to conduct OT&E. DOD has recognized that reducing the amount of production prior to completing development provides for greater design maturity, which increases the likelihood of meeting system requirements and avoiding retrofit costs.
In commenting on our 1992 report, DOD officials said they were lessening the amount of concurrent development and production in weapon programs due to the end of the Cold War. In 1992, the Under Secretary of Defense for Acquisition also stated that the need to replace existing weapon systems in order to maintain a significant technological advantage was no longer as urgent. However, the acquisition strategies of many current programs do not reflect these positions. DOD’s acquisition practices continue to stress the importance of minimizing the time to deploy new or improved weapon systems. Highly concurrent acquisition strategies continue to be featured in many current major and nonmajor programs, with little, if any, OT&E expected until well after production has started and a significant commitment has been made to the procurement of the system. Our analysis of the current selected acquisition reports shows that many programs continue to postpone initial OT&E until well after the start of production. LRIP is expected to be approved in February 1996 for the Army’s Secure Mobile Anti-Jam Reliable Tactical Terminal. Initial OT&E will not be completed until July 1998, by which time a total of 125 units, or 3 years of LRIP, is planned to be approved out of a total program quantity of 367 units. The LRIP decision for the Air Force’s F-22 aircraft program is expected in June 1998, and initial OT&E is to be conducted from March to November 2001. Thus, 1 year of preproduction verification and 4 years of LRIP—80 aircraft out of a total quantity of 442 units—are planned to be approved before completion of OT&E. The Navy plans to procure 106 of the 630 planned Multifunctional Information Distribution Systems before OT&E is completed in December 2000 and a full-rate production decision is made in June 2001. In addition, 42 prototype systems are to be built as part of the system development effort. These programs feature major commitments to LRIP before development is completed and before any OT&E is completed, even though developmental prototypes are expected to be available for testing in these programs. Accordingly, a substantial and frequently irreversible commitment to production will have been made before the results of independent testing are available to decisionmakers. In its 1993 report, the DOD-IG found that major defense acquisition programs were entering LRIP without meeting development, testing, and production readiness prerequisites. As a result, the DOD-IG concluded that DOD incurred excessive program risk of overcommitment to production of systems without obtaining assurance that the design is stable, potentially operationally acceptable, and capable of being produced efficiently. Among other things, the DOD-IG recommended that DOD (1) provide guidance on the specific minimum required program accomplishments for entry into and continuation of LRIP and (2) require that program-specific exit criteria be established for entry into and continuation of LRIP. DOD is currently considering what, if any, actions will be taken in light of the DOD-IG’s recommendations. The decision to begin LRIP should be given much more attention because decisionmakers find it very difficult to stop or slow down programs once they are in production. Given the cost risks involved and DOD’s inability or unwillingness to curtail production after it starts, we agree with the DOD-IG that controls are urgently needed over the start and continuation of LRIP.
A key criterion for all programs beginning LRIP should be the completion of a phase of independent testing in an operational environment. During such testing, some problems should be expected. However, enough realistic testing should be conducted for the services’ independent testing agencies and/or DOT&E to be able to certify to the decision authority that (1) the system’s developmental testing is essentially complete and the basic results have been validated in an operational environment, (2) the system has clearly shown that it can meet the key parameters among its minimum acceptable performance requirements, (3) the system has clearly demonstrated the potential to fully meet all of its minimum acceptable requirements for performance and suitability without major or costly design changes, and (4) the system should be able to readily complete its remaining OT&E in time to support the planned full-rate production decision. Comprehensive testing of a system’s operational suitability features, such as supportability, may not be possible during early independent testing. However, the testing should be sufficient to reveal major suitability problems. Conducting OT&E before LRIP will not, by itself, result in a better weapon system, but it is the best means available to guard against the premature start of production. Decisionmakers need verifiable information on system design maturity and where corrective actions are needed before production start-up. Every effort should be made to correct problems in development, not in production, because early fixes are less expensive, easier to implement, and less disruptive. In today’s national security environment, there should be very few cases in which an urgent need dictates that DOD start production without assurance that the system will work as intended. We realize that, for some programs, a significant effort (personnel and facilities) may be needed to produce one or more prototypes for a phase of early OT&E. These programs would typically involve inherent fabrication complexity, small procurement quantities, high unit cost, and long unit production periods. To suspend that type of effort while OT&E is underway could be costly and disruptive. Alternatively, key subsystems should be independently tested on surrogate platforms before production. Once underway, production should be limited until acceptable OT&E results are obtained on the entire system. We believe that LRIP should be used to focus on (1) addressing producibility and product quality issues; (2) producing just enough systems to support initial OT&E, to prove out the production process, and to sustain the production line; and (3) testing those systems and correcting any deficiencies. A limit on the quantity that can be produced under LRIP would provide an opportunity to correct problems that are identified during initial OT&E, without incurring the risk of overproducing under the LRIP phase. 
We recommend that the Secretary of Defense revise DOD’s acquisition policies in the following ways: Require that, before entry into LRIP, programs (with the exception of ships, satellites, and those other programs that involve inherent fabrication complexity, small procurement quantities, high unit costs, and long unit production periods) plan, buy prototypes for, and conduct enough realistic testing for the service’s independent testing agency and/or DOT&E to be able to certify to the decision authority that (1) the system’s developmental testing is essentially complete and the basic results of that testing have been validated in an operational environment; (2) the system has clearly shown that it can meet the key parameters among its minimum acceptable performance requirements; (3) the system has clearly demonstrated the potential to fully meet all of its minimum acceptable requirements for performance and suitability without major or costly design changes; and (4) the system should be able to readily complete its remaining OT&E in time to support the planned full-rate production decision. Require that those programs excluded from the requirement to test prototypes instead test all key subsystems in an operational environment before entry into LRIP. Adopt the recommendations made by the DOD-IG regarding controls over the start and continuation of LRIP, such as (1) providing guidance on the specific minimum required program accomplishments for entry into and continuation of LRIP and (2) requiring that program-specific exit criteria be established for entry into and continuation of LRIP. We also recommend that the Secretary of Defense work with the service secretaries to ensure that these policies are implemented for the acquisition of both major and nonmajor systems. The legislation defining LRIP has not been effective in accomplishing its purpose, which was to limit the commitment to major production quantities pending satisfactory completion of OT&E. Therefore, we recommend that the Congress legislatively mandate (1) that certain OT&E requirements be met before LRIP may start and (2) specific limits on the number of units allowed to be produced during LRIP. Specifically, the Congress may wish to require that all defense acquisition programs (major and nonmajor) conduct enough realistic testing on the entire system or key subsystems to ensure that its key performance parameters are met before LRIP is permitted to start. In addition, the Congress may wish to (1) specify a percentage (10 percent, for example) of a system’s total procurement beyond which a program may not proceed during LRIP and/or (2) amend 10 U.S.C. 2400 (by deleting subsection (b)(3)) to preclude the use of LRIP authority to ramp up the production rate prior to the successful completion of OT&E.
Pursuant to a congressional request, GAO reviewed the Department of Defense's (DOD) use of low-rate initial production (LRIP) in its systems acquisition programs, focusing on whether: (1) DOD LRIP practices result in the production of adequate systems; and (2) the legislation underlying LRIP policies is adequate. GAO found that: (1) despite congressional emphasis on the need for operational test and evaluation (OT&E) prior to system production, legislation and DOD policies permit LRIP to start before any OT&E is conducted because there are no specific guidelines on the type and amount of testing required prior to LRIP; (2) the lack of guidelines has resulted in substantial inventories of unsatisfactory weapons that need costly modifications and some deployments of substandard systems to combat forces; (3) correction of system deficiencies in prematurely produced systems lengthens production schedules and increases resource consumption; (4) major production decisions are often made during LRIP; (5) LRIP severely limits Congress' and DOD decisionmakers' options for dealing with deficient systems; (6) DOD needs accurate, independent information on system performance and suitability to minimize the risks of procuring costly and ineffective systems; and (7) in light of the current national security environment, there should not be an urgent need to start LRIP before system capabilities are adequately tested.
Herbal dietary supplements are traditionally used to alleviate certain medical conditions, such as anxiety, digestive problems, and depression, and to improve general quality of life. However, for many traditional uses, there is no clear scientific evidence that they prevent or treat underlying diseases or conditions. Further, some herbal dietary supplements may interact in a potentially harmful manner with some prescription drugs. For example, according to NIH, St. John’s wort can negatively affect the efficacy of antidepressants, HIV treatments, cancer drugs, and anticoagulants, though this is not always noted on product labels. The possibility of adverse drug interactions is one of the reasons that FDA recommends that consumers check with their health practitioners before beginning any supplement regimen. The elderly are particularly at risk from these interactions, since recent studies have found that approximately 85 percent of the elderly take at least one prescription drug over the course of a year and 58 percent take three or more. Many herbal supplements have not been exhaustively tested for hazardous interactions with prescription drugs, other supplements, or foods. Under DSHEA, dietary supplements are broadly presumed safe, and FDA does not have the authority to require them to be approved for safety and efficacy before they enter the market, as it does for drugs. However, a dietary supplement manufacturer or distributor of a supplement with a “new dietary ingredient”—an ingredient that was not marketed in the United States before October 15, 1994—may be required to notify FDA at least 75 days before marketing the product, depending on the history of use of the ingredient. Also, all domestic and foreign companies that manufacture, package, label, or hold dietary supplements must follow FDA’s current good manufacturing practice regulations, which outline procedures for ensuring the quality of supplements intended for sale. Under DSHEA, a firm, not FDA, is responsible for determining that any representations or claims made about the dietary supplements it manufactures or distributes are substantiated by adequate evidence to show that they are not false or misleading. Except in the case of a new dietary ingredient, where premarket review of safety data and other information is required by law, a firm does not have to provide FDA with the evidence it relies on to substantiate effectiveness before or after it markets its products. For the most part, FDA relies on postmarket surveillance efforts—such as monitoring adverse event reports it receives from companies, health care practitioners, and individuals; reviewing consumer complaints; and conducting facility inspections—to identify potential safety concerns related to dietary supplements. Once a safety concern is identified, FDA must demonstrate that the dietary supplement presents a significant or unreasonable risk, or is otherwise adulterated, before it can be removed from the market. A product sold as a dietary supplement cannot suggest on its label or in labeling that it treats, prevents, or cures a specific disease or condition without specific approval from FDA.
Under FDA regulations, a manufacturer may submit a health claim petition in order to use a claim on its product labeling that characterizes a relationship between the product and the risk of a disease (e.g., diets high in calcium may reduce the risk of osteoporosis), and FDA may authorize the claim provided it meets certain regulatory criteria. However, manufacturers may make “qualified health claims” when there is emerging evidence for a relationship between a dietary supplement and reduced risk of a disease or condition, subject to FDA’s enforcement discretion. The claim must include specific qualifying language to indicate that the supporting evidence is limited. Dietary supplement labeling may include other claims describing how a dietary ingredient is intended to affect the normal structure or function of the body (e.g., fiber maintains bowel regularity). The manufacturer is responsible for ensuring the accuracy and truthfulness of such claims but must submit a claim to FDA for review no later than 30 days after marketing it. Because FDA does not confirm the claim—a lack of objection allows the manufacturer to use it—the following disclaimer must be included: “This statement has not been evaluated by the FDA. This product is not intended to diagnose, treat, cure, or prevent any disease.” The manufacturer does not need to provide FDA with documentation, and FDA does not test to determine if the claim is true. In addition, these claims generally may not state that a product is intended to diagnose, mitigate, treat, cure, or prevent a disease or the adverse effects associated with a therapy for a disease, either by naming or describing a specific disease. A claim also cannot suggest an effect on an abnormal condition associated with a natural state or process, such as aging. Context is a consideration; a product’s name and labeling cannot imply such an effect by use of pictures or scientific or lay terminology. Finally, a product cannot claim to be a substitute for a product that is a therapy for a disease, or claim to augment a therapy or drug. To make any of these claims, a manufacturer must submit and receive authorization of a health claim petition. The Federal Trade Commission (FTC) regulates advertising for dietary supplements and other products sold to consumers. FTC receives thousands of consumer complaints each year related to dietary supplements and herbal remedies. FTC has, in the past, taken action against supplement sellers and manufacturers whose advertising was deemed to pose harm to the general public. FDA works with FTC in this area, but FTC’s work is directed by different laws. Consuming high levels of the contaminants for which we tested the 40 products can lead to severe health consequences, such as an increased risk of cancer, as noted in table 1. Unless otherwise noted, the negative health effects described are those of acute toxicity in the human body. However, the exact effects of these contaminants on an individual depend on that individual’s specific characteristics. For instance, since lead can build up in the human body, the effect of consuming a potentially dangerous level of lead on a 55-year-old man depends on the amount of lead that man has consumed during his lifetime, among other factors. FDA has not issued any regulations addressing safe or unsafe levels of contaminants in dietary supplements, but both FDA and EPA have set certain advisory levels for contaminants in other foods.
The human body’s absorption of many contaminants is governed by intake method, so advisory levels for other foods (e.g., drinking water) cannot be strictly applied to dietary supplements. In addition, EPA sets limits on how much pesticide residue can remain on food and feed products. These pesticide residue limits are known as tolerances and are enforced by FDA. If no residue tolerance has been set for a particular pesticide, any product containing that pesticide residue is considered adulterated, and its sale is prohibited by law. See table 2 for a summary of the regulations issued by FDA or EPA regarding some of the contaminants we tested for. Our investigation found examples of deceptive or questionable marketing and sales practices for dietary supplements popular among the elderly (see table 3). The most egregious practices included suspect marketing claims that a dietary supplement prevented or cured extremely serious diseases, such as cancer and cardiovascular disease. Other dietary supplements were claimed to mitigate age-related medical conditions, such as Alzheimer’s disease and diverticular disorder. We also found some claims that followed FDA’s labeling regulations and guidelines but could still be considered deceptive or questionable and provide consumers with inaccurate information. In addition, while conducting in-person and telephone conversations with dietary supplement sellers, our investigators, posing as elderly consumers, were given potentially harmful medical advice by sales staff, including that they could take supplements in lieu of prescription medication. In making these claims, sellers put the health of consumers at risk. A link to selected audio clips from these calls is available at http://www.gao.gov/products/GAO-10-662T. Below are details on several cases in which herbal supplement marketing practices were deceptive or questionable and sometimes posed health risks to consumers. All cases of deceptive or questionable marketing and inappropriate medical advice have been referred to FDA and FTC for appropriate action. Case 2: In online materials, this garlic supplement included claims that it would (1) prevent and cure cardiovascular disease, (2) prevent and cure tumors and cancer, (3) prevent obesity, and (4) reduce glycemia to prevent diabetes. According to NIH, all these claims are unproven, and garlic is not recommended for treating these conditions. In fact, for several of these conditions, garlic may interact adversely with common FDA-approved drug treatments. Nowhere in this product’s marketing materials does the seller suggest that consumers should consult their health care providers prior to taking its supplement. While NIH recognizes that garlic may have some anticancer properties, the agency notes that additional clinical trials are needed to conclude whether these properties are strong enough to prevent or treat cancer. Further, studies have shown that garlic may alter the levels of some cancer drugs in the human body, lessening their effectiveness. For diabetes, there are no studies that confirm that garlic lowers blood sugar or increases the release of insulin in humans. In fact, NIH recommends caution when combining garlic with medications that lower blood sugar, and further suggests that patients taking insulin or oral drugs for diabetes be monitored closely by qualified health care professionals.
Case 3: According to its labeling, this ginseng supplement—which costs $500 for a 90-day supply—cures diseases, effectively prevents diabetes and cardiovascular disease, and prevents cancer or halts its progression. These claims are unproven—no studies confirm that ginseng can prevent or cure any disease. In fact, NIH recommends that breast and uterine cancer patients avoid ginseng. In addition, ginseng may adversely interact with cancer drugs. The product labeling claims do not differentiate between type 1 and type 2 diabetes. According to NIH, ginseng’s effect on patients with type 1 diabetes is not well studied. While ginseng may lower blood sugar levels in patients with type 2 diabetes, the long-term effects of such a treatment program are unclear, and it is not known what doses are safe or effective. NIH specifically recommends that consumers with type 2 diabetes use proven therapies instead of this supplement. Case 7: While our investigators posed as consumers purchasing dietary supplements, sales staff provided them with an informational booklet regarding an enzyme that claims to “ us against dementia and Alzheimer’s, exhibiting a truly miraculous capacity to optimize mental performance and fight off cognitive decline.” In fact, FDA reviewed the scientific evidence for the active ingredient of this supplement and found that it was not adequate to make such a claim. Because the agency considered such a health claim potentially misleading, FDA provided for the use of a qualified health claim that contains a disclaimer that must accompany the health claim in all labeling in which these claims appear. While the booklet we received does state the FDA disclaimer on the first page, the manufacturer follows it with a rejoinder: “The very cautious language of these claims, which FDA mandates can only be stated word for word, is at best a grudging concession to the extensive clinical research done with . Considering this agency’s legendary toughness against dietary supplements, FDA’s willingness to go this far with the suggests that the FDA must be sure it is safe to take and also that the FDA is unable to deny can improve human brain function.” Case 8: One of our fictitious consumers visited a supplement specialty store looking for a product that would help with high blood pressure. The sales representative recommended a garlic supplement and stated that the product could be taken in lieu of prescribed blood pressure medication. According to NIH, while this herb may lower blood pressure by a small amount, the scientific evidence is unclear. NIH does not recommend this supplement as a treatment for high blood pressure and warns patients to use caution while taking this product with other drugs or supplements that can lower blood pressure. Further, it is not recommended that a consumer start or stop a course of treatment without consulting with his or her health care provider. Even if a sales representative is licensed to dispense medical advice, he or she still does not know the consumer’s patient history, including other drug programs, allergies, and medical conditions, making it potentially dangerous for the sales representative to provide medical advice. Case 9: At a supplement specialty store, one of our investigators posed as an elderly consumer who was having difficulty remembering things. A sales representative recommended one of the store’s ginkgo biloba supplements. 
The consumer told the representative that he takes aspirin every day and asked if it was safe to take aspirin and ginkgo biloba together. The sales representative told him that it is completely safe to take the two together. However, according to FDA, if aspirin is taken with the recommended product, it can increase the potential for internal bleeding. We spoke to FDA and FTC regarding these 10 claims, and they agreed that the statements made in product labeling for cases 1 through 6 are largely improper, as the labeling suggests that each product has an effect on a specific disease. For case 7, FDA stated that while the specific claims discussed here are allowable, depending on the context in which they were made, FDA might consider the totality of marketing materials to be improper. FDA also agreed that the claims made to our undercover investigators in cases 8 and 10 were questionable or likely constituted improper disease claims, but that to take action, additional information as to the prevalence and context of the claims would be necessary. For case 9, FDA noted that, since the statement made by sales staff was safe usage information, not a claim about the product’s effects, it would not violate FDA regulations unless the agency could develop other evidence to show that the claim was false or misleading or constituted an implied disease claim. In addition, FDA and NIH both noted that, by definition, no dietary supplement can treat, prevent, or cure any disease. We found trace amounts of at least one potentially hazardous contaminant in 37 of the 40 herbal dietary supplement products we tested, though none of the contaminants were found in amounts considered to pose an acute toxicity hazard to humans. Specifically, all 37 supplements tested positive for trace amounts of lead. Thirty-two also contained mercury, 28 contained cadmium, 21 contained arsenic, and 18 contained residues from at least one pesticide. See appendixes III and IV for the complete results of these tests. The levels of contaminants found do not exceed any FDA or EPA regulations governing dietary supplements or their raw ingredients, and FDA and EPA officials did not express concern regarding any immediate negative health consequences from consuming these 40 supplements. However, because EPA has not set pesticide tolerance limits for the main ingredients of the herbal dietary supplements we tested, the pesticide contaminants exceed FDA advisory levels. FDA agreed that 16 of the 40 supplements we tested would be considered in violation of U.S. pesticide tolerances if FDA, using prescribed testing procedures, confirmed our results. We note that 4 of the residues detected are from pesticides that currently have no registered use in the United States. According to FDA, scientific research has not been done on the long-term health effects of consuming such low levels of many of these specific contaminants, as current technology cannot detect these trace contaminants when they are diluted in human bloodstreams. We have referred these products to FDA for its review. After reviewing test results with EPA and FDA officials, we also spoke with several of the manufacturers of supplements that had trace amounts of contaminants. The manufacturers we spoke with stated that they ensure that their products are tested for contamination, and that these tests have shown that their products do not contain contaminants in excess of regulatory standards.
Manufacturers also stated that they comply with all FDA regulations and follow good manufacturing practices as defined by the agency. While the manufacturers we spoke with were concerned about finding any contaminants in their supplements, they noted that the levels identified were too low to raise any issues during their own internal product testing processes. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made major contributions to this testimony were Jonathan Meyer and Andrew O’Connell, Assistant Directors; John Ahern; Dennis Fauber; Robert Graves; Cristian Ion; Elizabeth Isom; Leslie Kirsch; Barbara Lewis; Flavio Martinez; James Murphy; Ramon Rodriguez; Tim Walker; and John Wilbur. To determine whether sellers of herbal dietary supplements are using deceptive or questionable marketing practices to encourage the use of these products, we investigated a nonrepresentative selection of 22 storefront and mail-order retailers selling herbal dietary supplements. We identified these retailers by searching online using search terms likely to be used by actual consumers and by observing newspaper advertisements. Posing as elderly customers, we asked sales staff at each company a series of questions regarding the potential health benefits of herbal dietary supplements as well as potential interactions with other common over-the-counter and prescription drugs. While our work focused on herbal dietary supplements, we also evaluated claims made regarding nonherbal supplement products during undercover storefront visits and telephone calls. We also reviewed written marketing language used on approximately 30 retail Web sites. We evaluated the accuracy of product marketing claims against health benefit evaluations published by the National Institutes of Health and the Food and Drug Administration (FDA). To determine whether selected herbal dietary supplements were contaminated with harmful substances, we purchased 40 unique single-ingredient herbal supplement products from 40 different manufacturers and submitted them to an accredited laboratory for analysis. We selected the types of herbs to purchase based on recent surveys about supplement usage among the elderly, defined for this report as individuals over the age of 65. These surveys identified the most commonly used herbs among the elderly as chamomile, echinacea, garlic, ginkgo biloba, ginseng, peppermint, saw palmetto, and St. John’s wort. We purchased these 40 unique products from a combination of retail chain storefronts and online or mail-order retailers. For each online retailer, we selected products based primarily on relative popularity according to the site’s list of top sellers. At each retail chain storefront, because of limited selection, we selected only items that would be expected to be sold at all chain locations. All 40 products were submitted to an accredited laboratory where they were screened for the presence of lead, arsenic, mercury, cadmium, and residues from organochlorine and organophosphorus pesticides. These contaminants were selected based on prevalence and the likelihood of negative health consequences due to consumption.
The recommended daily intake levels of these contaminants and the likely negative health consequences of consumption were determined based on a review of relevant health standards and discussions with FDA and Environmental Protection Agency experts. For each herbal dietary supplement product, we submitted one unopened, manufacturer-sealed bottle to the laboratory for analysis. To identify levels of arsenic, cadmium, lead, and mercury, products were analyzed using inductively coupled plasma mass spectrometry according to method AOAC 993.14. Detection limits for these contaminants were 0.075 milligrams/kilogram, 0.010 milligrams/kilogram, 0.005 milligrams/kilogram, and 0.050 nanograms/gram, respectively. To identify levels of pesticide residues, products were analyzed using a variety of residue-specific methods, including methods published in the FDA Pesticide Analytical Manual. We did not independently validate the results with another laboratory or through any other mechanism. See appendix II for a complete list of analytes and their related detection levels. [The appendix tables listing each analyte and its detection limit, in parts per million, and the specific residues detected in individual products are not reproduced here. The pesticide analytes screened included Dacthal (DCPA), Diazinon (O Analog), Endosulfan I (alpha-endosulfan), Endosulfan II (beta-endosulfan), Malathion OA (Malaoxon), Quintozene (PCNB), S 421 (Octachlordipropylether), Chlorpyrifos (Dursban), gamma-HCH (Lindane), and Hexachlorobenzene (HCB). Parts per million is a measure equivalent to milligrams per kilogram or milligrams per liter.]
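Because the heavy-metal detection limits above are quoted in milligrams per kilogram while mercury's is quoted in nanograms per gram, comparing results across analytes requires a unit conversion (1 ng/g = 0.001 mg/kg). The following is a minimal sketch of that arithmetic, not part of GAO's methodology: the detection limits are the ones quoted above, while the sample measurements and all names in the code are hypothetical and for illustration only.

    # Minimal sketch (not GAO's methodology): normalize the quoted detection
    # limits to mg/kg (equivalent to ppm) and classify hypothetical
    # laboratory measurements against them.

    NG_PER_G_TO_MG_PER_KG = 1e-3  # 1 ng/g = 0.001 mg/kg; both are mass ratios

    # Method detection limits quoted in this testimony, normalized to mg/kg.
    DETECTION_LIMITS_MG_KG = {
        "arsenic": 0.075,
        "cadmium": 0.010,
        "lead": 0.005,
        "mercury": 0.050 * NG_PER_G_TO_MG_PER_KG,  # quoted as 0.050 ng/g
    }

    def classify(analyte: str, measured_mg_kg: float) -> str:
        """Return a human-readable finding for one analyte in one product."""
        limit = DETECTION_LIMITS_MG_KG[analyte]
        if measured_mg_kg < limit:
            return f"{analyte}: not detected (below {limit} mg/kg)"
        return f"{analyte}: trace detected at {measured_mg_kg} mg/kg"

    # Hypothetical measurements for a single product, in mg/kg.
    for analyte, measured in {"lead": 0.012, "mercury": 0.00002}.items():
        print(classify(analyte, measured))

A measurement below the method's detection limit is reported as "not detected" rather than as zero, which is why trace findings such as those above depend as much on instrument sensitivity as on the product itself.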
Recent studies have shown that use of herbal dietary supplements (chamomile, echinacea, garlic, ginkgo biloba, and ginseng) by the elderly within the United States has increased substantially. Sellers, such as retail stores, Web sites, and distributors, often claim these supplements help improve memory, circulation, and other bodily functions. GAO was asked to determine (1) whether sellers of herbal dietary supplements are using deceptive or questionable marketing practices and (2) whether selected herbal dietary supplements are contaminated with harmful substances. GAO investigated a nonrepresentative selection of 22 storefront and mail-order retailers of herbal dietary supplements. Posing as elderly consumers, GAO investigators asked sales staff (by phone and in person) at each retailer a series of questions regarding herbal dietary supplements. GAO also reviewed written marketing language used on approximately 30 retail Web sites. Claims were evaluated against recognized scientific research published by the National Institutes of Health (NIH) and the Food and Drug Administration (FDA). GAO also had an accredited lab test 40 unique, popular, single-ingredient herbal dietary supplements for the presence of lead, arsenic, mercury, cadmium, organochlorine pesticides, and organophosphorus pesticides. Certain dietary supplements commonly used by the elderly were deceptively or questionably marketed. FDA statutes and regulations do not permit sellers to make claims that their products can treat, prevent, or cure specific diseases. However, in several cases, written sales materials for products sold through online retailers claimed that herbal dietary supplements could treat, prevent, or cure conditions such as diabetes, cancer, or cardiovascular disease. When GAO shared these claims with FDA and the Federal Trade Commission (FTC), both agreed that the claims were improper and likely in violation of statutes and regulations. In addition, while posing as elderly customers, GAO investigators were often told by sales staff that a given supplement would prevent or cure conditions such as high cholesterol or Alzheimer's disease. To hear clips of undercover calls, see http://www.gao.gov/products/GAO-10-662T. Perhaps more dangerously, GAO investigators were given potentially harmful medical advice. For example, a seller stated it was not a problem to take ginkgo biloba with aspirin to improve memory; however, FDA warns that combining aspirin and ginkgo biloba can increase a person's risk of bleeding. In another case, a seller stated that an herbal dietary supplement could be taken instead of a medication prescribed by a doctor. GAO referred these sellers to FDA and FTC for appropriate action. GAO also found trace amounts of at least one potentially hazardous contaminant in 37 of the 40 herbal dietary supplement products tested, though none in amounts considered to pose an acute toxicity hazard. All 37 supplements tested positive for trace amounts of lead; of those, 32 also contained mercury, 28 cadmium, and 21 arsenic, and 18 contained residues from at least one pesticide. The levels of heavy metals found do not exceed any FDA or Environmental Protection Agency (EPA) regulations governing dietary supplements or their raw ingredients, and FDA and EPA officials did not express concern regarding any immediate negative health consequences from consuming these 40 supplements.
While the manufacturers GAO spoke with were concerned about finding any contaminants in their supplements, they noted that the levels identified were too low to raise any issues during their own internal product testing.
SBA’s 7(a) loan program is intended to help businesses obtain credit that they are unable to secure in the conventional lending market. Under the 7(a) program, SBA guarantees loans made by commercial lenders. Borrowers may use 7(a) loan proceeds to establish a new business, expand an existing business, or purchase an existing one, including a franchised business. Loan proceeds can be used to buy equipment, finance working capital, purchase or renovate a building, and pay for other expenses. Currently, the maximum loan amount for a 7(a) loan is $5 million. The average 7(a) loan for fiscal year 2012 was $337,730. Loan maturities vary based on the borrower’s ability to repay and the intended use of loan proceeds. To qualify for a 7(a) loan, the applicant must be deemed creditworthy, have demonstrated an inability to obtain credit elsewhere on reasonable terms from nonfederal sources, and be able to reasonably ensure repayment. Lenders are required to consider these factors for each applicant. In addition, lenders are required to report any fees paid to loan agents and other agents who assist the borrower during the loan origination process, using the “Fee Disclosure Form and Compensation Agreement” (Form 159). In general, examples of loan agents include (1) loan packagers, who are agents compensated by loan applicants or lenders to prepare loan applications; (2) referral agents, who refer loan applicants to lenders or vice versa and may be compensated by either party; and (3) lender service providers, who carry out lender functions in originating, disbursing, servicing, or liquidating SBA loans in return for compensation from lenders. SBA’s Preferred Lenders Program (PLP) is part of SBA’s effort to provide streamlined financial assistance to the small-business community, including franchisees. Under this program, SBA delegates the final credit decision, as well as most servicing and liquidation authority and responsibility, to a group of preferred lenders. SBA relies on these lenders to ensure that borrowers meet the program’s eligibility requirements. SBA considers potential preferred lenders on the basis of their performance records with SBA, and they must have demonstrated a proficiency in processing and servicing SBA-guaranteed loans. In fiscal year 2011, SBA had 3,537 active lenders in the 7(a) program, 545 of which had preferred lender status. SBA’s Office of Credit Risk Management conducts on-site reviews of certain lenders through a risk-based review process. On-site reviews are generally to be conducted on all 7(a) lenders with outstanding balances of $10 million or more on the SBA-guaranteed portions of their loan portfolios. SBA’s risk-based review process is to consider factors such as portfolio performance, SBA management and operations, credit administration practices for both performing and nonperforming loans, and compliance with SBA requirements.
According to SBA’s procedures for conducting on-site risk-based lender reviews, SBA can assess a lender as (1) acceptable, which means the lender is managing a satisfactory SBA loan program using prudent lending practices and representing limited financial risk to SBA; (2) acceptable with corrective actions required, indicating the lender may have weaknesses, but it is reasonably expected that the lender can address the issues during the normal course of operations; (3) marginally acceptable with corrective actions required, meaning the lender demonstrates serious deficiencies and an inadequate degree of understanding and management of the SBA loan program; and (4) less than acceptable with corrective actions required, which means the lender is operating an SBA loan program with serious deficiencies or represents significant financial risk to SBA. When a borrower with an SBA-guaranteed loan defaults, the lender has the option of submitting a purchase request to SBA to honor the guaranteed portion of the loan. Effective November 15, 2010, SBA defined an early defaulted loan as one in which the borrower defaulted within 18 months of initial disbursement. Prior to that date, early defaulted loans were those that defaulted within 18 months of final disbursement. Early defaulted loans may indicate potential deficiencies in the originating, closing, and servicing of loans. According to SBA’s procedures, the agency must review guaranty purchase requests for early defaulted loans with a higher degree of scrutiny than other defaulted loans. Under a Federal Trade Commission regulation (16 C.F.R. § 436.2), franchise organizations must provide potential franchisees with a Franchise Disclosure Document (FDD). Financial performance representations in the FDD, which can include average revenue figures and other earnings statements, are optional and can vary by franchise organization. Current regulations stipulate that the financial performance representation must have a reasonable basis and substantiation at the time it was made. Potential borrowers have the option to request additional information from the franchise organization regarding the financial representations made in the FDD. In addition, franchise organizations may provide the names and contact information of current and former franchisees in the FDD. Our analysis of SBA-guaranteed loans to franchisees of the franchise organization approved from January 1, 2000, to December 31, 2011, showed that SBA approved a total of about $38.4 million for 170 loans made by 54 lenders. SBA’s guaranteed portion on these loans amounted to around $28.8 million. Of the total population of 170 loans, we identified 74 defaulted loans, 55 of which (74 percent) were originated by four lenders. Three of these four lenders are preferred lenders that have delegated authority to make lending decisions on behalf of SBA. SBA made guarantee payments of around $11 million on the defaulted loans, including about $8.5 million in guarantee payments on the 55 defaulted loans from these four lenders. Figure 1 illustrates the dollar value of SBA guarantee payments for loans from the four lenders. In addition, figure 1 shows that loans originating with Lender A and Lender B accounted for about 64 percent of the $11 million in guarantee payments disbursed by SBA for loans to the franchisees of the franchise organization. Of the 88 loans we reviewed from the four lenders, 55 (about 63 percent) defaulted. In comparison, 19 of the 82 loans (23 percent) that originated at the other 50 lenders to the franchisees defaulted.
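The default-rate comparison above is simple arithmetic, and it can be restated in a short script. The counts below are the report's figures; the code itself is offered only as an illustration.

    # Default-rate arithmetic using the loan counts reported above.
    four_lender_loans, four_lender_defaults = 88, 55
    other_lender_loans, other_lender_defaults = 82, 19

    print(f"four selected lenders: {four_lender_defaults / four_lender_loans:.1%} defaulted")  # 62.5%, about 63 percent
    print(f"other 50 lenders: {other_lender_defaults / other_lender_loans:.1%} defaulted")     # 23.2%, about 23 percent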
As shown in figure 2, two lenders—Lender A and Lender B—represented about 82 percent of the defaulted SBA-guaranteed loans to franchisees from the four lenders (45 of the 55 defaulted loans), and over half of the total defaulted SBA-guaranteed loans to franchisees from all the lenders (45 of the 74 defaulted loans). SBA oversees preferred lenders, in part, through its risk-based review process. SBA conducted such reviews on these four lenders, and in 16 of the 17 reviews conducted found that the lenders’ management of their SBA loan programs was either acceptable or acceptable with corrective actions required. One of the five reviews for one lender, Lender A, determined the lender’s management was marginally acceptable with corrective actions required, including improvements to the lender’s policies, procedures, and controls for demonstrating certain underwriting decisions. In September 2012, SBA OIG issued a report noting that during SBA’s on-site reviews, the agency did not always recognize the significance of lender weaknesses for 8 of the 16 sampled lenders, and it did not require lenders to correct performance problems that could have exposed SBA to unacceptable levels of financial risk. SBA OIG made six recommendations in the report, including proposals that SBA develop and implement a process for assessing lender weaknesses in terms of their risk to the agency, and that SBA tailor the scope of on-site reviews of lenders to identify and address the weaknesses underlying lender ratings. SBA agreed with the recommendations, and the report noted that the agency has taken steps to address concerns in the lender oversight process. For additional details on SBA’s risk-based review of the four lenders, see appendix II. In addition, as part of our investigative work, we interviewed the owners of 22 franchisees of the franchise organization to obtain background information on the SBA loan process and efforts to start their businesses. One franchisee we interviewed obtained an SBA-guaranteed loan that defaulted within 9 months of final disbursement, making it an early defaulted loan. The franchisee highlighted challenges related to insufficient working capital and unexpected expenses. The franchisee ultimately filed for bankruptcy in March 2010. In addition, franchisees we interviewed noted difficulties meeting anticipated revenue estimates, as well as limited access to information that would aid in business planning. While some of the franchisees we interviewed who had not defaulted on their loans described challenges similar to those faced by franchisees with defaulted loans, one franchisee with a nondefaulted loan told us he maintained excess capital in order to withstand slow periods, and he highlighted his previous business experience. The experiences described in our interviews with the 22 franchisees are not generalizable to the broader population of franchisees, other franchise organizations, or 7(a) borrowers in general, but they provide additional background and highlight some of the difficulties experienced by these franchisees. We were unable to conclusively determine whether the loan agent referred to us for investigation intentionally provided exaggerated revenue projections to franchisees to help them qualify for SBA loans; however, we found that first-year projected revenues on loan applications involving the agent or her employer were, on average, more than twice the amount of actual revenue in the first year of operations for 19 of the 24 franchisees we reviewed.
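One way to restate the projection-to-actual comparison described above is as a ratio per loan. The sketch below assumes paired projected and actual first-year revenue figures; the sample values are invented for illustration and are not drawn from GAO's loan files.

    # Projection-to-actual revenue ratios; the sample figures are hypothetical.
    def projection_ratios(loans):
        """Return projected/actual first-year revenue for each loan."""
        return [projected / actual for projected, actual in loans]

    sample_loans = [  # (projected, actual) first-year revenue, in dollars
        (450_000, 160_000),
        (300_000, 290_000),
        (520_000, 60_000),
    ]
    ratios = projection_ratios(sample_loans)
    print(f"average ratio: {sum(ratios) / len(ratios):.2f}")
    print(f"range: {min(ratios):.2f} to {max(ratios):.2f}")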
Our review of the allegation included obtaining information on SBA’s efforts to track and monitor loan agent involvement during the loan origination process. SBA has taken some steps to enhance oversight of loan agents and to improve the completeness and accuracy of data in its franchise loan portfolio. As part of our investigative work, we examined an allegation that a specific loan agent provided exaggerated revenue projections to some franchisees of the franchise organization in our review to assist them in qualifying for SBA-guaranteed loans. Potential franchisees and lenders may choose to employ loan agents to assist in the preparation of SBA loan applications. In an interview in February 2012, the loan agent told us she obtained the revenue projections from her employer and former clients, one of which she identified. The loan agent told us she provided these revenue projections to clients. The employer and former client she identified denied providing the revenue projections to the loan agent. SBA’s Office of Credit Risk Management debarred both the loan agent and her employer, and they are ineligible to work with the federal government for a period of 3 years beginning in January 2012. SBA debarred the loan agent on the basis of evidence supporting other grounds, including charging impermissible contingency fees, encouraging 7(a) loan applicants to violate SBA requirements by inflating working capital requests, and directing prospective borrowers not to disclose fees. In addition, the loan agent’s employer was debarred for impermissible contingency fees and encouraging false statements in connection with the 7(a) program. On the basis of interviews with the loan agent, her employer, eight former franchisees, and a bank officer for the loans, and our associated audit work, we could not conclusively determine whether the loan agent intentionally provided misleading first-year revenue projections to SBA loan applicants of the franchise organization. To better understand the role of loan agents and the preparation of SBA loan applications, we interviewed three loan agents who were not the subject of the allegation we received. These three loan agents stated that they did not provide clients with revenue projections, and one of them said it would be improper to do so. The Federal Trade Commission’s Bureau of Consumer Protection (BCP) guide, Buying a Franchise: A Consumer Guide, encourages franchisees to conduct due diligence on any earnings representations, including potential earnings claims that the loan agent or other individuals may provide. During our review of the 88 loan files, we identified 6 loan agents, including the subject of the allegation, who assisted franchisees in preparing SBA loan applications. For SBA loans involving these loan agents, to the extent possible, we assessed the accuracy of franchisees’ first-year revenue projections on their SBA loan applications by comparing those figures to their actual first-year revenues using the franchise organization’s revenue data. First-year revenue projections on SBA applications that involved the loan agent we reviewed as part of our investigation were, on average, higher than the franchisees’ actual first-year revenue. The magnitude of this difference was also higher than what we found for other loan agents; however, the number of loans involving loan agents with available data to make this calculation was limited, and the results are not statistically significant or generalizable to other SBA loan applications.
Of the 88 SBA-guaranteed loans from the four lenders, we identified 24 franchisees with loans that indicated the loan agent referred to in the allegation, or her employer, assisted the franchisee in preparing the SBA loan application. Revenue projections from the loan application and actual revenue data from the franchise organization were available for 19 of these 24 franchisees, all of whom were owners of start-up franchises. On average, for these 19 franchisees, first-year revenue projections on their SBA loan applications were 2.7 times the actual revenues the franchisees made in their first year of operations. The first-year revenue projections for these 19 loans ranged from 1.02 times to 8.6 times the actual revenues the franchisee made in the first year of operations. In the 88 loan files we reviewed for the four lenders, we found 10 loans that involved a specific loan agent other than the one who was the subject of the allegation. We found first-year revenue projections in the loan files for 5 of these 10 loans. For these 5 loans, we compared the first-year revenue projections from the loan files to the actual revenue of the business during the first year of operations. The revenue projections for the five loans were, on average, 1.5 times the actual revenues the franchisees made in their first year of operations. The first-year revenue projections on the SBA applications for these five loans ranged from 1.03 times to 2.8 times the franchisees’ actual first-year revenues. In addition, federal regulations require franchise organizations to provide potential franchisees with certain information in their FDD—the disclosure document intended to aid individuals who are considering opening or purchasing a franchise. While the franchise organization can choose to include earnings statements in the FDD, federal regulations do not require franchise organizations to provide actual first-year average revenues for start-up businesses in the disclosure document. Franchisees should include first-year revenue estimates in an SBA loan application; however, this information is not necessarily available to potential franchisees in the FDD and they may have to conduct due diligence to identify this information from other sources, if available. For example, some franchisees we interviewed said they relied solely on information provided by the loan agent for developing revenue estimates. Other franchisees we spoke to highlighted different sources of financial information about the franchise organization, including existing or previous franchisees and the franchise organization’s FDD when developing revenue estimates. Several franchisees told us that they use FDDs when developing revenue estimates, but we found that the reported average revenue in the franchise organization’s FDD tended to be higher than our calculated first-year average revenues. We reviewed the FDDs of the select franchise organization in order to determine the average revenue it reported to potential franchisees. The franchise organization’s average revenue in its FDD accounted for all franchisees in operation the full calendar year before issuance of the FDD, not just first-year average revenue. We used the franchise organization’s revenue data to calculate, to the extent possible, first-year average revenues for only its start-up businesses. We then compared our first-year average revenue calculation to the average revenue figures reported in the franchise organization’s FDD over a 10-year period.
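That year-by-year comparison can be sketched in a few lines. The revenue figures below are placeholders rather than the franchise organization's data, and the record layout is an assumption made for illustration.

    # Year-by-year FDD average vs. calculated first-year average (placeholder data).
    from statistics import median

    by_year = {  # year -> (FDD-reported average revenue, first-year average for start-ups)
        2000: (410_000, 280_000),
        2001: (430_000, 300_000),
        2002: (455_000, 320_000),
    }
    ratios = {year: fdd / first_year for year, (fdd, first_year) in by_year.items()}
    for year, ratio in sorted(ratios.items()):
        print(f"{year}: FDD average is {ratio:.2f} times the first-year average")
    print(f"median ratio across years: {median(ratios.values()):.2f}")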
For 9 of the 10 years we reviewed, the average revenue in the franchise organization’s FDD was higher than our average revenue calculations, after we excluded from our calculation all businesses the franchise organization told us were not start-up franchises. In addition, for the 10-year period the average revenue in the franchise organization’s FDD had a median value that was 1.43 times our average revenue calculation. In addition, we calculated the average revenue figure for the franchise organization, including the 63 businesses the franchise organization told us were not start-up businesses. The result of this calculation did not differ substantially from the franchise organization’s average revenues in the FDDs. See appendix III for additional details about our analysis. SBA’s website offers some information about the challenges of franchising, and it directs potential franchisees to the website of the Federal Trade Commission’s BCP for additional guidance. Likewise, the BCP’s Buying a Franchise: A Consumer Guide warns potential franchisees about unauthorized or misleading earning representations, highlighting the importance of franchisees conducting due diligence when applying for a franchise loan. According to SBA officials, SBA has limited interaction with franchisees because it delegates the application process to the preferred lenders. However, officials said individuals can visit one of SBA’s district offices, which provide resources for starting a business. Further, SBA has programs that are intended to help businesses start and grow by providing training, counseling, and access to resources, such as Small Business Development Centers, which provide services through professional business advisors. We identified other resources available to potential franchisees. For instance, a third party currently submits Freedom of Information Act requests for SBA franchise loan data, which it then uses to conduct franchise performance analysis. The analysis, which includes default rates and charge-off rates listed by franchise organization, is available to the public for a fee. To enhance oversight of loan agents, in October 2010, SBA announced it would begin requiring lenders to submit reports on fees paid to loan agents and other agents who assist borrowers during the loan origination process. SBA requires preferred lenders to submit a form, called the Fee Disclosure Form and Compensation Agreement (Form 159), which SBA officials said can be used to document information about participants in the loan origination process, including whether a borrower used a loan agent, and if so, the loan agent’s name, company, and compensation. Lenders submit Form 159 to SBA’s fiscal and transfer agent (FTA), who has been recording loan agent information on behalf of SBA since December 2010. Further, in March 2011, SBA published a notice with guidance to lenders on how to submit the form to the agency’s FTA, and the notice highlighted SBA’s efforts to create a database that would include all information on the form. SBA’s FTA maintains the database that includes information from the form. In addition, during our review, officials said SBA is adapting the form to obtain more-complete information about the role and activities of individuals who assist potential borrowers during the loan origination process, including loan agents. SBA plans to update the form in fiscal year 2014. Officials further stated that SBA has taken, and is considering, other steps to enhance oversight of loan agents. 
For example, SBA has added a provision in its standard operating procedures that allows the agency to fully deny liability in the event that the lender makes a loan on the basis of a loan package prepared by a debarred loan agent. In addition, the agency publishes a list of debarred individuals, including loan agents, on its website. Such oversight measures are particularly relevant for preferred lenders, since they have delegated authority over the loan origination process. During the course of our review, we identified discrepancies in SBA’s franchise loan data that highlight incomplete or inaccurate data in certain fields SBA uses for risk-based oversight of its loan portfolio, which SBA has initiated efforts to address. Using data from SBA’s Loan Accounting System (LAS), in our review of the 88 loan files for the four lenders of the franchise organization with the highest loan volume and default rates, we found discrepancies between the loan files and LAS. These discrepancies generally represent two facets of data reliability—completeness and accuracy. For example, we found differences with respect to dates of defaults, default status, and whether the franchise was a start-up or existing business. Table 1 provides an overview of the discrepancies we identified. SBA officials said the agency takes steps to ensure the reliability of its loan data and has initiated efforts intended to improve the completeness and accuracy of some fields in LAS related to its franchise loan portfolio in general. Preferred lenders enter select data into LAS, and they certify that the information they enter into the system is accurate and complete, officials said. In addition, officials noted SBA assesses the accuracy of certain data fields when the lender submits a monthly loan status report or loan files to request a guarantee payment, and an external auditor reviews a sample of loans in LAS to validate that the financial data for the loans are accurate. Officials also said SBA is working with a third-party vendor to improve the consistency of franchise information in its database by replacing SBA’s current franchise codes with publicly available identifiers used in the franchise industry, and to verify the accuracy of franchise information in LAS that lenders previously entered. As of July 2013, officials said the franchise identifiers were ready for use, and the agency planned to notify lenders about them. In addition, in August 2013, officials said they estimate the franchise identifiers will be introduced at the beginning of fiscal year 2014. SBA officials noted efforts to improve historical franchise data would be contingent on funding. Because SBA’s franchise data-improvement efforts are in the early stages, it is too soon to assess whether SBA’s actions will address the issues with data reliability we identified. We provided a draft of this report to SBA for its review and comment. SBA provided technical comments, which were incorporated, as appropriate. We also provided relevant sections of a draft of this report to the four lenders who made loans to franchisees of the franchise organization.
We received technical comments from three of the lenders and incorporated them, as appropriate; one lender did not provide comments. In its comments, one of the lenders asked to be dropped from the report because of what it described as its relatively small number of loans and defaults compared with the other lenders. However, we included information on the SBA-guaranteed loans this lender made to franchisees to provide more context and perspective. While this lender made fewer loans and had fewer defaults on loans to franchisees than the other three lenders, it met our criteria of lenders with the highest number of loans and defaults. The only other lender with a comparable number of loans had one defaulted loan. In addition, we provided a draft of this report to the franchise organization for its review and comment. The representatives of the franchise organization provided comments on a draft of this report, which we have reprinted in appendix IV. In their comments, representatives of the franchise organization stated that our comparison of average revenues in the FDD and our first-year average revenue calculations is potentially misleading and inaccurate because the two sets of data being compared are not analogous. Specifically, representatives of the franchise organization stated that we are comparing two different sets of data and that we point out a significant difference in revenue without explaining the differences in such figures. The representatives requested that we more clearly state the differences between revenue information contained in the FDD and our calculations, which we did. However, we disagree with the representatives’ comments that our comparison is potentially misleading and inaccurate. The report comments on the use of the FDD for projecting first-year revenue, not on the accuracy of the average revenue reported in the franchise organization’s FDDs. Specifically, the report states that the average revenue in the FDD accounted for all franchisees in operation the full calendar year before issuance of the FDD. However, we added language to clarify that our calculation is of first-year average revenue, based on additional revenue data we obtained from the franchise organization. Our comparison highlights the difficulties of using the FDD as a basis for projecting first-year revenues, since the revenue reported in the franchise organization’s FDD is derived from businesses in operation at least the full calendar year prior to issuance of the FDD. As noted in our report, while franchise organizations can choose to include earnings statements in the FDD, federal regulations do not require them to provide first-year average revenues for start-up businesses in the disclosure document. However, franchisees are required to include first-year revenue estimates in SBA loan applications, and this information is not necessarily available to potential franchisees in the FDD; thus, they may have to conduct additional due diligence to identify this information from other sources, if available. As noted in our report, several franchisees told us that they use FDDs when developing revenue estimates, and we found that the reported average revenue in the franchise organization’s FDD tended to be higher than our first-year average revenue calculations.
The purpose of our analysis was not to assess the accuracy of the franchise organization’s reported revenues in the FDD, as the representatives suggest in their comments, but to demonstrate how the FDD figures were, on average, higher than our first-year average revenue calculations. In addition, as noted in the report, our calculation of average revenues including existing businesses did not vary substantially from the franchise organization’s figures. Representatives of the franchise organization requested that we state more clearly in the text of the report that we did not identify a substantial difference between our average revenue calculations and the franchise organization’s average revenue in the FDD when including existing businesses. We modified our report to more clearly state this information. However, by excluding existing businesses in our calculation, to the extent possible, we highlighted how average revenues disclosed in the franchise organization’s FDDs tended to be higher than first-year average revenues, which we believe is material to our discussion about the importance of franchisees’ conducting due diligence when applying for a 7(a) loan. The franchise organization agreed that potential franchisees must be careful in using information in the FDD for estimating first-year revenue. In addition, the representatives of the franchise organization noted additional provisions in the FDD that address how a prospective franchisee can gather further information. We modified the report to include language to address this issue. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the acting Administrator of the Small Business Administration, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact Stephen M. Lord at (202) 512-6722 or lords@gao.gov or Wayne A. McElrath at (202) 512-6722 or mcelrathw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This report describes (1) the magnitude of Small Business Administration (SBA)-guaranteed loans to franchisees of the franchise organization, and (2) the results of our investigation into the allegation, and aspects of SBA’s oversight of its 7(a) loan program with respect to loans made to franchisees of the franchise organization. To conduct our audit work, we examined data for all SBA-guaranteed loans to franchisees of the select franchise organization approved from January 1, 2000, to December 31, 2011, in order to assess loan volume, default rates, and the amount of SBA’s guarantee payments made for 170 loans to franchisees of the franchise organization from 54 lenders. We selected this date range in order to obtain a broad understanding of SBA-guaranteed loans to the franchisees during different economic conditions and from multiple lenders. The original dataset we received from SBA included 184 loans to franchisees; however, 16 of these indicated that the lender canceled the guarantee on the loan and 2 were outside the scope of our review.
We therefore excluded these loans from our analysis of SBA’s loan data for the franchise organization. From these data, we selected four lenders with the highest loan volume and default rates. Three of these four lenders are preferred lenders that have delegated authority to make lending decisions on behalf of SBA. We also reviewed 88 SBA loan packages for these four lenders in order to assess characteristics of individual loans, such as the extent to which the franchisees’ projected first-year revenues differed from actual first-year revenues, and to assess the accuracy of certain data fields in the SBA franchise loan data. These loan packages included all loan packages for these lenders during this time period. We obtained copies of SBA’s risk-based review reports for the four lenders. We also searched the website PACER.gov to determine if any of the franchisees that received 1 of the 88 SBA loans filed for bankruptcy. We obtained franchise loan information from the four lenders when available. To assess the reliability of the SBA franchise loan data for the franchisees of the franchise organization, we (1) interviewed agency officials knowledgeable about the data, (2) performed electronic testing for completeness and accuracy on select data fields, and (3) traced fields in SBA’s loan database to primary source files when possible. We found discrepancies between data in SBA’s loan database and information in the loan files we reviewed. We discussed reasons for the differences between the data sources, as well as the agency’s processes and policies for managing the quality of franchise loan data, with SBA officials. After discussions with SBA, we determined the SBA loan data were sufficiently reliable for reviewing loans to the franchise organization. Moreover, we analyzed revenue data that we obtained from the franchise organization to calculate actual first-year revenues of franchisees, when possible. We compared these calculations with the projected first-year revenues in SBA loan applications for 19 franchisees who used the loan agent, or her employer, who was the subject of the allegation. We also used the franchise organization’s revenue data to calculate average first-year revenues for a broader population of franchisees, and compared them to average revenues reported in the franchise organization’s disclosure documents. We noted several data limitations with the franchise organization’s revenue data. The scope of our review included businesses with a full 12 months of revenue data that began from years 2000 to 2011, since our objective was to calculate an entire year of business revenue for businesses that opened during that time period. The original revenue data provided by the franchise organization included 746 businesses. After excluding 59 businesses with fewer than 12 months of revenue data and 149 businesses that may have opened prior to January 2000 (16 businesses affected by both of these issues), the total population of businesses in the revenue data was reduced to 554. In addition, we identified 115 businesses with revenue data that highlighted potential reliability issues, including missing, duplicate, and nonsequential revenue data. For purposes of data reliability, we excluded these businesses from our calculations, and conducted analysis on the remaining 439 businesses. To the extent possible, we calculated an average revenue figure that reflected first-year revenue of start-up franchisees.
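The exclusions just described amount to a simple filtering pass over the revenue data. The sketch below assumes a per-business record layout (the field names are invented for illustration); the counts in the comments are the report's figures, and the further exclusion of existing businesses is described in the next paragraph.

    # Exclusion rules applied to the franchise organization's revenue data
    # (746 businesses -> 554 -> 439); field names are assumptions.
    def in_analysis_set(business):
        if business["months_of_revenue"] < 12:   # 59 excluded: under 12 months of data
            return False
        if business["opened_before_2000"]:       # 149 excluded: may predate January 2000
            return False                         # (16 businesses hit both rules above)
        if business["reliability_issue"]:        # 115 excluded: missing, duplicate, or
            return False                         # nonsequential revenue data
        return True

    sample = [
        {"months_of_revenue": 12, "opened_before_2000": False, "reliability_issue": False},
        {"months_of_revenue": 8, "opened_before_2000": False, "reliability_issue": False},
    ]
    print(sum(in_analysis_set(b) for b in sample), "of", len(sample), "sample records kept")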
Accordingly, for part of our analysis, we excluded 63 businesses from the revenue dataset that the franchise organization identified as existing businesses, for a total population of 376 businesses. The franchise organization was not able to confirm that it identified all existing businesses, so our average revenue calculations may include both start-up franchisees and existing franchisees. Nonetheless, we believe the average revenue figures we calculated provide a reasonable basis of comparison to projected revenues for select start-up franchisees, as well as to average revenues in the franchise organization’s disclosure documents. We discussed this methodology with representatives of the franchise organization, who confirmed our approach was reasonable. To further assess the reliability of the revenue data, we interviewed representatives of the franchise organization and performed electronic testing on the data provided. We determined that the franchise organization’s revenue data were sufficiently reliable for the purposes of this report. We also interviewed SBA officials about their activities related to oversight of the four lenders, efforts to track and monitor loan agents, and the assistance provided to potential franchisees during the loan application process. In addition, we examined SBA’s policies and procedures for overseeing lenders in the 7(a) program. We also reviewed reports by SBA’s Office of Inspector General (OIG) and other related documents. To conduct our investigative work, we reviewed an allegation that a loan agent intentionally exaggerated first-year revenue projections on SBA loan applications in order to ensure that franchisees would qualify for SBA 7(a) loans. We interviewed the owner of the franchise organization, the loan agent who was the subject of the allegation, her employer, eight former franchisees who were referred to us during the course of the investigation, and a bank officer who reviewed loans related to the allegation. To better understand the franchisees’ experience with the 7(a) loan program, we interviewed 14 additional franchisees of the select franchise organization who received 19 SBA-guaranteed loans from one of the four lenders with the highest loan volume and default rates. These franchisees were selected on the basis of a range of factors, including whether they used a loan agent, geographic dispersion, and performance status of the loan. We also interviewed three additional loan agents. In addition, on the basis of the 88 loan packages we reviewed for the four lenders, we identified 24 franchisees that used the loan agent connected to the allegation, or her employer, 19 of whom had data available to compare the first-year revenue projections on their SBA loan applications with the franchisees’ actual first-year revenue. We cannot generalize our findings from these interviews to other franchisees, loan agents, franchise organizations, or borrowers in the 7(a) program. Our intent was not to identify potential fraud or abuse for all franchise loans of the franchise organization or the 7(a) loan program as a whole. We conducted this performance audit from March 2012 to September 2013, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence that provides a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with the standards prescribed by the Council of the Inspectors General on Integrity and Efficiency. As part of its oversight efforts, the Small Business Administration (SBA) generally conducts reviews of all 7(a) lenders with SBA-guaranteed loan portfolios of $10 million or more on a 12- to 24-month cycle. SBA can conduct additional reviews of the lenders if it identifies specific performance concerns. Officials highlighted additional factors that could determine whether a lender is subject to a risk-based review, including the lenders’ risk ratings, industry concentration, and the results of previous reviews. SBA conducted risk-based reviews of the four lenders we selected for further review. Table 2 summarizes the risk-based reviews we received from SBA for the four lenders. SBA has the authority to suspend, revoke, deny renewal of, or issue a shortened renewal period for the delegated authority of preferred lenders. According to officials, SBA suspended or revoked the delegated authority for two preferred lenders in the 7(a) program from 2000 through 2012. Rather than suspending or revoking delegated authority, officials said SBA is more likely to deny a renewal of delegated authority or grant a shortened renewal period, since the renewal period can be from 6 months to 2 years. From fiscal year 2009 through 2012, SBA denied delegated authority to 367 lenders. In addition, approximately 1,058 lenders received at least one shortened renewal of 6 to 12 months. SBA can also place lenders on a “watch list,” which is one of SBA’s monitoring tools to identify high-risk lenders that warrant elevated oversight attention. Officials told us that high-risk lenders on the watch list include institutions that have received a review assessment of less than acceptable with corrective actions required and marginally acceptable with corrective actions required. According to SBA, it is developing a new lender oversight framework to conduct risk-based reviews. This new framework of risk-based reviews is intended to measure the level of risk of each lender participating in the 7(a) program. As part of this effort, SBA officials said that they plan to conduct a pilot project to review 20 to 30 lenders, which is to include evaluations of issues related to loan agents and franchisees. As of August 2013, SBA had completed 18 pilot reviews. In addition, SBA officials said the agency will conduct various types of risk-based reviews based on issues unique to a particular lender. The franchise organization we reviewed included average revenue in the FDD that accounted for all franchisees, including both start-up and existing franchisees in operation the full calendar year before issuance of the FDDs. We obtained and analyzed the franchise organization’s revenue data to calculate first-year average revenues, to the extent possible, and compared them to average revenues reported in the franchise organization’s FDDs. We found the average revenues reported in the franchise organization’s FDDs from 2000 to 2009 were higher than our average revenue calculations, with the exception of 1 year. Specifically, for the 10-year period of FDDs we reviewed, the average revenue in the franchise organization’s FDD had a median value that was 1.43 times our average revenue calculation.
The average revenue in the FDD for 1 of the 10 years was lower than our average calculation (the FDD average was about 90 percent of our calculation for that year). However, for the other 9 years, the FDD was at least 1.35 times and at most 1.74 times the average revenue figures we calculated. As discussed, current federal regulations stipulate that franchise organizations have discretion in what they report in the section of the FDD that is devoted to earnings statements, provided there is a reasonable basis and written substantiation for the information. All of the FDDs we reviewed for the franchise organization cautioned potential franchisees that they may not achieve the average revenue reported in the FDD and that many factors influence the revenue of the franchise. These FDDs also note that the potential franchisees accept the risk of not achieving the stated average revenue, and that the franchise organization has neither audited nor in any other manner substantiated the truthfulness, accuracy, or completeness of any information supplied by its franchisees. In addition to the contacts named above, Heather Dunahoo, Assistant Director; Rick Hillman; Maria McMullen; Linda Miller; Gloria Proa; Gavin Ugale; Elizabeth Wood; and Heneng Yu made key contributions to this report.
From fiscal years 2003 to 2012, SBA guaranteed franchise loans under its 7(a) program totaling around $10.6 billion. SBA made guarantee payments on approximately 28 percent of these franchise loans, representing about $1.5 billion, according to SBA. GAO was asked to examine SBA-guaranteed loans to franchisees, and to investigate an allegation that a loan agent provided exaggerated revenue projections to franchisees of the same franchise organization to help them qualify for SBA loans. This report describes (1) the magnitude of SBA-guaranteed loans to franchisees of the franchise organization, and (2) the results of GAO's investigation into the allegation, and aspects of SBA's oversight of its 7(a) loan program with respect to loans made to franchisees of the franchise organization. GAO examined SBA's loan data and files of loans made to franchisees. GAO used the franchise organization's revenue data to compare to revenue projections in SBA applications, as well as revenue reported in the organization's disclosure documents. As part of the investigative work, GAO interviewed the franchisor, loan agents, and select borrowers to better understand the franchising experience. GAO is not making any recommendations. In their comments, representatives of the franchise organization state that GAO's comparison of average revenue in the disclosure document and the first-year average revenue calculation is potentially misleading and inaccurate. GAO disagrees, as discussed in more detail in this report. Analysis of guaranteed loans to franchisees of a select franchise organization reviewed by GAO, approved from January 1, 2000, to December 31, 2011, showed the Small Business Administration (SBA) approved a total of about $38.4 million for 170 loans made by 54 lenders. SBA's guaranteed portion on these loans was approximately $28.8 million. Of the total population of 170 loans, 74 loans defaulted, 55 of which (74 percent) originated from four lenders that had the highest loan volume and default rates on loans to the franchisees. SBA made guarantee payments of around $11 million on the defaulted loans to franchisees, including about $8.5 million in guarantee payments on the 55 defaulted loans from these four lenders. Of the 88 loans reviewed from the four lenders, 55 (63 percent) defaulted. In comparison, 19 of the 82 loans (23 percent) that originated at the other 50 lenders to the franchisees defaulted. As part of GAO's investigative work, GAO interviewed the owners of 22 franchisees of the franchise organization in GAO's review, of which 16 defaulted on their loans and 10 filed for bankruptcy protection. Interviewed franchisees noted difficulties meeting anticipated revenue estimates and limited access to information that would aid in business planning. GAO was unable to conclusively determine whether the loan agent referred to GAO for investigation intentionally provided exaggerated revenue projections to franchisees to help them qualify for SBA loans, and SBA has taken initial steps to enhance program oversight. The loan agent stated that she obtained the revenue projections from her employer and former clients, one of which she identified. She then provided these revenue projections to clients. The employer and former client she identified denied providing the revenue projections to the loan agent.
SBA's Office of Credit Risk Management debarred the loan agent and her employer for encouraging false statements, among other things, making them ineligible to work with the federal government for a period of 3 years beginning in January 2012. According to GAO's analysis, the first-year projected revenues on loan applications involving the loan agent or her employer were, on average, more than twice the amount of actual first-year revenue for 19 of the 24 franchisees reviewed. Potential franchisees should include first-year revenue estimates in their SBA loan applications. However, this information is not necessarily available to potential franchisees in the franchise organization's disclosure document, which provides information about the organization's financial performance representations and franchisees' estimated initial investment, among other things. Further, federal regulations do not require franchise organizations to provide actual first-year average revenues for start-up businesses in their disclosure document. Thus, potential franchisees may have to conduct due diligence to identify this information from other sources, if available. GAO also identified discrepancies and other issues in SBA's franchise loan data with respect to fields used for risk-based oversight of its loan portfolio, such as default status, number of loans, and loan agent information. SBA has taken, or is considering, steps to address these issues and enhance oversight of loan agents. For instance, SBA is working with a third-party vendor to replace SBA's current franchise codes with publicly available identifiers used in the franchise industry. At the time of GAO's review, it was too early in the process to assess the effectiveness of these actions.
Nonpoint source pollution can result when water, such as precipitation, runs over land surfaces and into bodies of water. Significant nonpoint sources of pollution can include paved urban areas, agricultural practices, forestry, and mining. However, in urban and suburban areas, this runoff generally enters a sewer system that can be regulated as a point source of water pollution. For example, precipitation from rain or snowmelt may run into a municipal separate storm sewer system (MS4 or storm sewer) that eventually discharges into a body of water. The precipitation may also run into a combined sewer system, which carries a combination of storm water runoff, industrial waste, and raw sewage in a single pipe to a sewage treatment facility for discharge after treatment. Lastly, the precipitation may run off of land or paved surfaces directly into nearby receiving waters. EPA’s Office of Wastewater Management, which is within the Office of Water, implements the National Pollutant Discharge Elimination System (NPDES) Program. The program was created in 1972 with the passage of the Clean Water Act. Created to control water pollution from point sources—those sources, such as a factory or wastewater treatment plant, that contribute pollutants directly into a body of water from a pipe or other conveyance—the NPDES Program did not specifically address storm water discharges. In 1987, the Congress amended the Clean Water Act with the Water Quality Act, which directed EPA to also control storm water discharges that enter MS4s—essentially requiring EPA to treat such storm water as a point source. MS4s are defined as those sewers that collect and convey storm water; are owned or operated by the federal, state, or local government; and are not part of a publicly owned treatment (sewage) facility. To regulate urban storm water runoff, EPA published regulations in 1990 that established the NPDES Storm Water Program and described permit application requirements. According to EPA, the program’s objective, in part, is to preserve, protect, and improve water quality by, among other things, controlling the volume of runoff from paved surfaces and by reducing the level of runoff pollutants to the maximum extent practicable using best management practices (BMP). The 1987 act also authorized EPA to implement a program that provides federal funds and technical assistance to states to develop their own nonpoint source pollution management programs. States can use the federal funds they receive for nonpoint source programs to address nonpoint sources of pollution as well as urban runoff. Currently, EPA manages NPDES Storm Water programs in six states (Alaska, Arizona, Idaho, Massachusetts, New Hampshire, and New Mexico) and has delegated authority to the remaining 44 states to manage these programs. The storm water program is being implemented in two phases. Local governments meeting the following criteria must comply with EPA’s storm water program regulations. First, Phase I of the program requires that municipalities with a population of 100,000 or more obtain a permit for their MS4 system; second, the program requires that entities obtain a permit if they discharge storm water from sites with industrial activities, including construction activities that disturb 5 acres or more of land. In addition, NPDES permitting authorities may also bring other municipalities and industrial entities into the program if they deem it necessary. 
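Read schematically, the Phase I applicability criteria above reduce to a couple of threshold tests. The function below is a simplified illustration only, not a restatement of the regulatory text, and its name and inputs are assumptions.

    # Simplified Phase I applicability test based on the criteria described above.
    def phase1_permit_required(population=0, industrial_discharge=False,
                               construction_acres_disturbed=0.0):
        # Municipalities with a population of 100,000 or more need an MS4 permit.
        if population >= 100_000:
            return True
        # Sites discharging storm water from industrial activities, including
        # construction disturbing 5 acres or more of land, need a permit.
        if industrial_discharge or construction_acres_disturbed >= 5:
            return True
        # Permitting authorities may also designate additional municipalities
        # and industrial entities case by case.
        return False

    print(phase1_permit_required(population=150_000))              # True
    print(phase1_permit_required(construction_acres_disturbed=6))  # True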
Municipalities that meet these conditions must submit a permit application to EPA or the governing regulatory state agency. In 1990, the regulations specifically identified 220 municipalities throughout the United States that were required to apply for a Phase I permit. According to EPA, as of April 2001, about 256 Phase I MS4 permits had been issued and about 17 more still needed to be issued. Because some permits cover more than one municipality, these permits cover about 1,000 medium and large municipalities nationwide. The final rule for Phase II of the program was issued in December 1999. Phase II extends Phase I efforts by requiring that a storm water discharge permit be obtained by (1) operators of all MS4s not already covered by Phase I of the program in urbanized areas and (2) construction sites that disturb areas equal to or greater than 1 acre and less than 5 acres of land. As with Phase I of the program, permitting authorities may require additional small MS4s and construction sites to obtain a permit if they are a significant contributor of pollutants. Currently, EPA anticipates that about 5,000 municipalities may be subject to permitting requirements under Phase II of the storm water program. These municipalities are required to obtain permits no later than March 10, 2003. EPA also regulates combined sewer overflows (CSOs) that can be caused by urban storm water runoff. Combined sewer systems, in which storm water enters pipes already carrying sewage, may overflow when rain or snowmelt entering the system exceeds the system’s flow capacity. In the CSO that results, the mixture of untreated sewage and runoff bypasses the water treatment facility and is diverted directly into receiving waters. (See fig. 1 for an illustration of combined and separate sewer systems.) These combined systems generally serve the older parts of approximately 900 cities in the United States. Pipes carrying sewage and storm water separately generally serve newer parts of cities. EPA’s 1994 CSO policy requires communities with combined sewer systems to take immediate and long-term actions to address CSO problems. The policy contains provisions for developing appropriate, site-specific NPDES permit requirements for all combined sewer systems that overflow because of wet-weather events. The Wet Weather Water Quality Act of 2000 requires that any permit, order, or decree issued for a CSO conform to the 1994 policy. Under this act, EPA is also required to submit a report to the Congress by September 2001 on the status of the program. The Total Maximum Daily Load (TMDL) Program, established under the Clean Water Act, is intended to address water bodies that do not meet water quality standards because of pollutant loadings from point and nonpoint sources. Currently, it is unclear how and when this program will affect EPA’s and states’ issuance of storm water permits. A TMDL is a calculation of the maximum amount of a pollutant that a body of water can receive and still meet the water quality standard set by the state. Under EPA’s regulations, the state is to allocate this “pollutant load” among the point and nonpoint pollutant sources that flow into the water body and then take steps to ensure that no source exceeds its assigned load. In 1996, EPA issued a policy that outlined an interim approach to including water quality standards in storm water permits.
The policy promoted the use of BMPs in the first 5-year term permits, followed by a tailoring of BMPs in the second round of permits as necessary to comply with water quality standards. Until recently, few TMDLs had been established, and citizen organizations sued EPA for its lack of action. EPA issued a new set of regulations for the TMDL Program in 2000, but the Congress prevented EPA from spending money to implement the rule in 2000 and 2001. It is possible that establishing a TMDL for a body of water could result in the application of a numeric effluent limit to outfalls that release storm water into that body of water. Some city officials we spoke with generally felt that numeric effluent limits would significantly increase the cost of managing storm water. Since World War II, urban runoff has increased throughout the United States. This increase is directly related to growth in the amount of impervious surfaces resulting from urban and suburban development and the construction of roads and highways. Coinciding with this growth in impervious surfaces has been a reduction in wetlands and in the amount of storm water that infiltrates the ground to recharge aquifers. Moreover, the loss of vegetation due to development and related runoff can cause major erosion. Ultimately, much of this runoff is channeled into gutters, storm drains, and paved channels, and vegetation and sediment removed with the runoff may end up in receiving waters. EPA has identified urban storm water runoff as one of the leading sources of pollution to the nation’s rivers, streams, lakes, and estuaries. Runoff from impervious surfaces picks up potentially harmful pollutants and carries them into receiving waters. Studies have shown that urban runoff and the pollutants it carries can negatively affect water quality, aquatic life, and public health. According to the U.S. Department of Agriculture, between 1945 and 1997, urban land area increased by almost 327 percent, from 15 million acres to about 64 million acres in the contiguous 48 states. From 1992 through 1997, development averaged about 1 million acres per year. The land developed between 1945 and 1997 came primarily from forestland and pasture and range. For example, according to the Bureau of the Census, between 1960 and 1990, the amount of land used for urban purposes in Baltimore, Maryland, and Washington, D.C., grew by about 170 percent and 177 percent, respectively. As a result, urbanization, with its accompanying expansion of impervious surfaces like sidewalks, roofs, parking lots, and roads, has significantly increased the nation’s total developed land and paved surface area. Figure 2 demonstrates the growth in the urbanized areas of Baltimore and Washington, D.C., over the last half of the 20th century. The increase in paved surfaces has been spurred not only by urban and suburban development, but also by a steady increase in the use of automobiles, the primary mode of daily transportation for most Americans. Roads also play an important role in the economy of the United States, since trucks carry about 75 percent of the value of all goods shipped. According to EPA, paved road mileage in the United States increased by 278 percent from 1945 to 1997. In 1945, 19 percent of the public roads in the country were paved; by 1997, that percentage had increased to 61. (See fig. 3.) According to a 1999 study, motor-vehicle infrastructure, such as roads and parking lots, accounts for close to half of the land area in U.S. urban areas.
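As a quick check on the cited growth figures, the percentage increase implied by the USDA acreage numbers can be computed directly; the script below simply restates the report's figures.

    # Growth implied by the USDA acreage figures cited above.
    acres_1945, acres_1997 = 15e6, 64e6
    growth = (acres_1997 - acres_1945) / acres_1945
    print(f"urban land growth, 1945-1997: {growth:.0%}")  # about 327 percent, a more than fourfold rise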
The increase in impervious surfaces over the past several decades has led to an increase in storm water runoff. In part, this has occurred because highways and other developments have reduced the amount of wetlands and other undeveloped land. Wetlands mitigate the effects of storm water runoff by acting as a natural form of flood control, facilitating sediment replenishment, and improving water quality by removing excess nutrients and other chemical contaminants before the contaminants can affect receiving waters. According to a 2000 EPA report, of the 12 states that listed wetland losses, six reported that they had significant losses due to highway construction, and 10 reported that they had significant losses due to residential growth and development. However, the effect of road building on wetland loss has been reduced in recent years. According to a Federal Highway Administration (FHWA) official, since 1996, wetlands have been replaced and restored under the Federal-Aid Highway Program at an average rate of 2.7 acres for every acre lost to highway building. Other undeveloped land with vegetation also performs some of the roles that wetlands play in managing runoff, although to a lesser extent. Furthermore, as impervious surfaces increase, less storm water is able to infiltrate through the soil to groundwater. Impervious areas allow only a very small amount of initial infiltration compared with unpaved areas, whose infiltration capacity varies depending on the soil type. Figure 4 demonstrates EPA’s estimates of the impact of impervious surfaces on the percentages of storm water that runs off, infiltrates the ground, and is lost through evapotranspiration. When natural ground cover is present over an entire site, normally 10 percent of precipitation runs off the land into nearby creeks, rivers, and lakes. In contrast, when a site is 75- to 100-percent impervious, 55 percent of the precipitation runs off into these receiving waters. However, according to an FHWA official, the runoff rates can be reduced if developers take mitigating actions to develop and implement BMPs to control flooding or runoff. The decrease in storm water infiltration that accompanies urbanization also reduces the amount of water that is available to recharge groundwater supplies. For this reason, reduced infiltration may lead to problems with the water table in certain urban areas. For example, a Massachusetts Department of Environmental Protection official noted that a low recharge rate affects water quality because it can result in a loss of wetlands and adversely affect aquatic habitat as water-table levels fall during dry weather. In addition, officials from the Charles River Watershed Association in Massachusetts are concerned that the lack of infiltration might cause some communities to run short of drinking water in the next 20 years. Urban runoff can adversely affect the quality of the nation’s waters, and urban storm water runoff has been identified as one of the leading sources of pollution to rivers, streams, lakes, and estuaries. Section 305(b) of the Clean Water Act requires states and other jurisdictions to report on the quality of their waters to EPA every 2 years. The 1998 National Water Quality Inventory Report to Congress showed that 35 percent of assessed river and stream miles, 45 percent of assessed lake acres, and 44 percent of assessed estuarine square miles were impaired in terms of their ability to support uses such as aquatic life, swimming, and fish consumption.
The 1998 inventory report identified urban storm water runoff as one of the leading sources of impairment to the assessed waters. Studies have shown that as the percentage of impervious cover increases within a watershed, biodiversity declines. Research conducted by the Center for Watershed Protection found that, generally speaking, when a watershed has 10 percent or less impervious cover, the associated stream can be categorized as sensitive. Sensitive streams are characterized as having high fish diversity and good water quality. Once the percentage of impervious cover exceeds 25 to 30 percent of the watershed, however, streams tend to become nonsupporting. Nonsupporting streams are highly unstable, have poor diversity of fish and aquatic life, and have poor water quality. (The sketch below restates these breakpoints as a simple classification rule.) For example, one study evaluated the relationship between the extent of impervious cover in watersheds and the number and diversity of fish populations in 47 small streams in southeastern Wisconsin between the 1970s and 1990s. The results revealed that the number of fish species per site was highly variable for drainage areas that had less than 10-percent imperviousness. In contrast, sites that had greater than 10-percent imperviousness had consistently low numbers of fish species. Other studies have associated urban runoff with basic changes in the receiving body of water. Runoff can carry sediment into surface water, and this sediment can carry contaminants, harm aquatic plants, and smother organisms. Runoff can also be warmed by the impervious surfaces it flows across. When sufficient amounts of warmed runoff enter a water body, the water temperature can rise. Less oxygen is then available for aquatic organisms because water holds less oxygen as it becomes warmer. These combined factors lead to the degradation of aquatic habitat. According to EPA, the common effects of these types of pollution on aquatic life include a decline in biodiversity and an increase in invasive species. An increase in the volume of storm water runoff also increases the likelihood of erosion, which allows eroded sediment to be transported downstream into receiving waters. For example, during a site visit, we observed extensive erosion along the Gingerville Creek Subbasin in Anne Arundel County, Maryland, that was caused by urban runoff channeled into the creek. Figure 5 depicts the eroded banks and channel of this creek. There have been several efforts to characterize the chemicals and other constituents in urban runoff. The Nationwide Urban Runoff Program, conducted by EPA between 1978 and 1983, examined the characteristics of urban runoff. Another federal effort to characterize urban runoff is an ongoing joint project of the U.S. Geological Survey (USGS) and FHWA to evaluate guidelines for highway runoff. As table 1 indicates, these studies and others have shown that the principal contaminants found in urban runoff include nutrients, solids, pathogens, metals, hydrocarbons, organics, salt, and trash. Water flowing over various surfaces, such as streets, parking lots, construction sites, industrial facilities, rooftops, and lawns, carries these pollutants to receiving waters. The contaminants have the potential to impair water quality, degrade aquatic ecosystems, and pose health risks to swimmers. In our visits to cities with Phase I permits and their watersheds, we identified specific instances in which these contaminants had affected water quality.
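The following sketch restates the Center for Watershed Protection breakpoints described above as code. Only the 10 percent and 25-to-30 percent thresholds come from the research cited in this report; the "intermediate" label for the middle band and the sample values are assumptions, since the report names only the "sensitive" and "nonsupporting" categories.

```python
# A minimal sketch of the impervious-cover classification rule cited
# above. The report names only the "sensitive" (10 percent or less)
# and "nonsupporting" (more than 25 to 30 percent) categories; the
# "intermediate" label for the middle band is an assumption.

def stream_category(impervious_pct):
    """Classify a stream by the impervious share of its watershed."""
    if impervious_pct <= 10:
        return "sensitive"      # high fish diversity, good water quality
    elif impervious_pct <= 25:
        return "intermediate"   # between the two cited thresholds
    else:
        return "nonsupporting"  # unstable, poor diversity, poor quality

for pct in (5, 18, 40):
    print(f"{pct:>2}% impervious cover -> {stream_category(pct)}")
```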
One such instance is the Chesapeake Bay, which has been polluted with the nutrients nitrogen and phosphorus and with excess sediment caused, in part, by urban runoff. The excess nutrients cause algal blooms that block sunlight from reaching bay grasses, which are a source of food, shelter, and nursery grounds for many aquatic species. In an effort to control nutrient pollution in the Chesapeake Bay, the Executive Council of the Chesapeake Bay Program established a goal to reduce the nitrogen and phosphorus entering the bay by 40 percent, including through control of runoff from urban areas. In addition, an assessment of the status of chemical contaminant effects on living resources in the bay's tidal rivers found "hot spots" of contaminated sediment. As a result, the Baltimore Harbor and the Patapsco River in Maryland; the Anacostia River in Washington, D.C.; and the Elizabeth River in Virginia were designated as "regions of concern." Urban storm water runoff is a significant source of contaminants in the three regions. The Chesapeake Executive Council has committed to reduce the chemicals of concern in the regions of concern by 30 percent by 2010 through pollution prevention measures and other voluntary means. Pathogens such as bacteria and viruses, which are often present in urban runoff, can pose public health problems. For example, the Santa Monica Bay Restoration Project conducted a study to identify adverse health effects of untreated urban runoff by surveying over 13,000 swimmers at three bay beaches. The study established a positive association between an increased risk of illness and swimming near flowing storm-drain outlets. Table 2 presents health outcome measures at various distances from storm drains. For example, the study found a 1-in-14 chance of fever for swimmers in front of the drain versus a 1-in-22 chance at 400 or more yards away, roughly a 57 percent higher risk. Metals and polycyclic aromatic hydrocarbons (PAHs) in urban runoff can present a threat to aquatic life. Studies have found the following:

- Storm water runoff from an urban area proved to be toxic to sea urchin fertilization in the Santa Monica Bay, and dissolved zinc and copper were determined to be contributors to this toxicity.
- Brown bullheads (a bottom-dwelling catfish) in the Anacostia River developed tumors that were believed to be caused by PAHs associated in part with urban runoff.
- High PAH and heavy metal concentrations were found in crayfish tissue samples from several urban streams in Milwaukee; the study associated these contaminants with storm water runoff.

In addition, USGS tracked trends in the concentrations of PAHs found in sediment in 10 lakes and reservoirs in six metropolitan areas over the last several decades. This study found that PAH concentrations in developed watersheds are increasing and that these increases may be linked to the amount of urban development and vehicle traffic in urban and suburban areas. For example, from 1982 to 1996, PAH concentrations in the sediment core in Town Lake (Austin, Texas) and total miles driven in greater Austin both increased by about 2.5 times. Figure 6 illustrates this correlation. Although the studies we reviewed show that certain contaminants are likely to be present in urban runoff, factors such as land development practices, climate conditions, atmospheric deposition, and traffic characteristics can all affect the characteristics of runoff from a particular area.
Therefore, given the diffuse nature of many storm water discharges and the variability of other contributing factors, characterizing the concentrations of pollutants contained in storm water runoff has been challenging. Recent USGS reports also suggest that improvements are needed in the methods used to analyze sediment and metals in runoff. To comply with federal and state Phase I permitting requirements, permitted municipalities must create and implement storm water management programs. The three primary activities in these programs are efforts to characterize storm water runoff; BMPs aimed at reducing pollutants in storm water runoff to the maximum extent practicable; and reporting of program activities, monitoring results, and costs of implementing the program. Some BMPs are structural, meaning that they are designed to trap and detain runoff until constituents settle or are filtered out. Other BMPs are nonstructural, meaning that they are designed to prevent contaminants from entering storm water through actions such as street sweeping and inspections. Many permitted municipalities use specialized BMPs tailored to address particular runoff problems in their locations. Over 1,000 cities are undertaking these efforts under the NPDES Storm Water Program, but information on the overall costs of managing urban runoff and the effectiveness of the actions taken is limited. EPA's attempts to forecast costs have not encompassed the entire program or are out of date. In addition, the permitted municipal agencies we visited estimated their annual storm water management costs and reported them to state agencies or EPA, but the approaches they used to calculate these estimates varied considerably, making it difficult to draw any conclusions. Although EPA and state agencies believe that the program will be effective in improving water quality, EPA has not made a systematic effort to evaluate the program. Without such an effort, EPA cannot tell what effect the program is having on water quality nationally. The NPDES Storm Water Program requires municipalities operating under a Phase I MS4 permit to characterize and monitor storm water runoff, implement BMPs to reduce pollutants to the maximum extent practicable, and report costs and monitoring results to the permitting authorities. Because of these requirements, local governments have generally shifted the focus of their storm water management from water quantity control, or flood management, to water quality concerns. Besides following the basic federal requirements, municipalities must follow any additional regulations developed by states that have been delegated the authority to manage the NPDES Storm Water Program. For example, Wisconsin's Department of Natural Resources broadened the requirements for determining which municipalities must get permits. The state requires local governments with storm sewer systems in priority watersheds (based on the significance of storm water runoff as a pollutant source) that serve a population of 50,000 or more to obtain a permit with requirements similar to those for a Phase I permit. Wisconsin's Department of Natural Resources also requires municipalities that are located in one of the state's five Great Lakes Areas of Concern to obtain a state permit.
Furthermore, in line with specific criteria in Wisconsin's Administrative Code, the state requires other municipalities to obtain a permit if the municipality is found to be a significant contributor of storm water pollutants to waters of the state. These various requirements increased the number of municipalities that must get permits from two under the federal requirements to over 70 under the state's requirements. The local governments we reviewed were undertaking three primary activities when applying for permits and implementing their storm water management programs. Specifically, these activities were (1) characterizing storm water runoff; (2) developing BMPs to reduce discharges of pollutants to the maximum extent practicable; and (3) reporting program activities, monitoring results, and program costs. First, to characterize runoff, applicants are to provide quantitative data that describe the volume and quality of discharges from municipal storm sewers. For example, cities must map all storm sewer outfalls, an undertaking that one group representing cities described as significant. After the permit application is approved, additional monitoring is required throughout the life of the permit to facilitate the design of effective storm water management programs and to document the nature of the storm water. The local governments we visited were all monitoring for a variety of purposes, including characterizing runoff from different types of land use in order to target their BMPs, testing the effectiveness of a particular BMP, and establishing a baseline for their storm water quality evaluations. Second, the storm water management programs that local governments develop focus on implementing BMPs. While active treatment, such as sending storm water through a treatment facility, is a possible BMP, the cities we visited were generally not using active treatment. EPA's February 2000 report on the Phase I program described the program as based on the "use of low-cost, common-sense solutions." The five cities we visited were generally using similar types of structural and nonstructural BMPs, as follows: Structural BMPs are designed to separate contaminants from storm water. For example, detention ponds temporarily hold storm water runoff to allow solids and other constituents in the runoff to settle before the water is released at a predetermined rate into receiving waters. In addition, catch-basin inserts, placed in a storm drain, catch trash and other debris, and particle separators, placed beneath the surface of an impervious area such as a parking lot, separate oils from runoff and allow sediment and debris to settle. Structural devices such as these require regular maintenance to function properly and remain effective. Nonstructural BMPs are primarily designed to minimize the contaminants that enter storm water.
These nonstructural BMPs include the following:

- "good housekeeping" practices by the local government, such as oil collection and recycling, spill response, household and hazardous waste collection, pesticide controls, flood control management, and street sweeping;
- public education programs, such as storm-drain stenciling, to remind the public that trash, motor oil, and other pollutants thrown into storm drains end up in nearby receiving waters;
- new ordinances to control pollution sources, such as prohibiting the disposal of lawn clippings in storm drains and requiring pet owners to clean up after their pets;
- requirements that developers comply with storm water regulations and incorporate erosion and sediment controls at all new development sites;
- requirements that runoff from properties owned or activities sponsored by the municipality be properly controlled; and
- efforts to identify and eliminate illicit connections and illegal discharges to the storm sewer systems, such as those from pipes carrying sewage.

We found that the NPDES Program's requirements allowed local governments to tailor their storm water management efforts to local priorities, such as a particular type of contaminant, a particular climatic condition, or a particular body of water. Some cities also developed specialized BMPs to address these concerns. The following information highlights specific storm water-related concerns in the five cities we visited and the specialized BMPs these municipalities have developed to address them. (See apps. I to V for additional information on these cities' storm water management programs.)

- In Baltimore, Maryland, excessive levels of nutrients, particularly phosphorus and nitrogen, are among the city's major water quality concerns because of the city's participation in the Chesapeake Bay Program. Baltimore City agreed to assist the state in reaching the Chesapeake Bay Program's goal to reduce nutrients discharged to the bay by 40 percent by the year 2000. According to a Chesapeake Bay Program Office representative, as of March 2001, the program had not met this goal but expected to reach it within the next several years.
- In Boston, Massachusetts, the Boston Water and Sewer Commission, which holds the permit for Boston's storm sewer system, is concerned about runoff from roadways, especially runoff containing salt and sand used in the winter months and dissolved metals (copper and zinc) from automobiles. In September 2000, the commission began a 3-year program to develop and implement a citywide catch-basin inspection, cleaning, and preventive maintenance program. The program will also include the development of a database and map that can be linked to the commission's Geographic Information System.
- Los Angeles County, California, is responding to a TMDL for trash in the Los Angeles River Watershed that will require the county, over a 10-year period, to eliminate trash in runoff. The county is testing a variety of devices that remove trash from runoff and specialized catch-basin devices that are designed to prevent trash from ever reaching the storm sewers.
- Milwaukee, Wisconsin, changed its monitoring and public education activities under its recent permit to test the effectiveness of a BMP that targets public education efforts to a specific community. The new permit also requires a monitoring program aimed at the community, its associated watershed, and city employees who work in the area.
- Worcester, Massachusetts, had a significant problem with illicit connections to its storm sewers and with flow in these sewers during dry weather. Worcester's Department of Public Works (DPW) screened 71 of its storm water outfalls and determined that 32 of them had drainage areas that carried both sanitary sewage and storm drainage in separate conduits through common manholes. DPW has retrofitted over 65 percent of the manholes to prevent sewage from mixing with storm water.

Third, local governments participating in the Phase I program are required to report annually to EPA or the state regulatory agency on their storm water programs. These reports are to include a status report on the program; a summary of data, including monitoring results collected during the reporting year; information on annual expenditures on the program and a budget for the coming year; and a description of any water quality improvements or degradation. Reliable information about the cost of implementing federal storm water requirements is limited. EPA conducted a survey to estimate the nation's future water infrastructure needs over a 20-year period, from 1996 to 2016. In its 1996 report, EPA estimated that states would require over $50 billion to meet their current (as of 1996) water infrastructure needs. The estimate consists of storm water management needs (at $7.4 billion) and CSO needs (at $44.7 billion). EPA noted, however, that the estimated storm water management needs are likely too low and could increase following an analysis of data collected to prepare the agency's 2000 clean water needs survey, which is to be released in 2002. According to EPA, many cities have implemented the Phase I program since EPA reported to the Congress in 1996, and municipalities should now be better able to provide documented cost data. As a result, EPA will need to rely less on modeled storm water needs than it did in the 1996 needs survey. EPA did not project the costs and benefits of the program when it was initiated; therefore, no initial cost estimates are available. When EPA promulgated the Phase I program regulations in 1990, the agency determined that the storm water program did not meet the criteria that would require a benefit/cost analysis. The costs to local governments of complying with the Phase I program have generally been portrayed as high. However, because of inconsistencies in cost accounting and reporting practices, we could not determine the cost of the program to several of the cities we visited. Although municipalities are required to provide information on the expenditures that they anticipate will be needed to implement their storm water management programs for each fiscal year covered by the permit, EPA has not issued any cost reporting guidelines. Consequently, while the reported fiscal year 1999 total cost to manage and treat storm water runoff across the five municipalities in our review ranged from less than $1 million (Milwaukee) to $135 million (Los Angeles County), these numbers are not comparable because the municipalities did not have consistent cost accounting and reporting practices and did not fully capture storm water management costs. For example, some cities reported only the costs of activities that were funded by the city department that held the permit. Significant activities funded by other city departments were not reported, even if they were important components of the storm water program.
Officials in the Milwaukee Department of Infrastructure Services and the Boston Water and Sewer Commission told us that other city departments perform and fund activities such as street sweeping and flood control. The costs of these activities are not reported as storm water program costs because the activities serve other purposes besides preventing storm water pollution. In addition, according to some city officials, these activities were in place before the permit was issued and, therefore, cannot be characterized solely as storm water costs. The cost of street sweeping can be significant: for fiscal year 1999, Baltimore City and Worcester, which did include street-sweeping costs in their storm water programs' cost estimates, reported street-sweeping expenses of about $9.5 million and $1.2 million, respectively. Similarly, Milwaukee did not report the cost of a significant project related to storm water runoff because it was mostly funded by the state of Wisconsin. An EPA official told us that the agency had not yet made a national effort to analyze the information that Phase I permittees submitted on the costs of their storm water programs. This official cited the inconsistent formats of the annual reports as a reason that the information was not readily available at the national level and also indicated that adequate staff are not available to analyze the data. In addition, other EPA officials informed us that the Office of Wastewater Management must divide its resources among a number of issues that will challenge the agency's water program over the next decade. Several officials in the cities we visited said that their annual costs are likely to increase, and a number of factors could affect those costs. For example, a Baltimore City official explained that anticipated future program costs depend on several factors, including (1) requirements in watershed-management plans currently being developed, (2) pollution-reduction goals the city will be required to achieve, (3) requirements of the state regulatory agency in future permits, and (4) requirements the city may have to meet if TMDLs or numeric effluent limits are incorporated into NPDES storm water permits. Other city officials also expressed concern about the extent to which TMDLs could affect their future costs. These officials are concerned that if and when TMDLs are established, their future storm water permits may require that storm water runoff meet specific water quality standards. For example, Los Angeles County's trash TMDL could drive the county's storm water management costs upward, and the county expects additional TMDLs to be imposed. On the other hand, Worcester officials estimated that their future storm water costs would be about the same as they were at the time of our review, about $4.5 million per year. In a separate analysis, EPA estimated in 1999 that it would cost Phase II municipalities about $848 million to $981 million per year (in 1998 dollars) to manage storm water runoff. Because Phase II permits had not been issued as of May 2001, we did not gather any Phase II cost information from the cities we visited. The five cities we visited had generally not obtained federal funds for their storm water management efforts. They used local sources, including general revenues, bonds, revenue from specifically created storm water utilities, state grants, and inspection and permit fees.
While several sections of the Clean Water Act provide funding that can be used for municipal storm water control, relatively little federal funding has been directed to these types of projects. The most significant source of funds is the state revolving loan fund program administered by the states. These revolving funds provide loans for eligible storm water control projects. In some cases, nonpoint source projects may also qualify for funding when storm water permits are not required or issued. However, municipal storm water management is generally a low priority in these programs. Specifically, in 2000, $38.76 million in revolving fund loans was made in the "storm sewers" category for 44 different projects. These funds represented less than 1 percent of the amounts loaned from the revolving funds that year. Activities eligible for revolving fund loans include constructing BMPs to control runoff, but support for ongoing operations and maintenance is not eligible. Revolving fund loans can also be used for eligible CSO control projects. In 2000, $411.3 million in Clean Water State Revolving Fund Program loans was recorded in the "CSO Correction" category of a national EPA database for 69 different projects; these loans could have been used for CSO or sanitary sewer overflow projects. This amount represented about 9 percent of the funds loaned in 2000. According to EPA, the agency also issues grants to universities and other research institutions to help implement the storm water program. Some of these grants provide training and guidance to Phase I permittees on watershed protection and the proper selection of BMPs. Other sources of funding may be available to local governments beginning in 2002. In December 2000, the Congress authorized programs for fiscal years 2002 through 2004 to provide grants to local governments for (1) pilot projects for managing municipal CSOs, sanitary sewer overflows, and storm water discharges on a watershed basis and for testing BMPs and (2) controlling pollutants from MS4s to demonstrate and determine cost-effective, innovative technologies for reducing pollutants from storm water discharges. EPA's proposed budget does not request funds for these programs. In addition, the Congress authorized programs for fiscal years 2002 and 2003 to provide grants to local governments for planning, designing, and constructing treatment works to intercept, transport, control, or treat municipal CSOs and sanitary sewer overflows. EPA's proposed budget requested $450 million for this program. EPA, state, and municipal officials generally believe that the NPDES Storm Water Program will improve water quality. These officials believe that the program will result in more bodies of water that meet water quality standards, improved aesthetic conditions, reduced risk from bacterial contamination, and improvements attributable to the discovery and management of pollutants in storm water that otherwise would have gone unnoticed. EPA attempted to put a dollar value on these benefits in the benefit/cost analysis prepared for the Phase II storm water regulations, estimating that such benefits could range from $672 million to $1.1 billion per year (in 1998 dollars). However, little information is currently available on the benefits of the storm water program or its general effectiveness, and it will undoubtedly take time for the results of the Phase I program to be demonstrated.
As EPA notes in its February 2000 report to the Congress, pollution control efforts under water quality management programs produce long-term changes, and the agency expects water quality improvements attributable to the Phase I program to become evident in the future, as the program matures. In this report, EPA concluded that the program has improved storm water management at the local level, improved water quality, and decreased pollutant loads in storm water. However, EPA relied on a survey of only nine Phase I cities in making these conclusions and, therefore, also reported that the agency could not provide national estimates on water quality protection and improvements generated by Phase I of the program. To evaluate the entire program, EPA would have to establish goals for the program that are based on its mission; obtain information about the program’s results; compare the results with the goals; and make changes to the program, if warranted, to get closer to achieving the agency’s goals. EPA and the states also have not taken advantage of information that is available to evaluate the program. Each city we visited was regularly monitoring its storm water to establish baseline information on pollutant levels and was reporting this information to EPA or the regulatory state agency each year. Although cities with Phase I permits are required to report on their storm water monitoring results and changes in water quality, overall, EPA and the states have not successfully developed measurable goals for the program or demonstrated its effectiveness through the review of municipal reports. An EPA official said that some states had requested funding to analyze program data because they did not have the resources to do so, and that EPA had provided the funding in a few cases. EPA also has not established any guidelines for how these data should be reported. Therefore, the reports may be as variable as the cost information we obtained in our five site visits. EPA has not yet taken any of these data-analysis steps because, according to EPA officials, other program challenges within the Office of Wastewater Management compete with storm water management efforts for priority. For example, EPA officials stressed that available resources within the office must address other significant wet-weather pollution problems, such as CSOs and sanitary sewer overflows, and nonpoint source pollution problems, such as agricultural practices, forestry, and mining. One agency official noted that the highest priority is addressing needs that the agency and local governments have identified for improving wastewater infrastructure, such as sewage treatment facilities. The program also has relatively few staff assigned—about five in the headquarters office and about 10 in the regional offices—for the municipal, industrial, and construction portions of the program. In a program plan recently prepared for the storm water program, EPA estimated that nine to 10 staff would be needed in EPA headquarters to evaluate the program and implement other program requirements. EPA officials described two efforts that may be the first steps in developing better information about the program. First, EPA intends to issue a grant to the University of Alabama in June 2001 to evaluate monitoring data submitted by a sample of municipalities with Phase I permits. 
This effort will (1) determine the different types of monitoring being conducted by Phase I municipalities, (2) assess water quality in and around permitted municipalities and determine any correlation between program implementation and impacts on water quality, and (3) recommend approaches for improving the effectiveness of municipal storm water monitoring programs. EPA expects the results of this study in 2003. Second, an EPA official stated that the agency would like to establish a system for analyzing program findings, incorporating necessary changes that are based on these findings, and evaluating the program's effectiveness. The agency plans to implement a pilot project in 2001 in the agency's Atlanta Region IV office for analyzing data reported in annual reports and developing key indicators for the program. If the project is successful and resources are available, it could be expanded. EPA regards urban runoff as a significant threat to water quality across the nation and considers it one of the principal reasons that water quality standards are not being met nationwide. Prompted by the Congress, EPA has responded with a variety of programs, including the NPDES Storm Water Program, which requires more than 1,000 local governments to implement storm water management programs. Those municipalities that are currently involved in Phase I of the program have been attempting to reduce pollutants in storm water runoff for several years, and it is time to begin evaluating these efforts. However, EPA has not established measurable goals for this program. In addition, the agency has not attempted to evaluate the effectiveness of the program in reducing storm water pollution or to determine its cost. The agency attributes this problem to inconsistent data reporting from permitted municipalities, insufficient staff resources, and other competing priorities within the Office of Wastewater Management. Although Phase I municipalities report monitoring and cost data to EPA or state regulatory agencies annually, these agencies have not reviewed this information to determine whether it can be of use in determining the program's overall effectiveness or cost. Our analysis shows that the reported cost information will be difficult to analyze unless EPA and its state partners set guidelines designed to elicit more standardized reporting. Better data on costs and program effectiveness are needed, especially in light of the Phase II program, which will involve thousands more municipalities in 2003. EPA's planned research grant to the University of Alabama and its pilot project in the agency's Region IV to analyze data from annual reports and develop baseline indicators are steps in the right direction and could point the way toward a more comprehensive approach.
To determine the extent to which activities undertaken through the NPDES Storm Water Program are reducing pollutants in urban runoff and improving water quality, and the costs of this program to local governments, we recommend that the Administrator, EPA, direct the Assistant Administrator for the Office of Water to

- establish measurable goals for the program;
- establish guidelines for obtaining consistent and reliable data from local governments with Phase I permits, including data on the effects of the program and the costs to these governments;
- review the data submitted by these permittees to determine whether program goals are being met and to identify the costs of the program; and
- assess whether the agency has allocated sufficient resources to oversee and monitor the program.

We provided a draft of this report to EPA and DOT for their review and comment. EPA generally agreed with the report and with the recommendation, although it did not explicitly comment on all parts of it. (EPA's comments appear in app. VI.) In response to our recommendation that EPA set measurable goals for the storm water program, EPA stated that under the second phase of the program, local governments will establish their own goals. Although this is an important activity, EPA will have difficulty evaluating the program's effectiveness at a national level without setting goals that reflect the program's mission of improving water quality. The agency (1) agreed that it should establish guidelines for obtaining consistent and reliable data from local governments about their programs and (2) plans to award grants to two universities for reviews of monitoring data reported by local governments. EPA did not comment on whether local governments should report on the costs of their programs. EPA also agreed that it and its state partners should review data reported by local governments to determine whether the program's goals are being met. In April 2001, EPA officials told us that the agency planned to undertake a project in the Region IV (Atlanta) office to evaluate the methods local governments are using to control storm water. EPA's letter indicates that the agency now plans to implement this project in three regional offices and 10 states. EPA did not comment on the part of our recommendation that the agency review the level of resources devoted to overseeing and monitoring the program. EPA also provided technical comments that we incorporated where appropriate. DOT generally agreed with the draft report and provided technical comments that we incorporated where appropriate. In particular, DOT suggested that we revise several references in the draft report to paved surface area and its relationship to increases in urban runoff to emphasize that impervious surfaces, of which paved surfaces are a significant subset, cause increases in runoff. We revised the language in these places. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after its issue date. At that time, we will send copies to the Administrator, Environmental Protection Agency, and the Secretary of Transportation. We will make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-2834. Key contributors to this report are listed in appendix VII.
Baltimore City’s municipal separate storm sewer system (MS4) is regulated by the Maryland Department of Environment (MDE) and, according to a city official, services the entire city. The city is currently implementing its second, 5-year National Pollutant Discharge Elimination System (NPDES) permit, issued on February 8, 1999. Before obtaining the first NPDES storm water permit in 1993, Baltimore City addressed the adverse affects of storm water runoff by implementing Maryland’s Storm Water Management Program and Erosion and Sediment Control Program. According to the 2000 census, Baltimore City’s population is about 651,000. Baltimore City’s urban runoff discharges to four major areas—Gwynns Falls, Jones Falls, Herring Run, and the Patapsco River—and then ultimately to the Chesapeake Bay. In 1990, the Environmental Protection Agency’s (EPA) 319(a) report implicated urban runoff as the main source of pollution in these waters. Moreover, Baltimore City was one of the areas studied in EPA’s Nationwide Urban Runoff Program in the 1980s. This study reported that urban runoff contributed over 60 percent of the total nitrogen, phosphorus, and organic carbon; over 70 percent of the chemical oxygen demand; and over 80 percent of the total suspended solids, lead, and zinc in local water bodies. An MDE official told us that nutrients, zinc, and suspended solids are among the constituents most commonly found in urban runoff, but the quantitative contribution to water quality impairment in the state’s waters was not known. Also, in 1996, the Chesapeake Executive Council designated the Baltimore Harbor as one of three toxic regions of concern in the Chesapeake Bay. The harbor suffers from sediment contaminated by banned substances (such as the termiticide chlordane) and contaminants currently being released (such as metals and organics). Furthermore, according to the Chesapeake Bay Program Office, data collected from Phase I permittees indicate that storm water runoff can be a significant source of metals and organics in the harbor. A Baltimore City official told us that some portions of Maryland’s waters are impaired because of unacceptable levels of nutrients, metals, suspended sediments, and chlordane. Moreover, this official noted that the state does not consider data that municipalities collect under their NPDES storm water permits during the 303(d) listing process. Therefore, he believes that streams in Maryland are much more impaired than indicated by the listing process. Like other NPDES storm water permit holders, Baltimore City uses a variety of best management practices (BMP) to reduce the amount of pollutants in runoff to the maximum extent practicable. These BMPs include detention ponds, shallow marshes (which use the biological and naturally occurring chemical processes in water and plants to remove pollutants), sand filter devices, public education programs, and the identification of illicit discharges to the MS4 system. Furthermore, Baltimore City participates in Maryland’s effort to reduce nutrient levels in the Chesapeake Bay. Refer to the section of this report describing local government efforts to manage storm water for details concerning this nutrient-reduction goal. One other BMP includes the following: Baltimore City has incorporated the 2000 Maryland Storm Water Design Manual’s management policies, principles, methods, and practices into its current NPDES storm water discharge permit. 
The purposes of the design manual are to (1) protect the waters of the state from the adverse effects of urban storm water runoff; (2) provide design guidance on the most effective structural and nonstructural BMPs for development sites; and (3) improve the quality of BMPs that are constructed in the state, with particular attention to their performance, longevity, safety, ease of maintenance, community acceptance, and environmental benefit. We were not able to obtain comprehensive information on the total cost to Baltimore City of managing storm water; therefore, we do not present that information here. Baltimore City funds its storm water management control efforts with city water and sewer user fees and with state funds. The Boston Water and Sewer Commission received an NPDES storm water permit in October 1999. The commission is a separate entity from the city of Boston and, therefore, does not manage some storm water controls that are common in Phase I permits, such as street sweeping, winter deicing, and many of the urban runoff controls required for new developments. Boston has combined sewer systems as well as separate sanitary sewers and storm drains. The commission maintains 206 storm water outfalls and serves approximately 33 percent of the city through its separate MS4 system. In addition to the resident population of about 589,000, on an almost daily basis this system also serves 340,000 commuting workers; 70,000 shoppers, tourists, and business people; and 75,000 commuting students. The commission's sanitary and combined flows are transported to the Massachusetts Water Resources Authority at Deer Island. The commission is also the permittee for EPA's Combined Sewer Overflow Program. The commission considers the identification and elimination of illegal sanitary sewer connections to be the most effective means of improving water quality and protecting public health. It is also concerned with the washoff of animal wastes from residential and open land, another major contributor to water quality impairment because it can raise coliform levels in storm water discharges to receiving waters. The commission has contracted for various studies to determine the impact of storm water runoff. Two of these studies identified sources of bacterial contamination and characterized the quality of storm water discharged from different types of land uses; the studies included metering storm water flows, collecting and analyzing storm water and receiving water quality samples, and identifying and remediating illegal sewer connections. Observations from the studies include the following:

- A 1996 study determined that pet waste, rather than sanitary sewage, was a key contributor of bacteria to the storm drain system that had possibly led to beach closings in the area.
- A 1998 study identified several illegal connections to the storm drain system. Furthermore, the study showed that deicing and sanding efforts resulted in levels of sodium, chloride, total dissolved solids, and cyanide that exceeded EPA's acute (high-dose) toxicity levels.

To meet the NPDES permit's requirements, the commission, like other permittees, continued BMPs, such as identifying illegal connections, and implemented new BMPs aimed at preventing the discharge of pollutants to storm drains and receiving waters.
Refer to the section of this report describing local government efforts to manage storm water for details about the commission's citywide catch-basin inspection, cleaning, and preventive maintenance program. Other efforts include the following:

- The commission has placed particle separators, which remove oil, grease, and sediments from storm water flows, throughout the city.
- The commission requires developers to install particle separators on all newly constructed storm drains that serve outdoor parking areas. Fuel-dispensing areas not covered by a canopy or other type of roof enclosure must also have a particle separator.
- The commission requires developers to consider on-site retention of storm water for all new projects, wherever feasible. On-site retention aids in controlling the rate, volume, and quality of storm water discharged to the commission's storm drainage system.

We were not able to obtain comprehensive information on the total cost to the commission of managing storm water because the commission does not separate the cost of its storm water program from the cost of its sewer operations; therefore, we do not present that information here. The commission funds its storm water management control efforts primarily with city water and sewer user fees and bond proceeds. Under the NPDES Storm Water Program, the Los Angeles Regional Water Quality Control Board issues 5-year permits to Los Angeles County for its municipal storm water program. The Los Angeles County permit, issued in July 1996, is the county's second storm water permit. This permit includes Los Angeles County as the principal permittee and 85 cities as permittees. According to the 2000 census, Los Angeles County's population is about 9.5 million. The effects of urban runoff on the ocean are of particular concern in southern California, where contaminated sediments, impaired natural resources, and potential human illness could threaten the county's tourism economy, estimated to be about $2 billion a year. The following three studies have shown that urban runoff can pose health risks to swimmers near storm drains and contribute toxic metals to receiving water sediments:

- The Santa Monica Bay Restoration Project conducted a study to assess the possible adverse health effects of swimming in waters contaminated by urban runoff. This study revealed an increased risk of illness associated with swimming near flowing storm-drain outlets and with swimming in areas with high concentrations of bacteria indicators. Furthermore, illnesses were reported more frequently on days when the samples were positive for enteric viruses. Refer to the section of this report describing the effects of runoff on aquatic life and human health for more details.
- The Southern California Coastal Water Research Project coordinated a study that assessed microbiological water quality and found that the majority of shoreline waters exceeded water quality standards during wet-weather conditions. Furthermore, the ocean waters near storm water outlets demonstrated the worst water quality regardless of the weather.
- The Southern California Coastal Water Research Project also compared the runoff from an urban area and a nonurban area in the Santa Monica Bay Watershed. The results of the study indicated that storm water plumes extended up to several miles offshore and persisted for a few days. Furthermore, the runoff from the urban area proved to be toxic to sea urchin fertilization, and dissolved zinc and copper were determined to be contributors to the toxicity. The study also found that in urban areas, sediments offshore generally had higher concentrations of contaminants such as lead and zinc.
As at the other sites we visited, the county is managing its runoff through the use of conventional BMPs. These BMPs include the elimination of illicit connections and discharges to the storm sewer system, construction control measures, routine inspections, staff training, pollution prevention plans for public vehicle maintenance and material storage facilities, sweeping and cleaning public parking facilities, street sweeping, catch-basin cleaning, and public education. The Los Angeles Regional Water Quality Control Board recently adopted a Total Maximum Daily Load (TMDL) Program to reduce trash loads to the Los Angeles River. As a result, the county is exploring a number of trash reduction BMPs, which are discussed in the section of this report describing local government efforts to manage storm water. Table 3 indicates that the county and the other permittees have allocated significant funding for storm water management activities over the years. For fiscal year 1999, projected funding for storm water management activities for the county and the other permittees amounted to over $134 million. The largest projections for both went toward public agency activities: during fiscal year 1999, the principal permittee and the other permittees together directed almost 67 percent of projected storm water management funds to public agency activities. The activities in this program include staff training, inspections of construction projects, street sweeping, and catch-basin cleaning. As shown in table 3, the county maintains primary responsibility for monitoring activities, having projected over $2 million for storm water monitoring activities in fiscal year 1997, almost $2 million in fiscal year 1998, and over $1.5 million in fiscal year 1999. By contrast, the permittees' projected funding levels for monitoring activities amounted to only $619,000 in fiscal year 1997, $729,000 in fiscal year 1998, and $737,000 in fiscal year 1999. According to an official with the Los Angeles Regional Water Quality Control Board, the county has consistently maintained primary responsibility for the monitoring activities required under the permit. The primary source of funds for the county's storm water program is flood control assessments collected throughout the district. Although the county has not applied for any state revolving funds, it has applied for and received approval for federal funds through the Transportation Equity Act for the 21st Century (TEA-21) for a pilot study of an engineering device that would remove trash from storm water. The county has also received partial funding through Proposition A, the Safe Neighborhood Parks measures of 1992 and 1996, for two vortex separation systems: a Continuous Deflective Separation unit and a Stormceptor unit. In addition, the county received grant money from the Metropolitan Transit Authority, which partially funded catch-basin screens, a Continuous Deflective Separation unit, and 120 catch-basin inserts. The Wisconsin Department of Natural Resources (WDNR) has the authority to regulate the discharge of storm water from municipalities, construction sites, and industries under Natural Resources Code 216.
This rule identifies Wisconsin municipalities that are required to obtain a storm water discharge permit under the Wisconsin Pollutant Discharge Elimination System (WPDES). Milwaukee completed its application process in 1994, and WDNR issued a WPDES permit to the city in October 1994. This was the first municipal storm water permit issued in EPA's Region 5, which covers the Midwest. In July 2000, WDNR reissued Milwaukee's storm water permit. According to the 2000 census, Milwaukee's population is about 597,000. Milwaukee has a combined sewer system as well as a separate sanitary sewer system. The Milwaukee Metropolitan Sewerage District implemented a rehabilitation program that cost over $2 billion to reduce the number of combined sewer overflow (CSO) events each year. The rehabilitation program involved the construction of deep tunnels to store untreated wastewater and rainwater for later treatment at a wastewater treatment plant. Since 1996, the deep tunnels have significantly reduced the number of overflow events, from an average of 50 to 60 per year before the construction to an average of two per year afterward. Urban runoff has been identified as a leading source of pollution to the Milwaukee River basin's streams, lakes, and wetlands and the Milwaukee River estuary. To address pollution from urban runoff, WDNR issues storm water permits to

- municipalities with MS4s serving areas with populations of 100,000 or more;
- municipalities in Great Lakes "areas of concern," where water quality has been identified as a serious problem;
- municipalities with populations of 50,000 or more that are located in priority watershed planning areas; and
- designated municipalities that contribute to the violation of a water quality standard or are significant contributors of pollutants to state waters.

In addition to BMPs such as the elimination of illicit connections and discharges to the storm sewer system, the reduction of pollutants in storm water runoff from construction sites, public education, catch-basin cleaning, street sweeping, and the use of detention basins, Milwaukee has explored the use of innovative BMPs. Refer to the section of this report describing local government efforts to manage storm water for more details about an educational campaign directed at a specific watershed. Additional BMPs include the following:

- An innovative storm water control device was installed in a parking lot at a heavily used municipal public works yard that was found to discharge significant amounts of storm water pollutants. Termed the Multi-Chambered Treatment Tank (MCTT), this device cleans up polluted runoff close to its source, removes pollutants that are not susceptible to other treatment methods, and is hidden from view. The MCTT consists of a catch basin, a settling chamber, and a filter. Although the results of the monitoring studies have revealed that the device has a positive effect on water quality, officials with the Department of Public Works explained that it is cost-prohibitive and suitable only for sites with limited space.
- The permittee has also been working with WDNR, the Department of Transportation, the U.S. Geological Survey, and a neighborhood association in a joint effort to develop a storm water monitoring assessment program consisting of two innovative storm water treatment devices. One device removes grit, contaminated sediments, heavy metals, and oily floating pollutants from surface runoff. The other device removes a broad range of pollutants from runoff, such as bacteria, heavy metals, nutrients, petroleum hydrocarbons, and suspended solids. The devices are to be installed along a new reach of the Milwaukee Riverwalk through the third ward of Milwaukee.
Reliable data on the total cost to manage storm water in Milwaukee were not available and cannot be presented here because certain activities are not reported as program costs in the city's annual report. These activities include street sweeping; leaf collection; catch-basin and inlet cleaning; maintenance of public boulevards, parks, and public green spaces; and the recycling of waste oil and antifreeze. Therefore, the program costs reflected in the annual report do not take into account many of the nonstructural BMPs employed by the city, nor do the totals include activities funded through grants. The storm water management activities that were included in the city's 2000 budget request were estimated to cost $460,000. Milwaukee's storm water program is primarily funded through the city's sewer maintenance fund. Unlike the general revenue account, which is based on property taxes, the sewer maintenance fund is based on water consumption. The city has also received supplemental funding from the Wisconsin Nonpoint Source Water Pollution Abatement Program in the form of WDNR grants; it has received over $1 million since 1991 for a wide variety of storm water management activities. Worcester's Department of Public Works (DPW) received an NPDES permit on November 1, 1998. The Sewer Operations Division, within the DPW, is directly responsible for operating and maintaining the city's separate storm sewer system, along with the sanitary and combined sewer system. Since 1993, the Sewer Operations Division has had a full-time storm water coordinator, reflecting Worcester's increased emphasis on meeting NPDES program requirements. Worcester has a population of about 173,000. Its sewer system covers an extensive area, including 371 miles of sanitary sewers, 340 miles of storm sewers, 56 miles of combined sewers, 27,000 manholes, over 14,000 catch basins, and 263 outfalls. Worcester's separate storm drain systems consist of 93 main drainage areas covering approximately 6,680 acres. The constituents typically found in Worcester's urban runoff are the same as those normally found in the runoff of older cities. Because virtually all of the paved surfaces in the Worcester area are devoted to the city's transportation infrastructure, the constituents generated include automobile-related petroleum products, such as total petroleum hydrocarbons and oil and grease, along with total suspended solids. Also, coliform, silt, and sediment have been identified in the city's runoff. Like other permittees, the DPW has implemented BMPs in the major areas of education outreach, pollution prevention and source controls, storm-drainage system maintenance, regulatory efforts, and storm-drainage system infrastructure. Additionally, to reduce storm water pollution, the DPW has retrofitted a number of twin manholes in the city, as discussed below. BMPs that are specific to Worcester include the following:

- The DPW implemented a demonstration project to determine the effectiveness of an oil and grit separator installed on a street drain. The drain is a major surface sewer main that serves approximately 226 acres of heavily urbanized area with a typical mix of residential, commercial, and industrial use. The drain discharges into Lake Quinsigamond, a large lake used for recreational purposes such as swimming and boating. In its April 2000 annual plan submitted to EPA, the DPW noted that, because of drought conditions, it did not yet have sufficient sampling data to determine the effectiveness of the project.
The drain discharges into Lake Quinsigamond, which is a large lake used for recreational purposes such as swimming and boating. In its April 2000 annual plan submitted to EPA, the DPW noted that, because of drought conditions, it did not yet have sufficient sampling data to determine the effectiveness of the project. The DPW has embarked on a comprehensive program to minimize the possibility that sewage and storm water will be mixed in its twin invert manholes. Since the program began, the DPW has installed hold-down devices on over 1,680 of the approximately 2,580 twin invert manholes in the city. The DPW expects to continue the program until all of the manholes have been retrofitted. The DPW is also working closely with the Massachusetts Department of Environmental Protection in its ongoing tracking efforts to ensure that industries in Worcester are doing their part to reduce storm water pollution. To improve its storm-drainage infrastructure, the city has established a voluntary plan to reduce the number of unpaved private roads. The dirt from these roads, especially after rain storms, causes sediment to build up in the drainage system. The DPW has developed a plan to pave the streets at a lower grade than would be necessary to meet the legal requirements for a public street. Under this plan, residents would not have to pay the additional betterment taxes that are now required to cover the costs of sediment removal, and less sediment would be transported in runoff. Since 1993, the DPW has allocated significant funding from the water and sewer utility fees it collects for controlling the effects of runoff, especially through catch-basin cleaning, street sweeping, and correcting illegal connections. For example, its fiscal year 1993 budget for storm water programs included about $1.6 million for specific programs and another $1 million for capital improvement programs, such as inflow/infiltration and flood control. The DPW also spent $500,000 to develop and submit its permit application. Furthermore, as shown in table 4, Worcester made extensive capital expenditures during fiscal years 1994 through 1999 on pertinent storm water projects to improve the quality of storm water runoff emanating from the city's storm water sewer system. In addition, during fiscal year 1999, the DPW spent approximately another $2.1 million to operate and maintain storm water activities. Key expenditures included about $1.2 million for street sweeping, about $617,000 for catch-basin maintenance, $52,000 for root control, and another $48,000 for street paving. Also included was $40,000 per year for sampling five outfalls around the city three times per year, as required by the permit. According to a DPW official, in previous fiscal years, the DPW funded the same or similar operation and maintenance activities to help control storm water runoff. As a result, the costs since 1994 were similar to those for 1999, except for annual adjustments for inflation. Therefore, the annual operation and maintenance expenditures ranged from about $1.7 million for 1994 to about $2.1 million for 1999. According to a DPW official, the department expects to spend from $3 million to $4.5 million annually over the next several years on storm water-related activities. The amount of the cost increase will depend on whether EPA asks the city to increase its spending. The DPW funds its storm water management control efforts from the water and sewer user fees it assesses to homes and businesses.
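The inflation-adjustment point above can be made concrete: growth from about $1.7 million in fiscal year 1994 to about $2.1 million in fiscal year 1999 implies an average annual increase of roughly 4.3 percent. A one-line check of that arithmetic in Python, using the report's rounded figures:

    # Implied average annual growth in Worcester's storm water operation and
    # maintenance spending, fiscal years 1994-1999 (rounded figures from above).
    start, end, years = 1.7e6, 2.1e6, 5
    annual_growth = (end / start) ** (1 / years) - 1
    print(f"{annual_growth:.1%}")   # -> 4.3%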
In addition to those named above, Jennifer Clayborne, Richard LaMore, Sally Coburn, Elizabeth McNally, Charles Bausell, and Timothy Guinane made key contributions to this report.
The Environmental Protection Agency (EPA) considers the contaminants in storm water runoff a significant threat to water quality across the nation. Prompted by Congress, EPA has responded with various initiatives, including the National Pollutant Discharge Elimination System Storm Water Program, which requires more than 1,000 local governments to undertake storm water management programs. The municipalities in Phase I of the program have been trying to reduce pollutants in storm water runoff for several years, and it is time to begin evaluating their efforts. EPA, however, has not established measurable goals for this program, nor has it attempted to evaluate the program's effectiveness in reducing storm water pollution or to determine its cost. EPA attributes its inaction to inconsistent data reporting from municipalities, insufficient staff resources, and other competing priorities within the Office of Wastewater Management. Although municipalities report monitoring and cost data to EPA or state regulatory agencies annually, these agencies have not reviewed this information to determine whether it can be useful in assessing the program's overall effectiveness or cost. GAO found that the reported cost information will be difficult to analyze unless EPA and its state partners set guidelines to elicit more standardized reporting. Better data on costs and program effectiveness are needed, especially in light of the Phase II program that will involve thousands more municipalities in 2003. EPA's planned research grant to the University of Alabama and its pilot project to analyze data from annual reports and develop baseline indicators are steps in the right direction and could point the way to a more comprehensive approach.
Personal services contracts are a type of contract in which the government exercises relatively continuous supervision and control over the individuals performing the work. FPDS-NG reports that the federal government obligated approximately $1.5 billion on personal services contracts from fiscal years 2011 through 2015. General guidance on personal services contracts is laid out in the Federal Acquisition Regulation (FAR). In addition, there are federal statutes giving specific authority to agencies to award personal services contracts. Agencies such as DOD and USAID also have developed supplemental regulations for approving, overseeing, and administering such contracts. Agencies may award personal services contracts for a variety of services under specific statutory authority. Some examples of services performed by personal services contractors include medical services and management support for agency operations such as disaster relief. Lastly, the Office of Federal Procurement Policy (OFPP) within the Office of Management and Budget has issued guidance on defining and managing the performance of inherently governmental and critical functions. FPDS-NG is a comprehensive, web-based tool for agencies to report contract transactions. It is a searchable database of contract information that provides a capability to examine data across government agencies and provides managers a mechanism for determining where contract dollars are being spent. The contracting officer who awards a contract has responsibility for the accuracy of the individual contract action information entered in FPDS-NG. Agencies are responsible for developing a process and monitoring results to ensure timely and accurate reporting of contractual transactions in FPDS-NG and are required to submit certifications about the accuracy of contract reporting to the General Services Administration (GSA). According to GSA, these certifications collectively demonstrate that the data in FPDS-NG currently have an overall accuracy rate of 95 percent. We previously have reported on some of the shortcomings of the FPDS-NG system and its predecessors. Nevertheless, we routinely use data from FPDS-NG, but only after determining, through various means, that the data we use are sufficiently reliable for our specific reporting purposes. Part 37 of the FAR prescribes policy and procedures specific to the acquisition and management of services by contract. Sections 37.103 and 37.104 specifically discuss contracting officer responsibilities and provide descriptive elements for assessing whether a proposed contract is a personal services contract. According to the FAR, the employer/employee relationship can occur either as a result of the contract’s terms or in the manner of administration of the contract. The FAR notes that each contract arrangement should be judged in the light of its own facts and circumstances, with the primary question being whether the contractor’s personnel are subject to the relatively continuous supervision and control of government personnel. 
The FAR enumerates the following characteristics of personal services contracts:
- performance on a government site;
- principal tools and equipment furnished by the government;
- services applied directly to the agency mission;
- comparable services are performed in similar agencies using civil service personnel;
- the need for the type of service can reasonably be expected to last more than 1 year; and
- the nature of the service, or the way that it is performed, reasonably requires government direction or supervision of the contractor's employees to adequately protect the government's interest, retain control of the function, or retain full responsibility for the function.
Agencies also may have supplemental regulations to the FAR. Before awarding some personal services contracts, the Department of Defense Federal Acquisition Regulation Supplement (DFARS) requires, for example, a determination that asserts, among other things, that a nonpersonal services contract—a contract not directly supervised by government employees—is not practicable and that cites the relevant statutory authorities. The USAID Acquisition Regulation provides references to statutory authority and describes the kinds of tasks U.S. citizens may be assigned as personal services contractors, including some duties that might otherwise be assigned to direct-hire employees. Since 2008 for DOD and 2009 for civilian agencies, Congress has required agencies to prepare an annual inventory of contracted services, covering the preceding fiscal year. The inventories are to include a number of data elements for each entry, including a description of the services, the total dollar amount obligated, the number of contractor personnel expressed as full-time equivalents for direct labor, and whether the contract is a personal services contract. Agencies also are required to review their inventories to, among other things: ensure that each contract that is a personal services contract has been entered into, and is being performed, according to laws and regulations; ensure that the agency is not using contractor personnel to perform inherently governmental functions; and identify activities that should be considered for conversion to performance by federal employees. These inventories are intended, in part, to help provide better insight into the number of contractor full-time equivalents providing services and the functions they are performing, and to determine whether any of these functions warrant conversion to performance by government employees. We have previously reported on challenges with developing and using the inventory of contracted services and have made recommendations for DOD to revise inventory guidance to improve the review of contract functions, approve a plan of action with milestones and time frames to establish a common data system to collect contractor manpower data, and designate a senior management official at the military departments to develop plans to use inventory data to inform management decisions. DOD concurred with our recommendations. In 2011, OFPP issued guidance, OFPP Policy Letter 11-01, on the performance of inherently governmental and critical functions. The guidance was intended to assist agencies in ensuring that only federal employees perform work that is inherently governmental. The guidance contained examples of the types of work that would be considered inherently governmental.
Some examples include determination of budget policy, hiring decisions for federal employees, the direction and control of intelligence or counterintelligence operations, and administering contracts, among others. The FAR states that contracts shall not be used to perform inherently governmental functions, but the regulation provides that this prohibition does not apply to personal services contracts issued under statutory authority. We cannot confirm the extent to which personal services contracts are awarded at DOD because we found discrepancies at two DOD agencies whose contracts we examined. Specifically, although FPDS-NG reports that DOD spent about $118 million on personal services contracts in fiscal year 2015, we found that personal services contract obligations from the Air Force and Army were overstated in the FPDS-NG data because they included obligations that were not for personal services contracts. In addition, we identified personal services contracts in the inventory of contracted services data for the Air Force, Army, and Navy that were not recorded as such in FPDS-NG. We did not identify similar issues at USAID, which reported spending more than $123 million on personal services contracts in fiscal year 2015. For both DOD and USAID, however, we observed that the extent to which personal services contracts are used may be undercounted, since some contracts for nonpersonal services share many of the characteristics of personal services contracts and could, in fact, be administered as personal services contracts. We found that the extent to which personal services contracts are used by DOD may be overstated in FPDS-NG, based on our review of selected files and interviews with contracting officials. Specifically, documentation in contract files for some Air Force and Army contracts did not support the classification as a personal services contract reported in FPDS-NG. Contracting officers are tasked with ensuring the accuracy of the data captured in FPDS-NG, but, in total, 17 of the 45 DOD contracts we reviewed—more than one-third—were incorrectly coded. The results of our examination of the selected contracts for each agency follow. Air Force: We found that 4 of the 15 contracts reviewed were incorrectly reported as personal services contracts in FPDS-NG. We confirmed this with Air Force officials. Documentation in the contract files for the 4 contracts indicated that the product service code was not correct. For example, one incorrectly coded Air Force contract was for the Air Force Tricare liaison to coordinate referrals and ensure that medical paperwork was provided to external providers for continuity of care, work that, according to the contract's performance work statement, did not involve direct supervision or control by government staff—a defining feature of personal services contracts. The correctly coded contracts were all for medical personnel, such as dental assistants, nurses, and pharmacy technicians, at various Air Force installations. Army: We found that 13 of the 15 contracts we selected were incorrectly coded in FPDS-NG as personal services contracts. Of the 13 incorrectly coded contract actions we reviewed, 2 were task orders for billeting services. An Army official stated that the product service code cited in the base contract was incorrect at the time of the initial award and was then applied to subsequent task orders. Eleven other contracts did not constitute personal services contracts based on our review of the statements of work.
For example, in one incorrectly coded Army contract, the contractor was required to present six separate seminars but was not subject to the relatively continuous supervision and control of government staff, a defining characteristic of personal services contracts. Army officials confirmed that the original product service codes recorded in FPDS-NG were incorrect for these 13 contracts. The two correctly coded contracts were for engineering services in Iraq. Navy: We found that the 15 Navy contracts in our sample reported as personal services contracts in FPDS-NG were all correctly coded, and the designation was supported in the contract files. All of the contracts we reviewed were for health care-related services at U.S. Naval Hospital, Guam, including pharmacy technicians and a registered nurse. USAID: We found that all 15 contracts selected for review based on the product service code reported in FPDS-NG had documentation in the contract file to support the personal services contract designation. Agency officials stated that the distinction between personal services contracts and nonpersonal services contracts is sometimes difficult to determine, and that deciding that a particular contract is a personal services contract is subjective and depends on the interpretation of tasks and supervision. According to section 37.103 of the FAR, the contracting officer is responsible for ensuring that a proposed contract for services is proper. For personal services contracts, the contracting officer must document the file with a statement of the facts and rationale supporting a conclusion that the contract is specifically authorized by statute. Further, according to Standards for Internal Control in the Federal Government, management is responsible for the design and execution of appropriate types of control activities that ensure the proper execution of transactions. This includes appropriate documentation of transactions to ensure that reliable information is available for making decisions, as well as the proper supervision of contractors. The second source we used for information about DOD's personal services contracts, DOD's annual inventories of contracted services, differed from FPDS-NG in the reporting of personal services contracts information. Figure 1 depicts the extent to which DOD personal services contracts appeared in both the inventory of contracted services and in FPDS-NG. The FPDS-NG data for the Air Force, Army, and Navy differed substantially from the inventory, as depicted in figure 1. For each of the military departments, the inventory of contracted services contained references to personal services contracts not recorded as such in FPDS-NG. For both the Air Force and the Army, there was little commonality between the personal services contracts identified in FPDS-NG and those in the inventories. Although all of the Navy's personal services contracts that were identified in FPDS-NG were included in the Navy's inventory of contracted services, the Navy's inventory included 14 additional personal services contracts not identified in FPDS-NG. The discrepancies between FPDS-NG and the inventory of contracted services could be explained by a variety of circumstances. We have reported, and agency officials agreed, that the inventories are developed in different ways.
For example, in the case of the Navy, officials stated that they develop the inventory using information from both FPDS-NG and the Enterprise-wide Contractor Manpower Reporting Application, a system used by contractors to self-report information. Officials stated that identifying whether a contract was for personal services was one of the data fields to be completed by the contractor, but contractors may not be knowledgeable about the characteristics of personal services contracts. We did not find discrepancies between the personal services contracts in FPDS-NG and USAID's inventory. USAID uses data from FPDS-NG to develop its annual inventory of contracted services. According to Standards for Internal Control in the Federal Government, it is the responsibility of management to ensure that reliable information is available for making decisions and for the proper supervision of contractors. An accurate account of the use of personal services contracts assists agencies in properly understanding manpower requirements, evaluating risks, and determining whether adjustments are needed. The inconsistency in the data reported from the two sources hinders the ability of agency managers to understand the extent to which they are using personal services contracts and how they are used; a simple sketch of the kind of cross-source check that reveals such inconsistencies appears at the end of this passage. Apart from the inaccuracies and differences in data reported in FPDS-NG and the inventories of contracted services, it is also possible that personal services contracts could be undercounted because nonpersonal services contracts could be administered in a manner that results in their actually being personal services contracts—and potentially unauthorized personal services contracts. In our sample of 40 contracts coded as engineering and technical services or other professional services contracts—nonpersonal services—we did not assess the contract administration and, therefore, did not identify examples of a contract that was a personal services contract due to being administered in a way that resulted in direct supervision of a contractor by government personnel. However, we note that relatively small changes in the tasks or supervision could result in some of the nonpersonal services contracts we reviewed being administered as personal services contracts. For example, many of the contracts involved the contractor performing critical tasks, with performance occurring in a government workspace. While the statement of work required the contractor (not the government) to provide supervision, given the critical nature of the tasks performed and the co-location of contractors and government personnel, there is an opportunity for government officials to exercise continuous supervision and control over the contractor so that the contract would become a personal services contract. Contracting officials for these contracts emphasized that these contracts were not personal services contracts since they did not entail the relatively constant supervision of the contractor staff by government officials. The officials acknowledged, however, that just a slight change in the administration of these contracts could convert them into personal services contracts. Officials also stated that, in some cases, performance of selected tasks by contractor staff could be an area where it would be challenging to say whether a particular activity constituted personal services or not.
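The cross-source check behind figure 1 amounts to a set comparison of contract identifiers drawn from the two data sources. The Python sketch below illustrates the idea; the contract numbers are hypothetical stand-ins, not actual awards, and this is not the reconciliation procedure we used:

    # Contracts flagged as personal services in each source (hypothetical IDs).
    fpds_psc = {"FA8771-14-D-0002", "W91ZLK-14-C-0001", "N00024-14-C-0003"}
    inventory_psc = {"FA8771-14-D-0002", "N00024-14-C-0003", "N00189-14-C-0004"}

    in_both = fpds_psc & inventory_psc          # recorded consistently in both sources
    fpds_only = fpds_psc - inventory_psc        # in FPDS-NG but missing from the inventory
    inventory_only = inventory_psc - fpds_psc   # in the inventory but missing from FPDS-NG

    print(len(in_both), len(fpds_only), len(inventory_only))   # 2 1 1

A nonempty difference in either direction flags a contract recorded inconsistently across the two sources, the pattern observed for each of the military departments.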
In our review of the 40 nonpersonal services contracts awarded by the Air Force, Army, Navy, and USAID in fiscal year 2014, we found that a number of contracts had several characteristics common to personal services contracts, based on documentation in the contract file and the FAR's descriptive elements. To illustrate, the following four contracts, one from each agency, demonstrate the similarities to personal services contracts. For each contract below, either the contract or discussions with contracting officials specified that the Contracting Officer's Representative (COR) served as the liaison between the government and the contractor, but other aspects of the contract meet many of the characteristics of personal services contracts enumerated in the FAR. An Air Force contract for engineering cost support services includes specific tasks such as preparing program office estimates, cost-benefit analyses, and sufficiency reviews of prime contractor estimates, and evaluating costs. The contractor acts as a liaison between the program office and auditors from agencies such as the Air Force Audit Agency, DOD's Office of the Inspector General, and the Government Accountability Office. Other duties entail preparing a monthly acquisition report, program management reviews, budget management reviews, and spring and fall program reviews. An Army contract for support to Army Comprehensive Soldier and Family Fitness Training Centers specifies various tasks. One task has an operations manager serving as the co-chair/member of the Walter Reed Army Institute of Research and as a member of the Army Fit Content Review Board. A second task has the operations manager facilitating external research projects from initial planning to implementation. A third task is managing multiple facets of curriculum development and review. A fourth task is for a public affairs specialist to be responsible for planning, developing, and executing strategic public affairs programs. A Navy contract to provide engineering and technical services for control systems and information systems required life-cycle support to software systems and major acquisition programs and support of Navy policies for acquisition of software-intensive systems, including preparing test plans and participation in an executive steering group. A USAID contract to provide surge services for administrative functions such as the development of policy in the areas of event management, meeting and retreat facilitation, curriculum development, project design, and program and evaluation to support USAID's mission. USAID and DOD have multiple authorities available for awarding personal services contracts. However, contract files at USAID did not cite the correct authority for the 15 contracts we reviewed. Additionally, USAID and DOD personal services contracts are used to support differing missions and entail different kinds of tasks. USAID has permanent authority to award personal services contracts under the Foreign Assistance Act of 1961, as amended.
USAID also received authority in its fiscal year 2014 appropriation for some personal services contracts. For the 15 domestic USAID personal services contracts we reviewed, the authority cited in the contract file was a provision of the Foreign Assistance Act of 1961 that authorizes personal services contracts only abroad—outside of the United States—and an executive order. USAID officials acknowledged that the authority cited in the contracts was not the relevant authority. However, they stated that other authority pertaining to disaster relief in the Foreign Assistance Act of 1961 authorized the use of the domestic personal services contracts we reviewed. We did not find evidence of the correct authority documented in the files, as required under the FAR. USAID acknowledged these documentation errors during the course of our review and shared steps it had taken to revise its personal services contracts documentation. For example, USAID had revised its cover sheet for personal services contracts, listing the possible authorities with a check box to indicate the authority relevant to that contract. However, USAID had not yet developed a process to determine whether the availability of the cover sheet would ensure that contracting officials cite the specific and correct authority. DOD's statutory authorities for the use of personal services contracts include authority for personal services for health care, among others. The DOD contracts we reviewed cited statutory authority or the DFARS, which, in turn, references the relevant statutory authority. Table 1 shows the authority cited for the DOD personal services contracts we reviewed. USAID's personal services contracts that we reviewed cover a broad range of activities, including program management, security analysis, and logistics, among others. In contrast, the majority of DOD's personal services contracts that we reviewed are more narrowly focused on medical personnel. Another difference between USAID and DOD is the use of personal services contracts to conduct inherently governmental tasks. USAID's supplemental regulation stipulates that personal services contractors can perform any duty a government employee might perform, with few exceptions. According to DOD officials, it is not DOD's practice to assign personal services contractors to perform inherently governmental tasks. As explained in USAID's supplemental regulation, USAID's personal services contractors who are U.S. citizens may be delegated or assigned any authority, duty, or responsibility that direct-hire employees might have, with some exceptions, such as acting as a contracting officer. Inherently governmental tasks are those that would ordinarily be performed only by government employees, such as making decisions about the priorities for budget requests, directing intelligence operations, or awarding contracts; examples of such tasks are laid out in the FAR and OFPP Policy Letter 11-01. The FAR's general prohibition on the use of contractors to perform inherently governmental tasks does not apply to personal services contracts issued under statutory authority. USAID officials confirmed that the tasks in some contracts we reviewed include inherently governmental tasks, as illustrated in the two examples below. Security Analyst: This contractor is responsible for a variety of tasks, including analyzing large volumes of security data and reports to make decisions or recommendations shaping agency programs.
In addition, the contractor develops strategies for major areas of uncertainty in domestic and international political, social, or economic policies, trends, or situations that have potentially significant repercussions for the agency. The contractor develops the organization's position on controversial or disputed issues. These tasks are considered inherently governmental, according to the FAR and OFPP Policy Letter 11-01. Senior Program Manager: This contractor is responsible for a variety of tasks, including performing complex country analysis and program design to develop existing and future programs and strategies in high-priority countries. In addition, the contractor manages or participates in the selection of grantees, contractors, and other personal services contractors. These tasks are considered inherently governmental, according to the FAR and OFPP Policy Letter 11-01. The majority of DOD's personal services contracts we reviewed were awarded to obtain medical services from practitioners such as doctors, nurses, and pharmacists. For example, for the Navy, all 15 personal services contracts were for medical services. This was also the case for 11 contracts from the Air Force. The Army's personal services contracts in our sample were for engineering services abroad. Agencies need accurate information about their personal services contracts in order to ensure that government supervision of the work is appropriate. Without such information, agencies lack a sound basis for managing their programs. The Air Force and Army had significant errors in reporting the use of personal services contracts, and USAID consistently cited the incorrect authority for awarding the personal services contracts we reviewed. Therefore, there is room for improving procedures to help ensure that accurate information is recorded. USAID has taken initial steps to revise its documentation but has not yet developed a process to determine whether the steps taken will result in increased accuracy. Personal services contracts are important to understand and track because the contractors are directly supervised by government personnel, much as government employees would be. Because of their organic relationship to the work of government, it is incumbent on government agencies to have credible, accurate information about the number of these contracts and the authorities under which they are awarded. The absence of such reliable and credible information hinders the ability of government managers to determine whether there are sufficient government personnel to carry out inherently governmental work and to properly oversee the work of contractors to ensure that the government remains responsible for the execution of approved government functions and for managing the agency's work. To ensure accurate reporting of personal services contracts, we make the following two recommendations: The Secretary of Defense should direct the Secretaries of the Air Force and the Army to take steps to ensure the accurate recording of personal services contracts in the Federal Procurement Data System-Next Generation. The Administrator of the United States Agency for International Development should implement periodic reviews of selected personal services contracts to ensure the effectiveness of steps taken to assist contracting officers in citing the correct statutory authority for personal services contracts. We provided a draft of this report to DOD and USAID for their review and comment.
In written comments reprinted in appendixes II and III, both DOD and USAID concurred with our recommendations and described the actions they plan to take. DOD stated that the Director, Defense Procurement and Acquisition Policy, will issue a memo to the Army and Air Force Senior Procurement Executives directing them to take appropriate steps to ensure the accurate recording of personal services contracts. USAID stated that the agency had revised and distributed a cover sheet to a standard form, which it believed would result in greater accuracy in citing the authorization for domestic personal services contracts. Consistent with our recommendation, USAID has revised the checklist it uses for reviewing and validating key acquisition functions. The agency will use the checklist in its annual procurement systems reviews to verify that contracting officials cite the correct authority. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Administrator, United States Agency for International Development; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the Under Secretary of Defense for Personnel and Readiness; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or at woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report examines (1) the extent to which selected agencies award personal services contracts and (2) how those agencies use personal services contracts. To identify the extent to which the government reports awarding personal services contracts, we analyzed data from the Federal Procurement Data System-Next Generation (FPDS-NG). We selected contracts, excluding modifications, identified as personal services contracts based on product service code R497—a code reserved for personal services contracts—for contracts awarded in fiscal years 2011 through 2015. Fiscal year 2015 was the latest year with certified FPDS-NG data at the time we started our review. We identified 11 agencies or departments that reported obligations for personal services contracts. We analyzed the data and identified the Department of Defense (DOD) and the United States Agency for International Development (USAID) as the two agencies that reported the highest obligations for personal services contracts in fiscal years 2011 through 2015. Although we identified inaccuracies in some of the data in FPDS-NG, as discussed in this report, we discussed the data and their limitations with agency officials and determined that the data from FPDS-NG were sufficiently reliable for the purposes of selecting the agencies with the highest obligations on personal services contracts and obtaining a sample of contracts. We also identified the four agencies—the Air Force, Army, and Navy within DOD, and USAID—with the highest obligations for personal services contracts using FPDS-NG data. These agencies account for nearly 60 percent of the spending on such contracts in fiscal year 2014.
We reviewed a nongeneralizable random sample of 60 contracts coded as personal services contracts in FPDS-NG—15 contracts from each agency (Air Force, Army, Navy, and USAID). The sample was drawn from all contracts in fiscal year 2014 that reported obligations for personal services contracts equal to or greater than $10,000. We reviewed the files to determine the specific statutory authority cited for awarding the personal services contract, the tasks performed by the contractor, the supervision provided, and the duration of the contract, including options. We also obtained policy documents and supplemental regulations from the agencies detailing agency responsibilities with respect to personal services contracts and interviewed agency officials. We compared the data reported in FPDS-NG, such as the contract number and award value, to information in the selected contract files and determined that the FPDS-NG data were sufficiently reliable for the purposes of selecting our sample. We determined that a number of contracts identified by the Army and the Air Force as personal services contracts were miscoded, based on documentation in the contract file and discussions with agency officials. To obtain additional information on the extent to which agencies use personal services contracts, we also examined the data on personal services contracts from the publicly available inventories of contracted services for the Air Force, Army, Navy, and USAID for fiscal year 2014. Fiscal year 2014 was the latest year with certified inventory data at the time of our review. These inventories are congressionally required compilations of services contracts intended to provide insight into the kinds of services purchased and the number of contractor personnel involved. We discussed the preparation of the inventories with agency officials and reviewed our prior reports on inventories. However, examination of the inventory of contracted services data for the Air Force, Army, and Navy did not resolve the discrepancies we found between DOD's FPDS-NG data and the inventory of contracted services data. Based on our review of the FPDS-NG data, reviews of the selected service contract inventory data, selected contract files, and interviews with DOD and USAID officials, we determined that the FPDS-NG data are not sufficiently reliable for comparing obligations from year to year for personal services contracts or for determining the extent to which DOD awarded personal services contracts. We present data on obligations for illustrative purposes only. To determine how DOD and USAID use personal services contracts, we reviewed contract files to determine the authority cited for awarding the contracts and analyzed the statements of work, which define the kinds of services required under the contracts. To further explore the differences in how these agencies use personal services contracts and other types of service contracts, we also reviewed a different nongeneralizable random sample of 40 contracts that were coded as engineering and technical services or other professional services contracts awarded by the Air Force, Army, Navy, and USAID in fiscal year 2014 (10 contracts from each agency). We selected these categories of services because they are similar to the types of services performed by personal services contractors and constituted a majority of the services contracts awarded by DOD and USAID.
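For illustration, the selection just described can be sketched as a filter-and-sample pass over an FPDS-NG data extract in Python. The file name and column names below are assumptions made for the sketch, not the actual FPDS-NG schema:

    import pandas as pd

    # Hypothetical FPDS-NG export; column names are assumed for illustration.
    fpds = pd.read_csv("fpds_ng_extract.csv")

    candidates = fpds[
        (fpds["product_service_code"] == "R497")   # code reserved for personal services
        & (fpds["fiscal_year"] == 2014)
        & (fpds["obligated_amount"] >= 10_000)
    ]

    # Nongeneralizable random sample: 15 contracts from each of the four agencies.
    sample = candidates.groupby("contracting_agency").sample(n=15, random_state=1)
    print(sample[["contracting_agency", "contract_id", "obligated_amount"]])

The sample is drawn per agency rather than from the pooled set so that each agency contributes equally, mirroring the 15-per-agency design described above.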
We did not review contractor performance or contract administration for this report. We compared the data reported in FPDS-NG, such as the contract number and award value, to information in the selected contract files and determined that the FPDS-NG data were sufficiently reliable for the purposes of selecting our sample. The sample of contracts, both personal and nonpersonal, included in our review is not generalizable to a larger universe but is designed to provide illustrative examples of the characteristics and use of personal services contracts at the selected agencies and components, and to allow comparison of the characteristics of personal and nonpersonal services contract awards. We reviewed the Federal Acquisition Regulation (FAR) and obtained supplemental regulations and policy documents, detailing agency responsibilities with respect to personal services contracts, from the Office of Federal Procurement Policy (OFPP) within the Office of Management and Budget and from the agencies we reviewed. We interviewed agency personnel concerning their responsibilities for awarding personal services contracts, for preparing the data entered in FPDS-NG, for preparing the annual inventory of contracted services, and for reviewing the contracts subsequent to inventory preparation. To gain further insight into FPDS-NG, agency-specific service contract inventories, and contract files, we interviewed officials from the Air Force, Army, Navy, USAID, and the Office of the Secretary of Defense (OSD), including OSD General Counsel and OSD's Total Force Manpower and Resources Directorate. We also interviewed officials from the Office of Management and Budget's OFPP regarding the government-wide use of personal services contracts. We conducted this performance audit from February 2016 to July 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Penny Berrier, Assistant Director; John Beauchamp; Stephanie Gustafson; Kristine Hassinger; Julia Kennon; Carol Mebane; Jean McSween; Kate Pfeiffer; Roxanna Sun; and Abby Volk made significant contributions to this review.
A personal services contract is one under which the government exercises relatively continuous supervision and control over contractor personnel, in effect making them appear to be government employees. These contracts must be authorized by federal law. According to FPDS-NG, the government reported obligating about $1.5 billion on personal services contracts in fiscal years 2011 through 2015. GAO was asked to examine the federal government's use of personal services contracts. This report discusses (1) the extent to which selected federal agencies award personal services contracts, and (2) how those agencies use them. GAO identified the four agencies spending the most on personal services contracts—the Air Force, Army, Navy, and USAID—as reported in FPDS-NG. These agencies account for about 60 percent of total spending on these contracts. GAO also reviewed the service contract inventories these agencies prepared for fiscal year 2014, the latest year available at the time of this review. GAO reviewed the files for a nongeneralizable sample of 60 personal (15 at each agency) and 40 nonpersonal services contracts (10 at each agency) and interviewed agency officials. GAO did not review the administration of the contracts. The United States Agency for International Development (USAID) spent more than $123 million on personal services contracts in fiscal year 2015, according to the Federal Procurement Data System-Next Generation (FPDS-NG). But GAO cannot confirm the extent to which personal services contracts are awarded by the Department of Defense (DOD) because GAO identified significant reporting errors at two DOD agencies—the Air Force and the Army. Specifically, 4 of the 15 Air Force contracts and 13 of the 15 Army contracts GAO reviewed were incorrectly recorded in FPDS-NG as personal services contracts. Defense officials agreed with this assessment. Further, the fiscal year 2014 inventories of contracted services at the Air Force, Army, and Navy contained personal services contracts not captured in FPDS-NG, as shown in the figure below. Apart from the inaccuracies of the reported data, GAO observed, and agency officials agreed, that additional undercounting could exist since some contracts for nonpersonal services could become personal services contracts, depending on whether the contract involves direct supervision by government employees. In the absence of accurate data, proper management of personal services and other contracts becomes more difficult. The military departments and USAID use personal services contracts differently. The DOD personal services contracts GAO reviewed were mostly for health care services. As permitted under its regulations, USAID uses personal services contracts for a broader range of functions, such as program management, security analysis, and logistics, some of which are considered tasks that only government employees should perform—inherently governmental activities. Federal regulations that prohibit contractors from performing such activities do not apply to authorized personal services contracts. DOD's practice is not to use personal services contracts for inherently governmental tasks. DOD and USAID have multiple authorities for awarding personal services contracts, but none of the files GAO reviewed at USAID cited the correct authority for personal services contracts performed in the United States. USAID has taken steps to address this issue but has not yet determined whether these steps will be effective.
GAO recommends that the Secretary of Defense direct the Air Force and Army to take steps to ensure the accurate recording of personal services contracts in FPDS-NG, and that USAID ensure the correct authority is cited for personal services contracts performed in the United States. DOD and USAID concurred with GAO's recommendations.
As the central human resources agency for the federal government, OPM is tasked with ensuring that the government has an effective civilian workforce. To carry out this mission, OPM delivers human resources products and services including policies and procedures for recruiting and hiring, provides health and training benefit programs, and administers the retirement program for federal employees. According to the agency, approximately 2.7 million active federal employees and nearly 2.5 million retired federal employees rely on its services. The agency’s March 2008 analysis of federal employment retirement data estimates that nearly 1 million active federal employees will be eligible to retire and almost 600,000 will most likely retire by 2016. According to OPM, the retirement program serves current and former federal employees by providing (1) tools and options for retirement planning and (2) retirement compensation. Two defined-benefit retirement plans that provide retirement, disability, and survivor benefits to federal employees are administered by the agency. The first plan, the Civil Service Retirement System (CSRS), provides retirement benefits for most federal employees hired before 1984. The second plan, the Federal Employees Retirement System (FERS), covers most employees hired in or after 1984 and provides benefits that include Social Security and a defined contribution system. OPM and employing agencies’ human resources and payroll offices are responsible for processing federal employees’ retirement applications. The process begins when an employee submits a paper retirement application to his or her employer’s human resources office and is completed when the individual begins receiving regular monthly benefit payments as calculated by OPM. Processing retirement claims includes functions such as determining retirement eligibility, inputting data into benefit calculators, and providing customer service. To do so, the agency uses over 500 different procedures, laws, and regulations, which are documented on its internal website. For example, the site contains memorandums that outline new procedures for handling special retirement applications, such as those for disability or court orders. In addition, OPM’s retirement processing involves the use of over 80 information systems that have approximately 400 interfaces with other internal and external systems. OPM has stated that the federal employee retirement process does not provide prompt and complete benefit payments upon retirement, and that customer service expectations for more timely payments are increasing. The agency also reports that a greater workload is expected due to an anticipated increase in the number of retirement applications over the next decade, although current retirement processing operations are at full capacity. Further, the agency has identified several factors that limit its ability to process retirement benefits in an efficient and timely manner. 
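Because the benefit calculators OPM uses apply statutory formulas, a simplified sketch conveys the core of what they compute. The Python functions below implement only the basic CSRS and FERS annuity formulas; they are an illustration, not OPM's calculators, and they ignore the many special cases (survivor elections, early-retirement reductions, unused sick leave, special employee categories) that OPM's systems must handle:

    def fers_basic_annuity(high3: float, years: float, age: int) -> float:
        """FERS basic annuity: 1% of high-3 average salary per year of
        service, or 1.1% for retirement at age 62 or later with 20+ years."""
        factor = 0.011 if (age >= 62 and years >= 20) else 0.01
        return high3 * factor * years

    def csrs_basic_annuity(high3: float, years: float) -> float:
        """CSRS basic annuity: 1.5% of high-3 for the first 5 years of
        service, 1.75% for the next 5, and 2% for each year over 10,
        capped at 80% of the high-3 average salary."""
        first5 = min(years, 5)
        next5 = min(max(years - 5, 0), 5)
        over10 = max(years - 10, 0)
        annuity = high3 * (0.015 * first5 + 0.0175 * next5 + 0.02 * over10)
        return min(annuity, 0.80 * high3)

    # A FERS employee retiring at 62 with 25 years and a $90,000 high-3:
    print(fers_basic_annuity(90_000, 25, 62))   # 24750.0
    # A CSRS employee with 30 years and the same high-3:
    print(csrs_basic_annuity(90_000, 30))       # 50625.0

Even in this toy form, the branching hints at why a process governed by over 500 procedures, laws, and regulations and spread across more than 80 interfacing systems resists straightforward automation.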
Specifically, OPM noted that:
- current processes are paper-based and manually intensive, resulting in a higher number of errors and delays in providing benefit payments;
- the high costs, limited capabilities, and other problems with the existing information systems and processes pose increasing risks to the accuracy of benefit payments;
- current manual capabilities restrict customer service;
- federal employees have limited access to retirement records, making planning for retirement difficult; and
- attracting qualified personnel to operate and maintain the antiquated retirement systems, which have about 3 million lines of custom programming, is challenging.
Recognizing the need to modernize its retirement processing, in the late 1980s OPM began initiatives that called for automating its antiquated paper-based processes. Initial modernization visions called for developing an integrated system and automated processes to provide prompt and complete benefit payments. However, despite attempts over more than two decades, the agency has not yet succeeded in achieving the modernized retirement system that it envisioned. In early 1987, OPM began a program called the FERS Automated Processing System (FAPS). However, after 8 years of planning, the agency decided to reevaluate the program, and the Office of Management and Budget requested an independent review, which identified various management weaknesses. The independent review suggested areas for improvement and recommended terminating the program if immediate action was not taken. In mid-1996, OPM terminated the program. In 1997, OPM began planning a second modernization initiative, called the Retirement Systems Modernization (RSM) program. The agency originally intended to structure the program as an acquisition of commercially available hardware and software that would be modified in-house to meet its needs. From 1997 to 2001, OPM developed plans and analyses and began developing business and security requirements for the program. However, in June 2001, it decided to change the direction of the retirement modernization initiative. In late 2001, retaining the name RSM, the agency embarked upon its third initiative to modernize the retirement process and examined the possibility of privately sourced technologies and tools. Toward this end, the agency determined that contracting was a viable alternative and, in 2006, awarded three contracts for the automation of the retirement process, to include the conversion of paper records to electronic files and consulting services to redesign its retirement operations. In February 2008, OPM renamed the program RetireEZ and deployed an automated retirement processing system. However, by May 2008 the agency determined that the system was not working as expected and suspended system operation. In October 2008, after 5 months of attempting to address quality issues, the agency terminated the contract for the system. In November 2008, OPM began restructuring the program and reported that its efforts to modernize retirement processing would continue. However, after several years of trying to revitalize the program, the agency terminated retirement system modernization in February 2011. OPM's efforts to modernize its retirement system have been hindered by weaknesses in several key project management disciplines. Our experience with major modernization initiatives has shown that having sound IT management capabilities is essential to achieving successful outcomes.
Among others, these capabilities include project management, risk management, organizational change management, system testing, cost estimating, progress reporting, planning, and oversight. However, we found that many of the capabilities in these areas were not sufficiently developed. For example, in reporting on RSM in February 2005, we noted weaknesses in key management capabilities, such as project management, risk management, and organizational change management. Project management is the process for planning and managing all project-related activities, including defining how project components are interrelated. Effective project management allows the performance, cost, and schedule of the overall project to be measured and controlled in comparison to planned objectives. Although OPM had defined major retirement modernization project components, it had not defined the dependencies among them. Specifically, the agency had not identified critical tasks and their impact on the completion of other tasks. By not identifying critical dependencies among retirement modernization components, OPM increased the risk that unforeseen delays in one activity could hinder progress in other activities. Risk management is the process for identifying potential problems before they occur. Risks should be identified as early as possible, analyzed, mitigated, and tracked to closure. OPM officials acknowledged that they did not have a process for identifying and tracking retirement modernization project risks and mitigation strategies on a regular basis but stated that the agency’s project management consultant would assist it in implementing a risk management process. Without such a process, OPM did not have a mechanism to address potential problems that could adversely impact the cost, schedule, and quality of the retirement modernization project. Organizational change management is the process of preparing users for the changes to how their work will be performed as a result of a new system implementation. Effective organizational change management includes plans to prepare users for impacts the new system might have on their roles and responsibilities, and a process to manage those changes. Although OPM officials stated that change management posed a substantial challenge to the success of retirement modernization, they had not developed a detailed plan to help users transition to different job responsibilities. Without having and implementing such a plan, confusion about user roles and responsibilities could have hindered effective implementation of new retirement systems. We recommended that the Director of OPM ensure that the retirement modernization program office expeditiously establish processes for effective project management, risk management, and organizational change management. In response, the agency initiated steps toward establishing management processes for retirement modernization and demonstrated activities to address our recommendations. We again reported on OPM’s retirement modernization in January 2008, as the agency was on the verge of deploying a new automated retirement processing system. We noted weaknesses in additional key management capabilities, including system testing, cost estimating, and progress reporting. Effective testing is an essential activity of any project that includes system development. Generally, the purpose of testing is to identify defects or problems in meeting defined system requirements or satisfying system user needs. 
At the time of our review, 1 month before OPM planned to deploy a major system component, test results showed that the component had not performed as intended. We warned that until actual test results indicated improvement in the system, OPM risked deploying technology that would not accurately calculate retirement benefits. Although the agency planned to perform additional tests to verify that the system would work as intended, the schedule for conducting these tests was compressed from 5 months to 2-1/2 months, with several tests to be performed concurrently rather than in sequence. The agency identified a lack of testing resources, including the availability of subject matter experts, and the need for further system development as contributing to the delay of planned tests and the need for concurrent testing. The high degree of concurrent testing that OPM planned in order to meet its February 2008 deployment schedule increased the risk that the agency would not have the resources or time to verify that the planned system worked as expected. Cost estimating is the identification of individual project cost elements, using established methods and valid data to estimate future costs. A reliable cost estimate is important for developing a project budget and provides a sound basis for measuring performance, including comparing the actual and planned costs of project activities. Although OPM developed a retirement modernization cost estimate, the estimate was not supported by the documentation that is fundamental to a reliable cost estimate. Without a reliable cost estimate, OPM did not have a sound basis for formulating retirement modernization budgets or for developing the cost baseline that is necessary for measuring and predicting project performance. Earned value management (EVM) is a tool for measuring program progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Fundamental to reliable EVM is the development of a baseline against which variances are calculated. OPM used EVM to measure and report monthly performance of the retirement modernization system. The reported results provided a favorable view of project performance over time because the variances indicated the project was progressing almost exactly as planned. However, this view of project performance was not reliable because the baseline on which it was based did not reflect the full scope of the project, had not been validated, and was unstable (i.e., subject to frequent changes). This EVM approach in effect ensured that material variances from planned project performance would not be identified and that the state of the project would not be reliably reported. We recommended that the Director of OPM address these deficiencies by, among other things, conducting effective system tests prior to system deployment and improving program cost estimation and progress reporting. In response to our report, OPM concurred with our recommendations and stated that it would take steps to address the weaknesses we identified. Nevertheless, OPM deployed a limited initial version of the modernized retirement system in February 2008. After unsuccessful efforts to address system quality issues, the agency suspended system operation, terminated the system contract, and began restructuring the modernization effort.
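The variance arithmetic at the core of EVM is straightforward; the difficulty in OPM's case lay in the baseline feeding it. A minimal Python sketch with hypothetical figures:

    # Standard EVM variance and index calculations (hypothetical dollar
    # figures; not data from OPM's retirement modernization program).
    def evm_metrics(planned_value: float, earned_value: float, actual_cost: float):
        return {
            "schedule_variance": earned_value - planned_value,           # SV = EV - PV
            "cost_variance": earned_value - actual_cost,                 # CV = EV - AC
            "schedule_performance_index": earned_value / planned_value,  # SPI = EV / PV
            "cost_performance_index": earned_value / actual_cost,        # CPI = EV / AC
        }

    # A project scheduled to complete $500,000 of work to date (PV) that has
    # completed $400,000 of work (EV) at a cost of $450,000 (AC) is behind
    # schedule (SV = -100,000) and over cost (CV = -50,000).
    print(evm_metrics(500_000, 400_000, 450_000))

The calculation is only as good as its planned-value baseline: if the baseline omits part of the project's scope, is unvalidated, or is repeatedly revised to track actual performance, the variances hover near zero regardless of the project's true state, which is the pattern described above.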
In April 2009, we again reported on OPM’s retirement modernization, noting that the agency remained far from achieving the modernized retirement processing capabilities that it had planned. Specifically, we noted that significant weaknesses continued to exist in three key management areas that we had previously identified—cost estimating, progress reporting, and testing—while also noting two additional weaknesses related to planning and oversight. Despite agreeing with our January 2008 recommendation that OPM develop a revised retirement modernization cost estimate, the agency had not completed initial steps for developing a new cost estimate by the time we reported again in April 2009. At that time, we reported that the agency had not yet fully defined the estimate’s purpose, developed an estimating plan, or defined the project’s characteristics. By not completing these steps, OPM increased the risk that it would produce an unreliable estimate and not have a sound basis for measuring project performance and formulating retirement modernization budgets. Although it agreed with our January 2008 recommendation to establish a basis for effective EVM, OPM had not completed key steps as of the time of our April 2009 report. Specifically, despite planning to begin reporting on the retirement project’s progress using EVM, the agency was not prepared to do so because initial steps, including the development of a reliable cost estimate and the validation of a baseline, had not been completed. Engaging in EVM reporting without first performing these fundamental steps could have again rendered the agency’s assessments unreliable. As previously discussed, effective testing is an essential component of any project that includes developing systems. To be effectively managed, testing should be planned and conducted in a structured and disciplined fashion. Beginning the test planning process in the early stages of a project life cycle can reduce rework later. Early test planning in coordination with requirements development can provide major benefits. For example, planning for test activities during the development of requirements may reduce the number of defects identified later and the costs related to requirements rework or change requests. OPM’s need to compress its testing schedule and conduct tests concurrently, as we reported in January 2008, illustrates the importance of planning test activities early in a project’s life cycle. However, at the time of our April 2009 report, the agency had not begun to plan test activities in coordination with developing its requirements for the system it was then planning. Consequently, OPM increased the risk that it would again deploy a system that did not satisfy user expectations and meet requirements. Project management principles and effective practices emphasize the importance of having a plan that, among other things, incorporates all the critical areas of system development and serves as a means of determining what needs to be done, by whom, and when. Although OPM had developed a variety of informal documents and briefing slides that described retirement modernization activities, the agency did not have a complete plan that described how the program would proceed in the wake of its decision to terminate the system contract. As a result, we concluded that until the agency completed and used a plan that could guide its efforts, it would not be properly positioned to move forward with its restructured retirement modernization initiative.
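The dependency problem noted in our 2005 report, components defined but the critical links among them left unidentified, can likewise be illustrated. Below is a minimal critical-path sketch in Python; the task names, durations, and dependencies are hypothetical and do not reflect OPM's actual schedule.

    # Hypothetical tasks: duration in weeks and predecessor tasks.
    tasks = {
        "define_requirements": (8, []),
        "convert_records": (20, ["define_requirements"]),
        "build_calculator": (16, ["define_requirements"]),
        "system_test": (10, ["convert_records", "build_calculator"]),
        "deploy": (2, ["system_test"]),
    }

    def critical_tasks(tasks):
        # Forward pass: earliest finish of each task, computed recursively.
        earliest = {}
        def ef(t):
            if t not in earliest:
                duration, preds = tasks[t]
                earliest[t] = duration + max((ef(p) for p in preds), default=0)
            return earliest[t]
        project_finish = max(ef(t) for t in tasks)
        # Backward pass: latest finish each task can have without delaying
        # the project; visit successors before their predecessors.
        latest = {t: project_finish for t in tasks}
        for t in sorted(tasks, key=ef, reverse=True):
            duration, preds = tasks[t]
            for p in preds:
                latest[p] = min(latest[p], latest[t] - duration)
        # Zero slack means any delay in the task delays the whole project.
        return [t for t in tasks if latest[t] == earliest[t]]

    # Prints ['define_requirements', 'convert_records', 'system_test', 'deploy']:
    # a slip in record conversion delays deployment, while the benefit
    # calculator carries 4 weeks of slack.
    print(critical_tasks(tasks))

Identifying which tasks carry zero slack is precisely the information a project lacks when, as we found at OPM, unforeseen delays in one activity can silently hinder progress in others.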
Office of Management and Budget and GAO guidance calls for agencies to ensure effective oversight of IT projects throughout all life-cycle phases. Critical to effective oversight are investment management boards made up of key executives who regularly track the progress of IT projects such as system acquisitions or modernizations. OPM’s Investment Review Board was established to ensure that major investments are on track by reviewing their progress and determining appropriate actions when investments encounter challenges. Despite meeting regularly and being provided with information that indicated problems with the retirement modernization, the board did not ensure that retirement modernization investments were on track, nor did it determine appropriate actions for course correction when needed. For example, from January 2007 to August 2008, the board met and was presented with reports that described problems the retirement modernization program was facing, such as the lack of an integrated master schedule and earned value data that did not reflect the “reality or current status” of the program. However, meeting minutes indicated that no discussion or action was taken to address these problems. According to a member of the board, OPM had not established guidance on how the board was to communicate recommendations and needed corrective actions for the investments it was responsible for overseeing. Without a fully functioning oversight body, OPM could not monitor the retirement modernization and make the course corrections that effective boards are intended to provide. Our April 2009 report made new recommendations that OPM address the weaknesses in the retirement modernization project that we identified. Although the agency began taking steps to address them, the recommendations were overtaken by the agency’s decision in February 2011 to terminate the retirement modernization project. In November 2011, agency officials, including the Chief Information Officer, Chief Operating Officer, and Associate Director for Retirement Services, told us that OPM does not plan to initiate another large-scale effort to modernize the retirement process. Rather, the officials said the agency intends to take targeted steps to improve retirement processing that will include hiring and training approximately 100 new staff to help improve the timeliness of processing retirement applications and responding to retirement claims; demonstrating the capability to automate retirement applications; working with other agencies to improve the quality of electronic data they transmit to OPM for use in retirement processing; and improving OPM’s retirement services website to allow enhanced communication. Under this approach, OPM does not currently have plans to modernize the existing, antiquated retirement systems that the agency has long identified as necessary to accomplishing retirement modernization and improving the timeliness and accuracy of benefit payments. In summary, despite OPM’s recognition of the need to improve the timeliness and accuracy of retirement processing, the agency has thus far been unsuccessful in several attempts to develop the capabilities it has long sought. For over two decades, the agency’s retirement modernization efforts have been plagued by weaknesses in management capabilities that are critical to the success of such endeavors.
Among the management disciplines the agency has struggled with are project management, risk management, organizational change management, cost estimating, system testing, progress reporting, planning, and oversight. Even though the agency is now considering only modest efforts to improve retirement processing, the development and institutionalization of these management capabilities is key to the success of any future retirement modernization or other IT initiative that OPM undertakes. Mr. Chairman, this concludes my statement today. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time. If you have any questions concerning this statement, please contact Valerie C. Melvin, Director, Information Management and Technology Resources Issues, at (202) 512-6304 or melvinv@gao.gov. Other individuals who made key contributions include Mark T. Bird, Assistant Director; Larry E. Crosland; Lee A. McCracken; Teresa M. Neven; and Charles E. Youman.
The Office of Personnel Management (OPM) is the central human resources agency for the federal government and, as such, is tasked with ensuring the government has an effective civilian workforce. As part of its mission, OPM defines recruiting and hiring processes and procedures; provides federal employees with various benefits, such as health benefits; and administers the retirement program for federal employees. The use of information technology (IT) is crucial in helping OPM to carry out its responsibilities, and in fiscal year 2011 the agency invested $79 million in IT systems and services. For over 2 decades, OPM has been attempting to modernize its federal employee retirement process by automating paper-based processes and replacing antiquated information systems. However, these efforts have been unsuccessful, and OPM canceled its most recent retirement modernization effort in February 2011. GAO was asked to provide a statement summarizing its work on challenges OPM has faced in managing its efforts to modernize federal employee retirement processing. To do this, GAO relied on previously published work as well as a limited review of more recent documentation on OPM's retirement modernization activities. In a series of reviews, GAO found that OPM's efforts to modernize its retirement system have been hindered by weaknesses in several important management disciplines that are essential to successful IT modernization efforts. For example, in 2005, GAO made recommendations to address weaknesses in the following areas: (1) Project management. While OPM had defined major retirement modernization components, it had not identified the dependencies among them, increasing the risk that delays in one activity could hinder progress in others. (2) Risk management. OPM did not have a process for identifying and tracking project risks and mitigation strategies on a regular basis. This meant that OPM lacked a mechanism to address potential problems that could adversely impact the retirement modernization effort's cost, schedule, and quality. (3) Organizational change management. OPM had not developed a detailed plan to help users transition to different job responsibilities in response to the deployment of the new system, which could lead to confusion about roles and responsibilities, hindering effective system implementation. In 2008, as OPM was on the verge of deploying its automated retirement processing system, GAO reported deficiencies and made recommendations to improve key management capabilities in additional areas: (1) Testing. Test results 1 month prior to the deployment of a major system component showed that it had not performed as intended. The defects, along with a compressed testing schedule, increased the risk that the deployed system would not work as intended. (2) Cost estimating. The cost estimate OPM had developed was not supported by documentation necessary to its reliability. This meant that OPM did not have a sound basis for formulating budgets or developing a cost baseline for the program. (3) Earned value management, which is a tool for measuring program progress. The baseline against which OPM was measuring program progress did not reflect the full scope of the project, meaning that variances from planned performance would not be identified. 
In 2009, GAO reported that OPM continued to face challenges in cost estimating, earned value management, and testing and made recommendations to address these deficiencies as well as additional weaknesses in planning and overseeing the retirement modernization effort. Although OPM agreed with GAO's recommendations and had begun to address them, the agency terminated the retirement modernization effort in February 2011. The agency has since stated that it does not plan to undertake another large-scale retirement modernization, but instead plans targeted steps to improve retirement processing, such as hiring new staff and working to improve data quality. Nonetheless, the development and institutionalization of the capabilities GAO recommended to address these weaknesses remains key to the success of any future IT initiatives that OPM undertakes. GAO is not making new recommendations at this time. As noted, GAO has previously made numerous recommendations to address the challenges OPM has faced in carrying out its retirement modernization efforts.
The procedures the Bureau used during the 1990 Census to count people without conventional housing had limitations that resulted in incomplete data. To address these limitations and help improve the quality of the data, the Bureau used a procedure for the 2000 Census called Service-Based Enumeration that attempted to count people where they receive services such as emergency shelters, soup kitchens, and regularly scheduled mobile food vans. Service-Based Enumeration also counted people in targeted nonsheltered outdoor locations such as encampments beneath bridges. The operation occurred from March 27 through March 29, 2000. According to Bureau officials, Service-Based Enumeration was not designed, and was never intended, to provide a specific count of homeless persons. Instead, the operation was part of a larger effort to count people without conventional housing, including people in “institutional group quarters” such as correctional facilities, nursing homes, and mental hospitals, and “non-institutional group quarters” such as college dormitories, military quarters, and group homes. Service-Based Enumeration counted people in specific categories of noninstitutional group quarters. To help ensure a complete count of people without conventional housing, the Bureau partnered with local governments and community advocacy groups to obtain lists of service locations and to assist with the enumeration. In some cases, the Bureau hired clients of the advocacy groups and other people trusted by the homeless to conduct Service-Based Enumeration. For example, in Atlanta, an advocacy group for homeless veterans helped the Bureau employ homeless veterans to improve the count of this population. Local governments helped the Bureau as well, often investing considerable resources. For example, Los Angeles paid to keep its city-run shelters open on the night they were enumerated so that people using their services could be counted. To address your concerns about the Bureau’s dissemination of data on persons without conventional housing, we agreed to examine (1) the Bureau’s plans for reporting the results of Service-Based Enumeration and its reasons for changing those plans and (2) the Bureau’s protocols for releasing data. To accomplish these objectives, we interviewed key Bureau officials and reviewed relevant Bureau documents and data such as operational plans, decision memorandums, and the Bureau’s partnership program evaluation. In order to obtain the perspective of data users, partners, and stakeholders, we conducted in-person and telephone interviews with homeless advocates, local government officials, and representatives of public service agencies in New York City, Los Angeles, Cleveland, Atlanta, and Washington, D.C. These cities had large numbers of people without conventional housing, and they were actively involved with the Bureau during the 2000 Census. The organizations we contacted also provided relevant documentation, such as comprehensive file documents relating to partnership activities. In addition to the above locations, we did our audit work at Bureau headquarters in Suitland, Maryland. Our audit work was conducted from April 2002 through September 2002 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Commerce. On November 21, 2002, the Secretary forwarded the Bureau’s written comments on the draft (see app. I). We address these comments at the end of this report.
Under the Bureau’s original plan for releasing Service-Based Enumeration data in Summary File-1 (SF-1), the emergency and transitional shelter count was one of several categories of noninstitutional group quarters data that were to be reported separately. Other people counted in the Service-Based Enumeration, including people counted at targeted nonsheltered outdoor locations, soup kitchens, and regularly scheduled mobile food vans, were to be combined and reported under the category “other non-institutional group quarters.” This category also included residential care facilities providing protective oversight, shelters against domestic violence, staff dormitories for nurses and interns at military and general hospitals, and living quarters for victims of natural disasters. This decision was documented in an April 1999 internal memorandum from the Bureau’s Assistant Division Chief for Special Population Statistics to the Assistant Division Chief for Census Programs. The Service-Based Enumeration operation took place a year later, in March 2000. The April 1999 plan was in large part a reaction to the challenges the Bureau faced counting the emergency shelter and street population during the 1990 Census. Although the Bureau disseminated separate counts of people found at emergency shelters, preidentified street locations, and similar sites, the counts proved to be incomplete. Moreover, the Bureau stated in its October 2001 report that despite its warnings to the contrary, the data were sometimes misinterpreted as a “homeless” count. The October report does not offer an example of this, but the misinterpretation clearly played a role in a lawsuit against the Bureau. As a result, when designing the 2000 census, the Bureau attempted to both improve the count and take precautions to ensure that the Service-Based Enumeration count would not be misconstrued as a count of the homeless. The Bureau’s data dissemination plans took into account the recommendations of the Commerce Secretary’s 2000 Census Advisory Committee, a panel that included representatives of advocacy and other groups (including representatives from organizations that represent local governments) that met periodically to review the Bureau’s plans. The homeless population was represented by the National Coalition for the Homeless—an advocacy group that coordinates a network of 300 state and local housing and homeless organizations. In its January 1999 final report, the Census 2000 Advisory Committee recommended that special attention be paid to tabulating the results of Service-Based Enumeration and targeted outdoor enumerations so that they could not be aggregated and used as a homeless count. In January 2001, 5 months before the SF-1 release, the Bureau reversed its April 1999 decision to release emergency and transitional shelter data separately because of “data quality concerns.” Instead, as shown in figure 1, the Bureau planned to combine the emergency and transitional shelter data with the “other non-institutional group quarters.” This category contained data on a variety of living arrangements including facilities for natural disaster victims. The Bureau’s decision was contained in an internal Bureau memorandum from the Chief of the Population Division to the Chief of the Decennial Systems and Contracts Management Office.
Bureau officials told us that the decision to exclude a separate emergency and transitional shelter count in SF-1 was made between December 2000 and January 2001 by the Director of the Decennial Census with input from the Associate Director for Decennial Census, the Population Division, the Associate Director for Demographic Programs, the Decennial Management Division, and the Decennial Statistical Studies Division. According to Bureau officials, their concerns focused on the accuracy of a new statistical procedure called “multiplicity estimation” that adjusted the number counted to better reflect the number of actual shelter users. Because Service-Based Enumeration only counted people who were at these facilities on the day of enumeration, the Bureau intended to use multiplicity estimation to calculate the number of people who used these facilities but were not present during Service-Based Enumeration. The multiplicity estimation procedure was based on information from those who were counted and on the number of times they used the service facilities in the prior week. An estimate of people not counted on the day of enumeration was added to the count of people who were. According to the Bureau, the multiplicity estimates appeared to test well during the 1998 dress rehearsal for the 2000 Census, possibly because the three rehearsal sites did not offer large enough sample sizes of the appropriate populations to adequately test this procedure. However, during the 2000 Census the Bureau found that a census question pertaining to facility usage upon which the multiplicity estimates were based had a low response rate. Moreover, the Bureau found that respondents, particularly in shelters, did not answer the question accurately. Due to data quality concerns, the Bureau decided not to use multiplicity estimation to adjust the data and consequently decided not to report the data separately. Bureau officials said they did not announce the change in plans because they were still evaluating the problems with the data. It was not until June 2000 that the Bureau began recalculating the data and making a final decision on which categories to aggregate. Ultimately, the Bureau did not report any of the Service-Based Enumeration data separately in SF-1. Emergency and transitional shelter data were the only data that were to be released in SF-1 under separate reporting categories that the Bureau decided to combine with another category. The release of the SF-1 data in June 2001 produced public discussion in the press, among census partners, and in the Congress about the Bureau’s decision to not separately release Service-Based Enumeration data. In a briefing for staff of the House Committee on Governmental Affairs, the Associate Director of the Decennial Census announced that the Bureau planned to produce a separate report on the emergency and transitional shelter data. In October 2001, the Bureau issued a special report, entitled Emergency and Transitional Shelter Population: 2000. This report separately identified emergency and transitional shelter data for various levels of geography down to the census tract level with 100 or more people in emergency and transitional shelters. The report did not include data for the populations in targeted nonsheltered outdoor locations, soup kitchens, regularly scheduled mobile food vans, and shelters for domestic violence. The 17-page report contains an extensive discussion on the limitations of the data.
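The general idea behind the multiplicity estimator described above can be shown with a small sketch. The following Python example uses a textbook weighting scheme and invented usage data; it is not the Bureau's actual procedure, only an illustration of why the facility-usage question mattered so much.

    # Illustrative multiplicity-style estimate; the weighting scheme and
    # the data are invented, not the Bureau's actual method.
    # For each person counted at a shelter on enumeration day, the number
    # of days (of the prior 7) that the person reported using the shelter.
    days_used_last_week = [7, 7, 3, 1, 5, 7, 2]

    def multiplicity_estimate(days_used, period=7):
        # Someone who used the shelter d of the last `period` days had
        # roughly a d/period chance of being present on enumeration day,
        # so each counted person stands in for about period/d users.
        return sum(period / d for d in days_used if d > 0)

    # The 7 people counted represent an estimated 17.2 users in all; the
    # difference is the estimate of users absent on enumeration day,
    # which is added to the count of those who were present.
    print(round(multiplicity_estimate(days_used_last_week), 1))

A scheme like this collapses if the usage question goes unanswered or is answered inaccurately, which is exactly the failure the Bureau encountered in 2000.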
Among the limitations discussed in the October 2001 report, the Bureau noted that the data in the report should not be construed as a count of people without conventional housing. Moreover, the emergency and transitional shelter data at the census tract level are not in the hard copy, but rather in the Internet version of the report. The Bureau stated that all Census 2000 data at the tract level are available on the Internet and are not available in printed reports. The October report contains most of the same data that were to be released under the April 1999 dissemination plan for SF-1. The Bureau asserted that the data quality concerns with the emergency and transitional shelter data (cited when it changed the plan to release these data in SF-1) required that the data be presented in a manner that allowed the Bureau to clearly outline the data’s limitations. The October 2001 report contained an extended discussion of these limitations. The October 2001 report also identified reasons the Bureau did not (and never planned to) separately release data on people counted at targeted nonsheltered outdoor locations (TNSOL), soup kitchens, regularly scheduled mobile food vans, and shelters for victims of domestic violence, including the following. People without conventional housing who were at outside locations other than the targeted nonsheltered outdoor locations identified for the census were not included in the TNSOL operation. For the purposes of the TNSOL operation, the definition of “outdoor” excluded both mobile and transient locations used by people experiencing homelessness as well as abandoned buildings. The option was given to the individuals found at soup kitchens and regularly scheduled mobile food vans to select “usual home elsewhere.” For example, if an individual enumerated at a soup kitchen listed a usual home elsewhere, then that person was tabulated at their usual residence and not at the service location. Therefore, the data on this population would not reflect a true count of the individuals using these services. Prior to publicly releasing the October report, the Bureau asked two representatives from the National Coalition for the Homeless to review a draft of the portion of the report that described the limitations of the data. The National Coalition for the Homeless commented extensively on the section containing the caveats and limitations in order to strengthen the report. A member of the Board of Directors for the National Coalition for the Homeless told us that he provided this feedback both as an academician and a stakeholder. Bureau officials stated that because of its position on the Bureau’s Census Advisory Committee, the National Coalition for the Homeless was the only advocacy group that reviewed any portion of the October 2001 report prior to its publication. The controversy surrounding the release of the combined Service-Based Enumeration data highlights the challenges the Bureau faced in 2000 trying to meet the needs of various data users and the work the Bureau still needs to do when planning for the 2010 Census to better reconcile those needs. For example, several organizations we contacted favored the separate release of the Service-Based Enumeration data categories. Indeed, local government officials we talked to in New York City believed that the data would help with grant applications, projections about future service needs, and determining their success in getting people off the streets and into shelters.
The Executive Director of the Northeast Ohio Coalition for the Homeless stated that the city of Cleveland does not do its own count of this population and, therefore, the Bureau numbers are the only ones available on this segment of the population. Los Angeles city officials wanted the Service-Based Enumeration data so they could better target their services and, like Cleveland, Los Angeles did not have its own data. Several of these entities stated that the potential misuse of data was not a valid reason for not separately releasing data. In addition, the majority of the organizations we contacted partnered with the Bureau anticipating that they would be able to use the Service-Based Enumeration data to evaluate whether improvements were made in enumerating local populations without conventional housing in 2000 compared to 1990. The Assistant City Attorney of Los Angeles estimated that Los Angeles spent about $300,000 on the effort to improve the count of Los Angeles’s people without conventional housing. For example, as part of an extensive effort to help the Bureau develop a list of targeted nonsheltered outdoor locations, the city provided senior Bureau staff with a helicopter tour over some outdoor locations where people without conventional housing lived. The Assistant City Attorney of Los Angeles stated that she believed the city would get the targeted nonsheltered outdoor locations data that they helped collect and wanted to review. In addition, because of the Bureau’s focus on counting people at shelters, the city kept shelters open on the night of the enumeration at its own expense even though shelters in Los Angeles typically do not have many people during warm weather. Los Angeles expected to have detailed data to use to evaluate the effectiveness of its resource allocation. However, the National Coalition for the Homeless and other advocates of the homeless opposed the separate release of any of the Service-Based Enumeration data. They were concerned that these data could be misused as a count of the homeless population and lead to flawed decision-making by policymakers. Ultimately, the Bureau left a number of data users unsatisfied. Those who wanted the Service-Based Enumeration categories released separately did not feel the Bureau met their expectations with the data released in SF-1 or with the release of the October report. Users who opposed the separate release of the data and were pleased that SF-1 combined the Service-Based Enumeration components with other data were displeased that the October 2001 report was released. The difficulties the Bureau experienced trying to reconcile the competing needs and interests of data users illustrate the importance of effective communication between the Bureau and its key data users and partners to ensure no expectation gaps develop. More than just a good business practice, federal internal control standards require agencies to have effective external communications with groups that can have a serious impact on programs, projects, operations, and other activities. However, our conversations with several Bureau partners and our review of Bureau documents suggest that communications were sometimes vague and insufficient. For example, although the April 1999 memorandum that outlined the Bureau’s initial data dissemination plans was written a year before the 2000 Census, this information may not have been effectively communicated to the Bureau’s partners.
Indeed, at a Capitol Hill briefing on this topic in June 2001, Bureau officials themselves acknowledged that they did not do a good job of communicating on this issue. Some of the partners we spoke to indicated that had they known earlier about the Bureau’s plans to limit the release of Service-Based Enumeration data they might have focused their resources on different census operations. Further, our review of Bureau documents indicated that the information on the “official plan” for the release of the different Service-Based Enumeration categories of data was limited and inconsistent. Some partners stated that they did not know that the Bureau never intended to report the targeted nonsheltered outdoor location data. Although the Bureau made numerous presentations on Service-Based Enumeration that emphasized there would be no count of the homeless, the Bureau provided little detail on how components of Service-Based Enumeration would actually be presented. In the absence of clear communication from the Bureau, partners developed their own expectations of what would be released. Several of the local officials and advocates that we spoke to expected that the data would be released in the same detail as it was in 1990, because they were not told otherwise. For example, a Los Angeles government official said that the Bureau stated it would not provide a homeless count in 1990, but it still released the street count data separately. By focusing resources on counting specific categories of the population, the Bureau may have created expectations that there would be a count of that population. A cause of the Bureau’s shifting position on reporting the components of Service-Based Enumeration appears to be its lack of documented, clear, transparent, and consistently applied guidelines governing the release of data from the 2000 Census. Except for some guidance aimed at protecting the confidentiality of census records, the Bureau had few written guidelines on the level of quality needed to release data to the public. Had these guidelines been in place during the decennial census, they could have informed the Bureau’s decision on whether to release the Service-Based Enumeration data and how to characterize these data, and they could have helped the Bureau defend the decision after it was made. Such guidelines could also provide a basis ahead of time for expectations about the conditions under which data will or will not be released. Although Bureau officials emphasized that the Bureau has a long tradition of high standards and procedures that yield quality data (to its credit, the Bureau’s quality assurance practices identified the problem with the multiplicity estimator), the officials acknowledged that these standards were primarily part of the agency’s institutional knowledge. The written guidance that did exist appeared to be vague and insufficient for making consistent decisions on the quality thresholds needed for releasing data to the public, and the circumstances under which it might be appropriate to suppress certain data. According to the Bureau’s Associate Director for Methodology and Standards, a technical paper issued in 1974 and revised in 1987 contained the Bureau’s only written guidelines for discussion and presentation of errors in data. This paper noted that “[e]stimates for individual cells of a published table should not be suppressed solely because they are subject to large sampling errors or large nonsampling variances, provided users are given adequate caution of the lack of reliability of the data.
On the other hand, data known to have very serious bias may be suppressed.” In 2000, the Bureau initiated a new quality assurance program to document Bureau-wide protocols designed to ensure the quality of data collected and disseminated by the Bureau. The Bureau’s Methodology and Standards Council is charged with setting statistical and survey quality standards and guidelines for Bureau surveys and censuses. In support of this role, the council has established a quality framework in which the demographic, economic, and decennial areas can share and support common principles, standards, and guidelines. The quality framework covers eight unique areas, one of which is dissemination. Because this Bureau program is in its initial stages, we could not evaluate it. However, Bureau officials believe that the program is a significant first step in addressing the lack of agencywide written guidelines for releasing data. The initiative appears to be consistent with Office of Management and Budget guidelines issued in February 2002 requiring federal agencies to issue their own guidance for ensuring and maximizing the quality, objectivity, utility, and integrity of information disseminated by the agency. As the Bureau develops its guidelines, it will be important that they be well documented, transparent, clearly defined, and consistently applied. Although Service-Based Enumeration was designed to address the challenges the Bureau encountered during the 1990 Census in obtaining a complete count of people without conventional housing, the Bureau’s experience during the 2000 Census suggests that tallying this population group remains problematic. Moreover, the Bureau’s difficulties were compounded by its shifting position on how to report the data once they were collected. A number of government, community, and advocacy organizations helped the Bureau enumerate this population group. However, the Bureau, by first planning to release the data one way, then changing the decision, and ultimately releasing the data anyway—all for reasons that were not clearly articulated to the Bureau’s stakeholders—raised questions about the Bureau’s decision-making on data quality issues. As noted at the beginning of this report, related questions have also been raised about how the Bureau collected and reported data on Hispanic subgroups. To the extent similar incidents occur in the future, they could undermine public confidence in the accuracy and credibility of Bureau data. Thus, as the Bureau plans for the 2010 Census, it will be important for it to refine its methods for enumerating people living in unconventional housing and reporting the resulting data, in part by properly testing and evaluating those methods. As noted earlier, the Bureau could not properly test a key statistical technique during the census dress rehearsal because the sample size was too small. Moreover, while addressing the competing needs and desires of data users will likely remain a considerable challenge, it will be important for the Bureau to more effectively articulate its plans to avoid the expectation gaps that occurred during 2000. The Bureau’s plans for collecting data on persons without conventional housing need to specify how the Bureau plans to separately report these data.
Bureau-wide guidelines on the level of quality needed to release data to the public, on how and when to document data limitations, and on the circumstances under which it is acceptable to suppress data, could help the Bureau be more accountable and consistent in its dealings with data users and stakeholders, and help ensure that the Bureau’s decisions both are, and appear to be, totally objective. To ensure that the 2010 Census will provide public data users with more complete, accurate, and useful information on the segment of the population without conventional housing, we recommend that the Secretary of Commerce direct the Director of the Bureau of the Census to do the following.

1. Ensure that all procedures for enumerating and estimating segments of the population without conventional housing are properly tested and evaluated under conditions as similar to the census as possible.

2. Develop agencywide guidelines for Bureau decisions on the level of quality needed to release data to the public, how to characterize any limitations in the data, and when it is acceptable to suppress the data for reasons other than protecting the confidentiality of respondents. Ensure that these guidelines are documented, transparent, clearly defined, and consistently applied.

3. Ensure that the Bureau’s plans for releasing data are clearly and consistently communicated with the public.

The Secretary of Commerce forwarded written comments from the Bureau of the Census on a draft of this report (see app. I). The Bureau agreed with each of our recommendations and, as indicated in the letter, is taking steps to implement them. However, it expressed several general concerns about our findings. The Bureau’s principal concerns and our response are presented below. The Bureau also suggested minor wording changes to provide additional context and clarification. We accepted the Bureau’s suggestions and made changes to the text as appropriate. The Bureau took exception to our findings concerning the adequacy of its data quality guidelines, noting that the Bureau’s decisions regarding the release and characterization of emergency and transitional shelter data were based on established guidelines for data quality. However, the Bureau did not cite any written guidelines to support its position. As noted in our report, Bureau officials, including the Associate Director for Methodology and Standards, told us that the Bureau had few written guidelines, standards, or procedures related to the quality of data released to the public. In this report we acknowledge the Bureau’s tradition of high standards and procedures that yield quality data. However, according to the Bureau, these standards are generally undocumented and part of the agency’s institutional knowledge. To provide a basis for consistent decision-making and clear communication within the Bureau and to the public, guidelines on the quality of data released to the public must be fully documented, transparent, clearly defined, and consistently applied. Additionally, the Bureau said that when data do not meet an acceptable level of quality, it considers various options for modifying its dissemination plans. The Bureau’s decision to delay the release of the emergency and transitional shelter data may have been entirely appropriate.
Our concern is not that the Bureau changed its plans, but that it could not provide us its guidelines for determining an acceptable level of quality or clearly indicate how it determined that the data did not meet minimal quality standards for release in SF-1. The Bureau also commented that its decisions regarding the distribution of data from SF-1 were well publicized and that the only change in Bureau plans for the release of Service-Based Enumeration data was the decision to delay release of the emergency and transitional shelter data. This report focused on the changing plans for the release of the emergency and transitional shelter data and noted that the Bureau never intended to release any other data from the Service-Based Enumeration. However, we found that the Bureau did not effectively communicate its decisions with its partners or the public. Decisions on the release of the emergency and transitional shelter data were contained in internal decision memoranda. We found that these decisions were not always reflected in new releases of the SF-1 documentation. Although Bureau officials told us that they always intended to produce a separate report on emergency and transitional shelter data, they did not make this intention public when the SF-1 data were released. Some stakeholders did not realize that the Bureau was not releasing emergency and transitional shelter data with SF-1 until they examined the SF-1 data. As we stated in our report, these communication problems can undermine stakeholder and public confidence in the Bureau and its products. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Chairman of the House Committee on Government Reform, the Chairman of the Subcommittee on Civil Service, Census and Agency Organization, the Secretary of Commerce, and the Director of the Bureau of the Census. Copies will be made available to others on request. This report will also be available at no charge on GAO’s home page at http://www.gao.gov. Please contact me on (202) 512-6806 or by e-mail at daltonp@gao.gov if you have any questions. Other key contributors to this report were Robert Goldenkoff, Timothy Wexler, Elizabeth Powell, Chris Miller, James Whitcomb, Ty Mitchell, Robert Parker, and Michael Volpe.
The Bureau of the Census partnered with local governments, advocacy groups, and other organizations to help it enumerate people without conventional housing. Counting this population--which includes shelter residents and the homeless--has been a longstanding challenge for the Bureau. A number of organizations put substantial resources into an operation the Bureau called Service-Based Enumeration. In return, some expected the Bureau to provide data that would help them plan and deliver employment, health, and other services. However, the Bureau did not release the data as planned, which raised questions about the Bureau's decision-making on data quality issues. In response to a congressional request, GAO examined the Bureau's decision-making process behind its change in plans. The Bureau's original plan for releasing Service-Based Enumeration data was outlined in an April 1999 internal memorandum that called for the separate release of data on people counted at "emergency and transitional shelters." The Bureau planned to combine other components of Service-Based Enumeration, including people counted at soup kitchens, regularly scheduled mobile food vans, and certain outdoor locations, into a single category. Driving the Bureau's decision was its experience during the 1990 Census when it released separate counts of people found at shelters, on the street, and similar locations that proved to be incomplete. The Bureau also tried to ensure that the Service-Based Enumeration figures could not be used as a "homeless" count, because it was not designed to provide a specific count of the homeless. Instead, the operation was part of a larger effort to count people without conventional housing. In January 2001, the Bureau changed its earlier decision because a statistical procedure used to refine the emergency and transitional shelter data proved to be unreliable, which lowered the quality of the data. In response, the Bureau combined the shelter data with a category called "other non-institutional group quarters," a category that also includes data on people enumerated in several other group locations such as facilities for victims of natural disasters. In the fall of 2001, the Bureau produced a heavily qualified special report on the shelter data. A key cause of the Bureau's shifting position on reporting these data appears to be its lack of well documented, transparent, clearly defined, and consistently applied guidelines on the minimum quality necessary for releasing data. Had these guidelines been in place at the time of the census, the Bureau could have been better positioned to make an objective decision on releasing these figures. Additionally, the Bureau could have used the guidance to explain to data users the reasons for the decision, eliminating any appearance of censorship and arbitrariness. Because the Bureau did not always adequately communicate its plans for releasing the data, expectation gaps developed between the Bureau and entities that helped with Service-Based Enumeration.
The Federal Aviation Administration’s (FAA) primary mission is to ensure safe, orderly, and efficient air travel throughout the United States. FAA’s ability to fulfill this mission depends on the adequacy and reliability of the nation’s air traffic control (ATC) system, a vast network of computer hardware, software, and communications equipment. Sustained growth in air traffic and aging equipment have strained the current ATC system, limiting the efficiency of ATC operations. To combat these trends, in 1981 FAA embarked on an ambitious ATC modernization program. FAA estimates that it will spend about $34 billion on the program between 1982 and 2003. Our work over the years has chronicled many FAA failures in meeting ATC projects’ cost, schedule, and performance goals. As a result, we designated FAA’s ATC modernization as a high-risk information technology initiative in our 1995 report series on high-risk programs. Automated information processing and display, communication, navigation, surveillance, and weather resources permit air traffic controllers to view key information, such as aircraft location, aircraft flight plans, and prevailing weather conditions, and to communicate with pilots. These resources reside at, or are associated with, several ATC facilities—flight service stations, air traffic control towers, terminal radar approach control (TRACON) facilities, and air route traffic control centers (en route centers). These facilities’ ATC functions are described below.

About 90 flight service stations provide pre-flight and in-flight services, such as flight plan filing and weather report updates, primarily for general aviation aircraft.

Airport towers control aircraft on the ground and before landing and after take-off when they are within about 5 nautical miles of the airport, and up to 3,000 feet above the airport. Air traffic controllers rely on a combination of technology and visual surveillance to direct aircraft departures and approaches; maintain safe distances between aircraft; and communicate weather-related information, clearances, and other instructions to pilots and other personnel.

Approximately 180 TRACONs sequence and separate aircraft as they approach and leave busy airports, beginning about 5 nautical miles and ending about 50 nautical miles from the airport, and generally up to 10,000 feet above the ground, where en route centers’ control begins.

Twenty en route centers control planes over the continental United States in transit and during approaches to some airports. Each en route center handles a different region of airspace, passing control from one to another as respective borders are reached until the aircraft reaches TRACON airspace. En route center controlled airspace usually extends above 18,000 feet for commercial aircraft. En route centers also handle lower altitudes when dealing directly with a tower, or when agreed upon with a TRACON. Two en route centers—Oakland and New York—also control aircraft over the ocean. Controlling aircraft over oceans is radically different from controlling aircraft over land because radar surveillance only extends 175 to 225 miles offshore. Beyond the radars’ sight, controllers must rely on periodic radio communications through a third party—Aeronautical Radio Incorporated (ARINC), a private organization funded by the airlines and FAA to operate radio stations—to determine aircraft locations. See figure 1.1 for a visual summary of the processes for controlling aircraft over the continental United States and oceans.
The ATC system of the late 1970s was a blend of several generations of automated and manual equipment, much of it labor-intensive and obsolete. FAA recognized that it could increase ATC operating efficiency by increasing automation. Additionally, FAA forecasted increased future demand for air travel, brought on by airline deregulation of the late 1970s. It also anticipated that meeting the demand safely and efficiently would require improved and expanded services, additional facilities and equipment, improved workforce productivity, and the orderly replacement of aging equipment. Accordingly, in December 1981, FAA initiated its plan to modernize, automate, and consolidate the existing ATC system by the year 2000. This ambitious modernization program includes the acquisition of new radars and automated data processing, navigation, and communication equipment in addition to new facilities and support equipment. FAA estimates that the modernization will cost over $34 billion through the year 2003 and total over 200 separate projects. ATC information systems make up a large portion of this total, accounting for 169 projects costing $20.7 billion. The Congress will have provided FAA with approximately $14.7 billion of the $20.7 billion through fiscal year 1997. Over the past 15 years, FAA’s modernization projects have experienced substantial cost overruns, lengthy schedule delays, and significant performance shortfalls. To illustrate, the long-time centerpiece of this modernization program—the Advanced Automation System (AAS)—was restructured in 1994 after estimated costs tripled from $2.5 billion to $7.6 billion and delays in putting significantly less-than-promised system capabilities into operation were expected to run 8 years or more. Similarly, increases in per-unit costs for five other major ATC projects have ranged from 50 to 511 percent, and schedule delays have averaged almost 4 years. Our past work on the ATC modernization raised a number of concerns, including concerns about the reliability of projects’ cost estimates. For example, our review of FAA’s Oceanic Display and Planning System found that cost and schedule estimates were questionable because they were based solely on managers’ judgments and were not revised to reflect changing project demands and conditions. Also, our work on AAS highlighted that FAA underestimated the complexity of the system development and relied on its contractor’s deficient cost estimating systems. In its internal risk management guidance, FAA acknowledges that it has a history of unreliable project cost estimates, attributing some of its unenviable record in ATC projects’ cost growth to setting unrealistically low cost estimates, either because of poor cost estimating processes or inadequate system descriptions. Two major FAA organizations play key roles in the modernization and evolution of ATC systems—the Office of the Associate Administrator for Research and Acquisitions (ARA) and the Office of the Associate Administrator for Air Traffic Services (ATS). The first, ARA, manages the research, development, and acquisition of modernization projects. Within ARA, two groups are responsible for acquiring systems, while the others handle cross-cutting management functions (e.g., budget formulation, cost estimation, and program evaluation). Also, the William J. Hughes Technical Center is the ATC system test and evaluation facility and supports ATC systems’ research, engineering, and development. ARA employs an Integrated Product Development System (IPDS) approach.
A key component of IPDS is the use of Integrated Product Teams (IPT), which are cross-functional teams aligned with major business and functional areas (i.e., en route, terminal, weather and flight services, air traffic management, oceanic, communications, navigation, surveillance, infrastructure, and information systems). IPT members include systems and specialty engineers, logistics personnel, lawyers, contract specialists, and representatives from the organization responsible for the system’s operations and maintenance. IPTs are responsible for systems research, development, acquisition, and installation. Product teams within these IPTs are responsible for individual ATC system acquisitions or projects. For example, the en route IPT has product teams for the Display Channel Complex Rehost, the Display System Replacement, the Voice Switching and Control System, and several other en route systems (see Air Traffic Control: Status of FAA’s Modernization Program, GAO/RCED-94-167FS, Apr. 15, 1994). The second major organization involved with ATC systems is ATS. ATS is responsible for directing, coordinating, controlling, and ensuring the safe and efficient utilization of the national airspace system. Organizations within ATS are responsible for planning, operating, and maintaining ATC systems. Responsibility for managing ATC systems is transferred from the IPT to ATS once the systems have been installed and are operational. See figure 1.2 for a visual summary of the ATC modernization and maintenance management structure. During our review, we assessed six major modernization projects. These projects were:

the Voice Switching and Control System (VSCS), which provides air-to-ground voice communication services and ground-to-ground voice communication services between controllers, other ATC personnel, and others at the same and different en route centers and other ATC facilities;

Standard Terminal Automation Replacement System (STARS), which is to replace critical air traffic control computers with new traffic computers, displays, and software in TRACON facilities and towers;

Display System Replacement (DSR), which is to replace air traffic controllers’ existing display-related systems in each of the en route centers;

Airport Surveillance Radar-9 (ASR-9), which monitors aircraft movement and position within a radius of 60 miles of an airport terminal;

Wide Area Augmentation System (WAAS) for the Global Positioning System (GPS), which is to provide augmentations to the Department of Defense’s GPS in order to allow improved navigation on domestic and oceanic air routes; and

Display Channel Complex Rehost (DCCR), which is an interim replacement to the mainframe computer system that processes radar and other data into displayable images on controllers’ screens.

The FAA organization responsible for estimating costs on the projects we assessed varied depending on the project’s stage in its life cycle. According to FAA acquisition rules in place when the latest life cycle cost estimates for the projects we assessed were developed, FAA’s Investment Analysis and Operations Research organization (ASD-400), which is one of the groups within ARA that handles cross-cutting management functions, developed life cycle cost estimates early in the project life cycle—both when mission needs were evaluated and again when evaluating the costs, benefits, and feasibility of project alternatives. Once the decision was made to invest in a given alternative, the IPTs assumed responsibility for updating the cost estimates.
However, IPTs could also choose to develop their own cost estimates prior to the investment decision point, rather than have ASD-400 prepare them. Of the six projects we reviewed, four used ASD-400 at some point in the project’s life cycle. The other two did not. In addition, some project managers updated the acquisition phase portion of the life cycle cost estimates periodically between the times that the full life cycle cost estimates were revised. FAA’s organizational responsibilities for cost estimating have recently changed. In October 1995, the Congress instructed FAA to develop and implement a new acquisition management system, which would not be subject to various existing acquisition laws. On November 15, 1995, the President signed this bill into law. FAA began implementing this new acquisition management system in April 1996 with the issuance of broad policies, guiding principles, and internal procedures. While not yet fully implemented, these policies specify that two key decisions be made at the corporate level by the Joint Resources Council (JRC), a newly formed body composed of the associate administrators for operations and acquisitions as well as officials responsible for acquisitions, financial services, and legal counsel. These decisions are (1) whether mission needs warrant entry into investment analysis and (2) whether to invest in the project at the conclusion of investment analysis. FAA identified the latter as the most important decision in the life cycle acquisition management process. Accordingly, FAA plans to establish a “center of excellence” for investment analysis with experts in cost estimating, risk assessment, market analysis, and affordability analysis. Under this new scenario, investment analysis will be conducted as a joint enterprise by the FAA organization sponsoring the system (i.e., ATS) and the FAA organization responsible for acquiring it (i.e., ARA). The purpose is threefold: to ensure that (1) users buy into the solution, (2) acquisition specialists have a voice in the cost, schedule, and performance baselines they will have to live with, and (3) the investment analysis staff understands the concerns of the operations and acquisitions organizations. Under this approach, the sponsoring organization, with technical support from the investment analysis staff, develops and approves its requirements in the form of a Requirements Document. The investment analysis staff leads the effort to identify and analyze candidate solutions through market surveys, alternatives analysis, and affordability assessments, with support from the sponsoring organization and the ARA IPT responsible for acquiring the system. This effort culminates in the Investment Analysis Report, which is to contain comprehensive quantitative data for each alternative, such as life cycle cost, cost-benefit ratios, and risk. The IPTs use this information to generate cost and schedule baselines for each alternative in the form of an Acquisition Program Baseline. At the investment decision point, the JRC decides on an alternative; baselines the project’s requirements, costs, schedules, performance, and benefits; and commits the agency to full funding of the program. Thereafter, any changes to these baselines must be approved by the JRC. In fact, no funding may be committed or obligated that would exceed the program cost baseline until the increase is approved by the JRC and included in agency plans and budgets.
The success of FAA’s new investment analysis and decision-making approach depends on many factors, not the least of which is the reliability of ATC project cost information discussed in this report. Reliable cost estimates and monitoring of actual costs are essential to informed investment decision-making throughout the development and maintenance of capital items, such as ATC systems. In the case of FAA, they are cornerstones to its aforementioned investment analysis and decision-making processes. In 1994, we reported on how leading organizations improved mission performance through information technology. Among other things, we reported that successful organizations manage information system projects as investments, and continually assess the quality of projects’ estimated costs and carefully monitor projects’ actual costs against these estimates. Furthering this initiative, OMB’s 1995 guidance, Evaluating Information Technology Investments, calls for selecting information technology project investments on the basis of cost, benefit, risk, and return; controlling projects by comparing ongoing actual results being achieved with projected costs, benefits, and risks; and, finally, evaluating projects after they have been implemented to determine actual costs, benefits, risks, and returns, and modifying the selection and control processes based on lessons learned. This guidance is reflected in (1) the Clinger-Cohen Act of 1996, which requires the selection of information technology investments on the basis of competing projects’ estimated costs, benefits, and risks, and (2) the Chief Financial Officers (CFO) Act of 1990, which requires federal agencies to maintain integrated accounting and financial management systems that permit systematic and reliable measurement of projects’ cost and performance. Additionally, OMB Circular A-11, Part 3, requires agencies to request full up-front budget authority for all ongoing and new fixed assets (including information technology) in their fiscal year 1998 budget submission. This circular also requires a fixed asset plan and justification for major acquisitions, including, among other items, an analysis of full life cycle costs and an estimate of the risk and uncertainty in meeting project goals. The Software Engineering Institute’s (SEI) Capability Maturity Model (CMM), the standard used by government and industry to determine the maturity of an organization’s software development processes, also highlights the need for good estimates and good estimating processes. Three of the CMM’s key process areas for level 2 (repeatable) process maturity are project planning, project tracking, and subcontract management. These process areas must have reliable estimates for size, effort, schedule, and cost if they are to be performed successfully. The CMM further requires that the procedures that implement these key process areas be documented. To improve the state of practice for software cost and schedule estimating, SEI developed and published (1) criteria for establishing sound estimating processes and (2) a guide for managers to use in validating an individual project’s estimate. These documents, in effect, describe “best practices” used in industry and government for estimating software costs and schedules. However, SEI found that the “best practices” are equally applicable to hardware and integrated systems projects, and therefore allows for substituting the word “system” for “software” throughout its guides and checklists.
SEI also noted that while the criteria target the acquisition/development phase of a project’s life cycle, the concepts are also applicable to other phases of the life cycle. According to SEI’s Checklists and Criteria for Evaluating the Cost and Schedule Estimating Capabilities of Software Organizations, in order to have sound estimating processes, an organization should have six attributes, or requisites, institutionally embedded in its policies and procedures. These include (1) a corporate memory, or historical database(s), for cataloging cost estimates, revisions, reasons for revisions, actuals, and other contextual information, (2) structured processes for estimating software size and the amount and complexity of existing software that can be reused, (3) cost models calibrated/tuned to reflect demonstrated accomplishments on similar past projects, (4) audit trails that record and explain values used as cost model inputs, (5) processes for dealing with externally imposed cost or schedule constraints in order to ensure the integrity of the estimating process, and (6) data collection and feedback processes that foster capturing and correctly interpreting data from work performed. SEI provides detailed checklists for assessing an organization’s satisfaction of each requisite. These same six requisites are interwoven through seven questions SEI poses in A Manager’s Checklist for Validating Software Cost and Schedule Estimates. The seven questions are (1) Are the objectives of the estimate clear and correct? (2) Has the task been appropriately sized? (3) Are the estimated cost and schedule consistent with demonstrated accomplishments on past projects? (4) Have the factors that affect the estimate been identified and explained? (5) Have steps been taken to ensure the integrity of the estimating process? (6) Is the estimate based on reliable evidence of the organization’s past performance? and (7) Has the situation remained unchanged since the estimate was prepared? Once again, SEI provides detailed checklists for addressing these seven questions. These SEI publications are further discussed in chapter 2 and in appendixes I, II, and III. Requirements for agency cost accounting have been evolving for decades. In a 1985 report, the Comptroller General presented a framework for strengthening agencies’ financial management structure. This report called for the integration of accounting and budgeting systems to better monitor progress against estimates and to better estimate future program costs. More specifically, it states that actual costs must be maintained and monitored in order to effectively manage programs and control costs. This approach was embodied in the CFO Act of 1990, which requires agencies to develop and maintain integrated accounting and financial management systems which provide for (1) the development and reporting of cost information and (2) the systematic measurement of performance. The objectives of our review were to determine if (1) FAA’s project cost estimates are based on good estimating policies and practices and (2) the actual costs of ATC modernization projects are being properly accumulated. 
To determine if FAA’s estimates were based on good policies and practices, we researched current literature and interviewed project estimating experts to identify the key components of good cost estimating practices; obtained and analyzed FAA’s policies and practices for estimating costs to determine what criteria (directives, orders, instructions, and implementing procedures), if any, FAA has in place to guide managers in developing projects’ cost estimates; assessed FAA’s cost estimating policies, practices, tools, and techniques to determine if they incorporate the key components of good cost and schedule estimating practices advocated by SEI and other experts; and selected FAA’s five largest (based on latest life cycle cost estimates) ongoing ATC modernization projects and one project that was the subject of another GAO review, and interviewed project managers and assessed project documentation on these six projects to determine (1) how the current life cycle cost baseline was estimated and (2) how this estimating approach compared to the SEI project-level questions. To do this, we compared each project’s documentation to SEI’s detailed checklists for each question; determined if the project satisfied, partially satisfied, or did not satisfy each checklist item and assigned points accordingly (1, .5, or 0 points, respectively); and then summed the points and presented them as a portion of the total points available (e.g., 4/10). Because the focus of this effort was to assess FAA’s cost estimating processes and not to validate the accuracy or completeness of the estimates, we did not evaluate the quality of the estimates. To determine whether the actual costs of ATC modernization projects are being properly accumulated, we obtained and reviewed (1) selected reports and testimonies issued by GAO, the Department of Transportation’s Office of the Inspector General, and the Defense Contract Audit Agency, (2) related policies and procedures issued by the Department of Transportation, (3) applicable accounting standards and guidance, and (4) applicable OMB directives; reviewed FAA’s policies and procedures governing ATC financial management and interviewed program managers and financial accounting staff to determine (1) their roles and responsibilities for recording and managing ATC cost information and (2) the financial processes used to accumulate and record ATC costs; and reviewed available information for the five largest (based on life cycle cost estimates) ongoing ATC projects and determined if costs are properly accumulated by (1) obtaining available financial information on the projects and identifying the cost elements included and excluded and (2) assessing reconciliation procedures among varying sources of information. We requested comments on a draft of this product from the Secretary of Transportation. On December 10, 1996, we obtained oral comments from Transportation and FAA officials, including representatives from the Office of the Secretary of Transportation, the Executive Assistant to the FAA Chief Financial Officer, the FAA Manager of the Cost Accounting System Division, and the FAA Program Director for Investment Analysis and Operations Research. Their comments are presented and addressed in chapters 2 and 3 of this report. We performed our work at the Federal Aviation Administration in Washington, D.C., and the Software Engineering Institute in Pittsburgh, Pennsylvania, between February and December 1996.
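The scoring approach described above reduces to simple arithmetic. The following Python sketch illustrates it; the ratings shown are hypothetical and are not drawn from any of the six projects we assessed.

```python
from enum import Enum

class Rating(Enum):
    """Ratings assigned to each SEI checklist item, per the scoring rule above."""
    SATISFIED = 1.0
    PARTIALLY_SATISFIED = 0.5
    NOT_SATISFIED = 0.0

def score_checklist(ratings):
    """Sum item points and report them against the total points available."""
    earned = sum(r.value for r in ratings)
    return earned, len(ratings)

# Hypothetical ratings for one question's checklist (illustrative only).
ratings = [Rating.SATISFIED, Rating.PARTIALLY_SATISFIED,
           Rating.NOT_SATISFIED, Rating.SATISFIED]
earned, available = score_checklist(ratings)
print(f"{earned}/{available}")  # prints "2.5/4", analogous to the report's "4/10"
```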
Our work was performed in accordance with generally accepted government auditing standards. Reliable estimates of projects’ expected costs are essential to decide among alternative investments. According to SEI, consistently producing them requires defined institutional processes for deriving estimates, archiving them, and measuring actual performance against these estimates. FAA’s cost estimating processes used on its ATC modernization projects do not meet SEI criteria. These weaknesses are exacerbated by FAA’s practice of presenting cost estimates as precise, point estimates. By doing so, FAA obscures the estimates’ inherent uncertainty and may mislead decisionmakers. FAA’s institutional processes for estimating ATC projects’ costs do not fully satisfy any of the six SEI requisites. According to SEI, all six must be satisfied to produce credible estimates. The six requisites are described below, along with our analysis showing that FAA’s institutional policies and practices fail to meet them. (See appendixes I and II for more detail.) We shared this analysis with FAA’s cost estimating authority, who agreed that FAA’s policies and practices do not meet SEI’s process requirements. According to SEI, estimating organizations should have a process for organizing and retaining project cost and schedule information in a historical database, and for using it as an integral part of the estimating process. This database should contain detailed information on projects’ original estimates, revised estimates, reasons for revisions, and actual performance against the estimates. It should also contain descriptive contextual information that enables people to understand and correctly interpret the data in the database. FAA has no institutional corporate memory, or historical database, on ATC projects’ cost and schedule estimates and performance. While FAA has a number of stand-alone databases within different groups, none provide a complete picture of estimates, assumptions that make up the estimates, revisions, and actual performance on projects. For example, the Cost Benefit Analysis System (CBAS) is a database that contains some information on projects’ cost estimates and planned budget levels. However, it contains no information on how and why estimates are revised, why budget streams differ from estimates, or what projects actually cost. Further, what limited information is available on actual cost performance is not an integral part of the project estimating process, and the information on cost estimates that is retained by the central estimating organization is not readily available to the project personnel that are responsible for updating estimates after the initial investment decision has been made. According to SEI, estimating organizations should follow well-defined, structured processes for estimating product size and the amount and complexity of existing software that can be reused. It should be clear what is included and what is excluded from size estimates, and new estimates should be checked by comparing them to measured sizes of existing software products. FAA has no institutional process for estimating product size and the amount of existing software that can be reused. Each project manager decides to estimate software size and reuse as he or she chooses. Among ATC projects, these processes range from simple lines-of-code estimates using an individual’s personal knowledge of similar systems’ sizes to sophisticated analysis based on project-unique variables. 
For example, the original software size estimate for the Standard Terminal Automation Replacement System (STARS) was a rough approximation based on the number of lines of code in a predecessor system. A more recent, though not yet official, estimating effort established size estimates for each desired STARS software function and then compared the estimates to known sizes of similar functions on two other completed projects. This effort also used a checklist to ensure that no desired functions were overlooked, and it accounted for differences between STARS and its predecessor system. The latter, more careful size estimate is over three times the original estimate for new and modified software. According to SEI, estimating organizations should have documented processes for extrapolating from past experiences. These processes should include the use of cost models that are calibrated and validated on the basis of actual experience. Further, differences between cost models’ outputs should be analyzed and explained. FAA’s estimating guidance recommends 10 different cost models as acceptable estimating tools; however, there is no requirement for project estimators to use these models or to collect and use past experience to calibrate these models. As a result, projects’ use and calibration of the models are inconsistent. For example, estimators on three of the six projects we assessed did not use cost models in their estimates, whereas Display System Replacement (DSR) officials not only used four cost models but also calibrated them to past experiences on a predecessor system. According to SEI, estimating organizations should prepare adequate audit trails of inputs to the estimates, including parameters used in cost models and their rationales. FAA estimating guidance requires that cost estimates be documented and reproducible. However, the degree of documentation and the extent of any accompanying explanation are left to the discretion of each project’s estimators. As a result, the detail and quality of the audit trails on ATC projects are inconsistent. For example, only handwritten notes document how the Display Channel Complex Rehost (DCCR) estimate was derived. On the other hand, the DSR estimate and the ongoing STARS estimating effort are supported by volumes of documentation delineating the assumptions, processes, and model inputs used. According to SEI, organizations should ensure that the effects of dictated, or externally imposed, costs or schedules are determined and explicitly presented to management. Estimators should document and managers should approve any changes made to model parameters to accommodate dictated costs or schedules, the rationale for making the changes, and the effect of the changes on other factors—cost, schedule, or risk. FAA has no institutional process for ensuring integrity in dealing with dictated costs or schedules. As a result, each project manager determines his or her own response to externally imposed constraints. Only one of the projects we assessed acknowledged working under an externally imposed schedule. STARS project officials are preparing a cost estimate that shows the cost of meeting the compressed schedule. However, without an institutional policy requiring such action, there is no assurance that all dictated constraints on all ATC projects will be handled so effectively.
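The calibration practice described above, which DSR followed by tuning its models to actual experience on a predecessor system, can be illustrated with a simplified sketch. The figures below are hypothetical and do not represent FAA's models or data; the cost models FAA's guidance recommends involve many more parameters than this single productivity factor.

```python
def calibrate_hours_per_ksloc(predecessor_effort_hours, predecessor_size_ksloc):
    """Derive a demonstrated productivity factor from a completed
    predecessor project's actuals (the essence of calibration)."""
    return predecessor_effort_hours / predecessor_size_ksloc

def estimate_effort_hours(new_size_ksloc, hours_per_ksloc):
    """Extrapolate effort for the new project from the calibrated factor."""
    return new_size_ksloc * hours_per_ksloc

# Hypothetical figures; the report does not publish ISSS or DSR actuals.
factor = calibrate_hours_per_ksloc(predecessor_effort_hours=1_000_000,
                                   predecessor_size_ksloc=500)
print(estimate_effort_hours(new_size_ksloc=120, hours_per_ksloc=factor))
# 240000.0 hours, traceable to the predecessor actuals used for calibration
```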
According to SEI, organizations should have a defined process for gathering information on ongoing and completed projects (including original estimates, revised estimates, and post-mortem assessments) and entering this information into the historical database. FAA has no institutional process for gathering information on completed projects and entering it in a historical database. Post-mortem reviews are performed rarely, and only on an ad hoc basis. Instead, each project manager determines the type and amount of information retained. Because FAA does not have well-defined, institutional processes for estimating information technology projects’ costs, the approaches used and the reliability of the estimates are inconsistent. To assist management in assessing the credibility of a given project cost estimate, SEI developed seven questions which must be answered. The seven questions are (1) Are the objectives of the estimate clear and correct? (2) Has the task been appropriately sized? (3) Are the estimated cost and schedule consistent with demonstrated accomplishments on other projects? (4) Have the factors that affect the estimate been identified and explained? (5) Have steps been taken to ensure the integrity of the estimating process? (6) Is the estimate based on reliable evidence of the organization’s past performance? and (7) Has the situation remained unchanged since the estimate was prepared? SEI developed a detailed checklist to assist in addressing each question. (See appendix III for more information on SEI’s questions and checklists.) We applied this checklist to the most recently baselined life cycle cost estimate on six ATC projects (the five ongoing projects with the largest life cycle cost estimates plus one ongoing project that was the subject of another GAO review). These projects are the Voice Switching and Control System (VSCS), the Standard Terminal Automation Replacement System (STARS), the Display System Replacement (DSR), the Airport Surveillance Radar-9 (ASR-9), the Wide Area Augmentation System for the Global Positioning System (WAAS), and the Display Channel Complex Rehost (DCCR). Of the six systems, two (ASR-9 and WAAS) had estimates that were too poorly documented to permit comparative analysis. The ASR-9 life cycle cost estimate totals $857 million; however, there is no documentation describing how ASR-9 software size estimates were determined or what assumptions were used to estimate costs from these size estimates. As a result, there is no analytical way to determine the credibility of the original estimate or estimating approach. Moreover, without documentation, FAA cannot systematically use its ASR-9 estimating experience to inform future estimates. In addition, project estimators derived official WAAS life cycle cost estimates based on experiences with the National Satellite Test Bed, a precursor to the WAAS development. However, little documentation exists on how the original WAAS size estimates were derived and what assumptions and parameters were used to estimate costs. Project officials are currently updating estimates using a published sizing methodology and the Software Life Cycle Intermediate Model (SLIM) cost model, and stated that they believe this is a much more rigorous approach than prior estimating efforts. However, this more structured approach also falls short of SEI requirements. For example, the project software estimator stated that one cost model parameter (input variable) is a ranking of the sophistication of the developer’s software environment. The estimator scored the WAAS contractor as very high on this parameter and showed us where this information was captured in the SLIM database.
However, the estimator went on to explain that he chose a high score based on the contractor’s rating as a level 3 organization on SEI’s CMM and the fact that it has and uses Computer-Aided Software Engineering (CASE) tools. This explanatory information was not recorded in the SLIM database or in any other documentation, and thus is not available to anyone trying to understand or validate the estimate or learn from this estimating experience. The current official estimate for the last of the six systems, DSR, was derived using an approach that partially or fully satisfied six of the seven SEI reliability questions. Examples of DSR’s adherence to SEI guidance include (1) DSR’s software size estimate was developed by identifying needed software functions, and then determining the amount of new code needed and reusable code available for each of these software functions, (2) DSR estimators calibrated cost estimating models to past experience on DSR’s predecessor system, the Initial Sector Suite System (ISSS), and (3) estimators used templates to ensure that key cost factors would not be overlooked. However, DSR estimators did not record the rationales for the parameters they used in their cost models or explain differences among cost model results. (See appendix IV for further information on the results of our assessment.) Software and systems development experts agree that early project estimates are by definition imprecise, and that this inherent imprecision decreases during the project’s life cycle as more information becomes known about the system. Some have described this phenomenon as a “cone of uncertainty” that is widest early in the life cycle and narrows over time as more becomes defined and known about the project. (See figure 2.1.) These experts emphasize that each cost estimate should include an indication of its degree of uncertainty, possibly as an estimated range or qualified by some factor of confidence. For example, a cost estimate of $1 million could be presented as a range from $750,000 to $1.25 million or as $1 million with a confidence level of 90 percent, indicating that there is a 10-percent chance that costs will exceed this estimate. FAA does not reveal its estimates’ degree of uncertainty to managers involved in investment decisions. Instead, FAA presents its projects’ cost estimates as unqualified point estimates, thereby suggesting an element of precision that does not exist. A budget official stated that FAA presents project cost estimates as such because “this is the way OMB and Congress want to see it.” Further, he stated that in today’s environment of lean budgets, the low end of the estimate range is all that FAA can afford and all that is salable, and therefore, this is what FAA presents. By presenting a point estimate instead of a range of estimates or a realistically qualified estimate, FAA is not fully disclosing all relevant information about the projects’ potential costs and inherent risk. The result is uninformed, and thus potentially unwise, investment decisions. During the course of our review, FAA organizations initiated several efforts to improve their processes for estimating and archiving cost information. However, these efforts are still relatively new and have not yet been institutionalized. We did not evaluate any of these efforts.
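One way to produce the kind of confidence-qualified estimate the experts describe is to treat the low, likely, and high costs as a distribution and report the value that has a given probability of not being exceeded. The sketch below uses a triangular distribution around the report's $1 million example; the spread and the distribution choice are assumptions made for illustration, not FAA practice.

```python
import random

def confidence_estimate(low, likely, high, confidence=0.90, trials=100_000):
    """Draw costs from a triangular distribution and return the figure that
    has the requested probability of not being exceeded."""
    draws = sorted(random.triangular(low, high, likely) for _ in range(trials))
    return draws[int(confidence * trials) - 1]

# Spread taken from the report's $1 million example; the triangular
# distribution itself is an assumption made for illustration.
print(round(confidence_estimate(750_000, 1_000_000, 1_250_000)))
# Roughly $1.14 million: the value with about a 10-percent chance of being exceeded
```

Presenting such a figure alongside the point estimate, or presenting the full range, would disclose the uncertainty that an unqualified point estimate hides.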
FAA’s Office of Investment Analysis and Operations Research (ASD-400) has begun developing a historical database of cost estimating information using the Automated Cost Estimating Integrated Tools (ACEIT). The same organization is also testing the Cost and Performance Management System (CPMS) to track operations and maintenance (O&M) costs by project for the first time. An airways facilities manager stated that CPMS will eventually be integrated with ACEIT to allow estimators to use actual project cost information in estimating new projects’ O&M costs. Multimillion-dollar, and even billion-dollar, investment decisions on air traffic control modernization projects are being made without reliable information on the projects’ estimated and actual costs. FAA does not have well-defined, structured estimating processes that are rigorously followed, and does not disclose the estimates’ inherent uncertainty. Without better estimates of cost, FAA’s new investment analysis and decision-making processes are unlikely to be effective. We recommend that the Secretary of Transportation direct the FAA Administrator to institutionalize defined processes for estimating ATC projects’ costs. At a minimum, these processes should include the following SEI requisites, each of which is described in more detail in this report: a corporate memory (or historical database), which includes cost and schedule estimates, revisions, reasons for revisions, actuals, and relevant contextual information; structured approaches for estimating software size and the amount and complexity of existing software that can be reused; cost models calibrated/tuned to reflect demonstrated accomplishments on similar past projects; audit trails that record and explain all values used as cost model inputs; processes for dealing with externally imposed cost or schedule constraints in order to ensure the integrity of the estimating process; and data collection and feedback processes that foster capturing and correctly interpreting data from work performed. We also recommend that the Secretary direct the Administrator to immediately begin disclosing the inherent uncertainty and range of imprecision in all ATC projects’ official cost estimates presented to executive oversight agencies or the Congress. Additionally, we recommend that the Secretary direct the Administrator to report to the Secretary and FAA’s authorizing and appropriations committees on progress being made on these recommendations as part of the agency’s fiscal year 1999 budget submission. DOT and FAA officials provided oral comments on a draft of this report. These officials concurred with the report’s findings, conclusions, and recommendations on cost estimating. They also stated that this report will be useful as FAA strives to improve its cost estimating capabilities. Agencies are required to maintain adequate systems of accounting and internal controls to provide managers and other decisionmakers with reliable financial information to effectively measure performance and make sound investment decisions. In the case of the ATC modernization program, FAA is not satisfying this requirement. Specifically, ATC project information does not include all relevant project costs, including internal personnel compensation, benefits, and travel (PCB&T) costs, because FAA lacks a cost accounting system to accumulate and allocate these costs to specific projects. FAA’s internal accounting policies include a requirement for a cost accounting system; however, this policy has not been implemented. As a result, project managers are unable to measure actual costs, and their ability to make informed decisions is impaired.
Moreover, complete project information is not available to feed back into, and thereby improve, future project cost estimates. The Federal Managers’ Financial Integrity Act of 1982 (FMFIA) requires that agency systems of internal accounting and administrative control comply with internal control standards prescribed by the Comptroller General and provide reasonable assurance that, among other things, obligations and costs comply with applicable law and revenues and expenditures applicable to agency operations are recorded and accounted for properly. FMFIA also requires that agency heads issue an annual report, transmitted to the President and the Congress, detailing whether their internal control systems fully comply with the act’s requirements, including the identification of material systems weaknesses and plans for corrective actions. The Chief Financial Officers (CFO) Act of 1990 requires agencies to develop and maintain integrated agency accounting and financial management systems that comply with applicable accounting principles, standards, and requirements, including the preparation of complete, reliable, consistent, uniform, and timely information that is responsive to agency management’s financial information needs; the development and reporting of cost information; the integration of accounting and budgeting information; and the systematic measurement of performance. Recently, Statement of Federal Financial Accounting Standards No. 4 (SFFAS No. 4), Managerial Cost Accounting Concepts and Standards for the Federal Government, was issued, effective for fiscal periods beginning after September 30, 1996. These standards require a reporting entity to accumulate and report the full cost of its activities regularly for management information purposes. The full cost of a project is described as the sum of (1) the costs of resources consumed by the project that directly or indirectly contribute to the output and (2) the costs of identifiable supporting services provided by other organizations within the reporting entity and by other reporting entities. These standards also require that the full costs of resources be assigned to outputs through costing methodologies or cost finding techniques that are most appropriate to the organization’s operating environment and that they be followed consistently. While SFFAS No. 4 has only been effective for a short time, and therefore was not applicable during the period of our review, it provides cost accounting criteria which are now required to be implemented by all agencies. Additionally, the 104th Congress passed the Federal Financial Management Improvement Act of 1996 which, among other provisions, requires agencies to comply with federal accounting standards. FAA financial systems supporting the ATC modernization program do not accumulate all project costs, and thus, managers do not receive all relevant financial information needed to effectively manage their projects. Of the five projects whose financial information we reviewed, none of the project managers could provide the total of all costs incurred from the project’s inception. Instead, they provided (1) contract numbers for their respective projects so that cost data for each contract could be extracted from FAA’s Departmental Accounting and Financial Information System (DAFIS) and aggregated to provide total contract costs (even then, these contract costs could be understated because project officials could not verify that they provided us all applicable contract numbers) and (2) incomplete project costs.
These costs were not complete because they did not include (1) Personnel Compensation, Benefits, and Travel (PCB&T) costs associated with the Facilities and Equipment (F&E) appropriations account and (2) all costs paid out of the Operations and Maintenance (O&M) appropriations account. PCB&T costs for the F&E appropriation include internal FAA costs that are related to project design, contracting, and contractor oversight. O&M costs are costs associated with the administration, operation, repair, and maintenance of operating FAA facilities and are generally the single largest component of an information system’s life cycle cost. In 1995, total PCB&T costs were approximately $2 billion for all ATC projects. PCB&T and O&M costs are accounted for separately and are not allocated to individual ATC projects. Because these costs are not allocated to specific projects, full life cycle costs of projects cannot be determined and may be significantly understated. An additional limitation is that FAA does not carry over and report the costs associated with terminated or redirected projects as part of the successor projects’ costs, even though successor projects reuse parts of the predecessors’ components. As a result, the full costs of “restructured” ATC projects are understated. Accounting for the full costs of projects requires that the costs related to usable portions of terminated or redirected projects be included in the costs of the ongoing projects. In addition, full project cost accounting information would require that costs of unused parts of terminated or redirected projects be separately identifiable within the “corporate memory.” For example, one of the projects we reviewed, the Display System Replacement (DSR), is a follow-on to an earlier terminated project known as the Initial Sector Suite System (ISSS). According to FAA officials, DSR software and hardware salvaged from ISSS accounts for about 19 percent of DSR’s cost. However, these costs are not included in DSR’s accumulated and reported costs because, according to an FAA official, these costs are considered “sunk costs.” The term “sunk costs” is generally used to describe costs that have been incurred in the past and have no relevance to future decision-making. However, we believe these costs should be considered a part of a project’s full cost since they would be instructive in reliably estimating costs of similar systems. In addition, information about the amount of costs associated with the unused portions of terminated projects should be retained in the “corporate memory” to provide a full picture of the real cost of development projects. A managerial cost accounting system supports the collection, measurement, accumulation, analysis, interpretation, and communication of cost information to allow users to determine the cost of specific programs and activities and the composition of, and changes in, these costs. As mentioned above, the CFO Act requires agencies to develop and maintain a cost accounting capability that captures both budgetary and financial accounting data and generates performance measures. FAA’s internal policies require a cost accounting system and state that the cost accounting system should be integrated with the general accounting system. However, these policies have not been implemented; thus, FAA project managers do not have the capability to fully account for costs being incurred for the ATC modernization program.
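Under the full-cost principle described above, a project's reported cost would combine direct contract costs, allocated internal costs such as PCB&T and O&M, and the cost of predecessor components the project reuses. The sketch below illustrates the arithmetic with hypothetical amounts; the report provides only DSR's roughly 19-percent reuse figure, not the underlying dollar values.

```python
def full_project_cost(contract_costs, allocated_pcbt, allocated_om,
                      reused_predecessor_costs):
    """Accumulate a project's full cost: direct contract costs, allocated
    internal costs (PCB&T, O&M), and the cost of predecessor components
    the project reuses."""
    return (contract_costs + allocated_pcbt + allocated_om
            + reused_predecessor_costs)

# All amounts are hypothetical; the report gives only DSR's roughly
# 19-percent reuse of ISSS components, not the underlying dollar figures.
total = full_project_cost(contract_costs=800e6, allocated_pcbt=60e6,
                          allocated_om=140e6, reused_predecessor_costs=190e6)
print(f"${total / 1e6:,.0f} million")  # "$1,190 million"
```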
Instead of a cost accounting system, several financial management systems account for specific financial accounting and budgetary data, but these systems are not integrated and they do not provide the full cost information necessary for investment decisions. These systems include the following.

Departmental Accounting and Financial Information System (DAFIS): This system is the Department of Transportation’s core accounting system. However, it is not a cost accounting system because not all cost information is captured by project. In addition, the Department’s Office of the Inspector General (OIG) reported that DAFIS data are unreliable and inaccurate. For example, the OIG reported that major balance discrepancies existed between DAFIS accounts and their supporting details. Further, in April 1996, a cost accounting systems consultant reported that DAFIS does not provide all levels of management with timely, accurate, relevant, and meaningful information.

Financial Management System (FMS): This FAA system is used by ATC project managers to establish quarterly obligation plans under the Facilities and Equipment (F&E) appropriation and to track actual obligations against these plans. However, FMS is not a cost accounting system because it (1) contains only obligation data and (2) does not contain all relevant cost data, such as those for PCB&T.

Cost of Performance System (COPS): This system is used by FAA organizations responsible for operating and maintaining ATC systems to allocate aggregated O&M obligation data to individual cost centers (that is, field maintenance organizations). COPS is not a cost accounting system because it does not contain information on actual costs. In addition, COPS’ O&M cost data are not allocated to and accumulated for individual projects.

Research, Engineering and Development, Monitoring, Analysis, and Control System (REDMACS): This system is used by FAA organizations responsible for ATC research and development projects to establish quarterly obligation plans for the Research, Engineering, and Development (RE&D) appropriation and to track actual obligations against these plans. REDMACS is not a cost accounting system because it does not contain information on actual costs.

Project managers have also developed their own unique systems to account for F&E and RE&D obligations. These “cuff” systems range from spreadsheets to more sophisticated financial management systems, but they do not include O&M costs and generally do not capture actual costs. None of the systems listed above, either individually or combined, constitutes a cost accounting system because they do not provide for accumulation and monitoring of total costs. In recognition of the need for accurate and reliable cost information, FAA’s Associate Administrator for Administration established a new Cost Accounting Systems Division in August 1996 and engaged a consultant to assist the agency in defining its cost accounting requirements and in designing and implementing a system to meet these requirements. According to the Manager of the Cost Accounting System Division, this system will use data from several financial management systems, including DAFIS, and is planned to be in place by October 1, 1997. Reliable project cost information that is both complete and accurate is needed to estimate future project costs, make sound investment decisions, and effectively manage projects.
FAA does not have a cost accounting system capable of reliably accumulating full project cost information, and therefore cannot reliably estimate future project costs, ensure that investment decisions are sound, or manage projects effectively. Without better cost information, FAA’s new investment analysis and decision-making processes are unlikely to be effective. In light of FAA’s weaknesses in accounting for and reporting ATC project costs, we recommend that the Secretary of Transportation direct the FAA Administrator to acquire or develop and implement a managerial cost accounting capability that will satisfy the requirements of SFFAS No. 4. We further recommend that the Secretary report FAA’s lack of a cost accounting capability for its ATC modernization as a material internal control weakness in the Department’s fiscal year 1996 FMFIA report and in subsequent annual FMFIA reports until the problem is corrected. Also, we recommend that the Secretary direct the Administrator to report to the Secretary and FAA’s authorizing and appropriations committees on progress being made on these recommendations as part of the agency’s fiscal year 1999 budget submission. In providing oral comments on a draft of this report, DOT and FAA officials stated that since they are in the process of acquiring a cost accounting system and plan to have an “initial operating capability” by October 1, 1997, they do not agree with our recommendations and consider them unnecessary. While we acknowledge and support FAA’s cost accounting organizational and system initiatives, it is important to note that its cost accounting system acquisition is still very early in its acquisition life cycle and much remains to be accomplished before FAA can have the cost accounting capability we recommend. In fact, FAA has yet to develop detailed functional requirements for this system, thereby precluding our analysis at this time of whether its plans will satisfy our recommendation. Additionally, until FAA implements our recommendation and improves the accuracy of underlying data in feeder systems like DAFIS, it will continue to lack adequate cost information needed to effectively manage its ATC system acquisitions. Disclosure of such a management control weakness is one of the objectives of the FMFIA, and therefore we continue to believe that FAA should report its lack of a cost accounting system as a material weakness in its FMFIA reports until the problem is corrected.
GAO reviewed the reliability of the cost information critical to capital investment decisionmaking on air traffic control (ATC) projects, focusing on the Federal Aviation Administration's (FAA) processes for estimating what projects will cost and the related accounting for actual project costs. GAO found that: (1) FAA's ATC modernization program's cost estimating processes do not satisfy recognized estimating requisites, and its cost accounting practices do not provide for proper accumulation of actual project costs; (2) the result is an absence of reliable project cost and financial information that the Congress has legislatively specified and that leading public-sector and private-sector organizations point to as essential to making fully informed investment decisions among competing ATC projects; (3) without this information, the likelihood of poor ATC investment decisions is increased, not only when a project is initiated but also throughout its life cycle; (4) with respect to cost estimating, FAA fails to meet five of the six process requisites that the Software Engineering Institute (SEI) says should be institutionally entrenched and consistently used for information technology projects; (5) in the absence of such institutional policies to guide ATC project cost estimating, FAA has adopted a cost estimating process that allows each ATC project to approach cost estimating in whatever manner its estimators choose; (6) the result is inconsistency in the rigor and discipline with which ATC project cost estimates are derived, which in turn means estimates of varying degrees of reliability; (7) when comparing the approaches that six ATC projects used to derive their current official life cycle cost estimates to SEI's project-specific criteria, GAO found that two were too poorly documented to permit any comparative analysis, while none of the remaining four satisfied all of the criteria SEI associates with highly credible estimates; (8) compounding these estimating process weaknesses is FAA's practice of presenting cost estimates as precise, point estimates; (9) by doing so, FAA fails to disclose the estimates' inherent uncertainty and risks, thus further limiting the estimates' decisionmaking value and credibility; (10) with respect to cost accounting, FAA is not accumulating all ATC project costs, and FAA does not have a cost accounting system for capturing and reporting the full cost of its ATC projects; (11) instead, FAA decisionmakers use accounting and financial management systems that omit relevant project costs, such as those associated with FAA project management; and (12) the result is that FAA cannot reliably measure the ATC projects' actual cost performance against established baselines, and cannot reliably use information relating to actual cost experiences to improve future cost estimating efforts.
After the terrorist attacks of September 11, 2001, the President signed the Aviation and Transportation Security Act (ATSA) into law on November 19, 2001, with the primary goal of strengthening the security of the nation’s civil aviation system. ATSA created TSA as the agency with responsibility for securing all modes of transportation, including civil aviation. As part of this responsibility, TSA performs or oversees the performance of security operations at the nation’s nearly 440 commercial (i.e., TSA-regulated) airports, including passenger and checked baggage screening operations. Federal Security Directors (FSDs) are TSA officials responsible for overseeing TSA security activities, including passenger and checked baggage screening, at one or more commercial airports. TSA classifies commercial airports in the United States into one of five security risk categories (X, I, II, III, and IV) based on various factors, such as the total number of takeoffs and landings annually, and other special security considerations. In general, category X airports have the largest number of passenger boardings and category IV airports have the smallest. TSA periodically reviews airports in each category and, if appropriate, updates airport categorizations to reflect current operations. Figure 1 shows the number of commercial airports by airport security category as of July 2015. TSA uses a multilayered security strategy aimed at enhancing aviation security. Within those layers of security, TSA’s airport passenger checkpoint screening system includes, among other things, (1) screening personnel (i.e., TSOs); (2) SOPs that guide screening processes conducted by TSOs; and (3) technology, such as advanced imaging technology systems (often referred to as body scanners) or walk-through metal detectors, used to conduct screening of passengers. To carry out passenger and checked baggage screening operations, TSA employs TSOs at the vast majority of the nation’s commercial airports. There are several levels of screening officers deployed at the passenger checkpoint:

Transportation Security Officer (TSO): Performs the majority of security functions to screen people and property to mitigate threats. Screening may include pat downs, search of property, and operating technology, including walk-through metal detectors, X-ray machines, and explosives detection equipment, among other things.

Lead Transportation Security Officer (LTSO): Leads a staff of TSOs, including distributing and adjusting workload and tasks among employees, and oversees the security screening team on a daily basis. Implements security procedures and provides coaching and guidance to TSOs in performing screening duties, among other things. LTSOs also perform screening functions along with added responsibilities, such as resolving alarms and supervising screening locations when a supervisor is not available.

Supervisory Transportation Security Officer (STSO): Oversees screening checkpoints and/or baggage screening, supervises LTSOs and TSOs in performance of security screening, and ensures all required screening is performed in accordance with SOPs. Reviews and evaluates work and performance of LTSOs and TSOs, approves leave, and recommends corrective or disciplinary actions, among other things. STSOs also perform screening functions and resolve passenger alarms.

Transportation Security Manager (TSM): Coordinates and facilitates TSA security activities and manages one or more programs as assigned by the Federal Security Director.
A TSM assigned to oversee screening checkpoints manages security activities, including recognizing and correcting improper use or application of equipment or screening procedures, monitors screening operations, and implements changes to enhance security and efficiency at screening locations. TSOs inspect individuals and property as part of the passenger screening process to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other prohibited items on board an aircraft or into the airport sterile area—in general, an area of an airport that provides passengers access to boarding aircraft and to which access is controlled through the screening of persons and property. Ordinarily, screening of accessible property at the screening checkpoint begins when an individual places it on the X-ray conveyor belt or hands it to TSA personnel. As shown in figure 2, TSOs then review images of the property running through the X-ray machine and look for signs of prohibited items. The passengers themselves are typically screened via a walk-through metal detector or an advanced imaging technology machine, and passengers generally have the option to request screening by a pat down if they do not wish to be screened by these technologies. Passengers will also be subject to a pat down if they are screened by the walk-through metal detector or advanced imaging technology system and the equipment alarms (in order to resolve the alarm). TSOs also inspect checked baggage to deter, detect, and prevent the carriage of any unauthorized explosive, incendiary, or weapon onboard an aircraft. Figure 3 shows the general process used to screen checked bags. Checked baggage screening is accomplished through the use of explosives detection systems or explosives trace detection systems, and through the use of alternative means, such as manual searches and canine teams when the explosives detection systems are unavailable. In accordance with ATSA, screeners must complete a minimum of 40 hours of classroom instruction and 60 hours of on-the-job training and must successfully complete an on-the-job training examination before they are certified as security screeners. Screeners can be certified to conduct passenger screening or checked baggage screening, or they may be certified as dual function and can then conduct passenger and checked baggage screening. ATSA also requires that TSA provide operational testing of screening personnel, and any individual who fails an operational test must successfully complete remedial training on that specified security function before returning to duty. In addition, screeners must undergo an annual proficiency review to ensure they continue to meet all qualifications and standards required to perform a screening function. TSA also requires remedial training for TSOs who fail an annual proficiency review. Covert tests recently conducted by the DHS-OIG highlighted areas of concern for TSA regarding the effectiveness of the passenger screening process. Specifically, the DHS-OIG conducted covert testing to determine the effectiveness of TSA’s Advanced Imaging Technology screening equipment, its related automated target recognition software, and checkpoint screener performance in identifying and resolving potential security threats at airport checkpoints.
TSA has responded to the DHS Secretary’s direction regarding the results of the DHS-OIG covert testing, in part, by updating its screening SOPs and retraining TSOs to address the Inspector General’s findings. Also in response to the DHS-OIG findings, TSA has developed new measures of effectiveness that it expects will better emphasize the agency’s goals for improving security effectiveness by focusing the measures on both the screening system and workforce in the areas of readiness and performance. For example, improved workforce measures, now being reported monthly, include those to track TSOs’ progress against training requirements, absences due to injuries or other reasons, and whether they are meeting performance thresholds on various tests of performance and job proficiency. TSO training comprises a compendium of courses that includes basic training for initial hires, recurrent training, remedial training, and return-to-duty training. For example, all new hires receive a combination of classroom, hands-on, and web-based training. After TSOs finish their initial new hire training, they receive recurrent and specialized training courses throughout the year that are provided either via classroom instruction or through the TSA Online Learning Center. Recurrent training typically focuses on core screening skills and policies such as X-ray image interpretation, detection techniques, and screening SOPs. TSOs receive remedial training when they have failed an operational or certification test, or if a supervisor identifies a need for further training, among other things. Further, according to TSA, TSOs who are absent from their screening duties for a period of time must undergo some level of “return-to-duty” training based on the amount of time they were absent. For example, TSOs certified in a screening function but who have not performed that function for a period of 15 consecutive days or more are required to complete a return-to-duty training program before being allowed to perform that function independently. Table 1 describes the various types of training TSOs receive. The Office of Training and Development (OTD), within TSA headquarters, oversees the development, delivery, and evaluation of training programs for TSA employees. The National Training Plan (NTP), developed jointly by OTD and the Office of Security Operations, contains the core curriculum for TSOs to meet their annual training, including the classes and hours TSOs are required to complete for the year. TSA headquarters officials implement the NTP to provide ongoing training throughout the year aimed at continually improving screeners’ knowledge, skills, and abilities. However, the responsibility for managing the individual training of TSOs is largely decentralized, and it primarily falls on Security Training Instructors at individual airports to train TSOs on parts of the NTP by certain dates throughout the year. Managers in the field track the percentage of the NTP curriculum that TSOs have completed on a monthly basis using the Online Learning Center database. In addition, TSA officials at all 10 airports we contacted stated that they monitor various testing results for their TSOs and observe screening operations at their airports’ checkpoints to determine any local, specialized training needs their screening force may have, over and above the training included in the NTP issued by TSA headquarters.
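Two of the rules described above are mechanical enough to illustrate in code: the 15-consecutive-day return-to-duty threshold and the monthly tracking of NTP completion. The sketch below is a hypothetical rendering of those rules, not TSA's actual Online Learning Center logic.

```python
from datetime import date, timedelta

RETURN_TO_DUTY_THRESHOLD = timedelta(days=15)  # the 15-consecutive-day rule

def needs_return_to_duty(last_performed, today):
    """Flag a TSO who has not performed a certified screening function
    for 15 consecutive days or more."""
    return today - last_performed >= RETURN_TO_DUTY_THRESHOLD

def ntp_completion_percent(completed_hours, required_hours):
    """Share of the National Training Plan curriculum completed, the
    figure field managers track monthly."""
    return 100 * completed_hours / required_hours

print(needs_return_to_duty(date(2016, 1, 4), date(2016, 1, 20)))  # True (16 days)
print(f"{ntp_completion_percent(30, 40):.0f}%")                   # "75%"
```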
TSA headquarters can also add training requirements throughout the year as needed, such as the recently completed "Mission Essentials—Threat Mitigation" training discussed later. TSA officials we spoke with at airports noted challenges associated with completing not only the required training under the NTP, but also the training associated with frequent changes to the screening SOPs that govern how particular screening practices are to be conducted at the checkpoint. For example, TSA personnel at 8 of the 10 airports stated that it was sometimes difficult to meet the training requirements in the NTP because they did not have the TSO personnel to both staff the checkpoints and complete all of the required training. Specifically, TSA officials at larger airports with more passenger throughput, such as category X and category I airports, reported having ongoing challenges balancing training with the operational needs at the checkpoint. In contrast, TSA officials at smaller airports, such as category III and IV airports, did not report having this challenge frequently. TSA officials at all three of the category X airports stated they addressed the challenge of meeting training requirements by scheduling large amounts of training during slower travel seasons for their airport so they would not have to spend time training TSOs during peak travel periods. In fiscal year 2014, TSA headquarters began sending the majority of the NTP training requirements for the entire fiscal year out to the field at the beginning of the year—rather than at quarterly intervals throughout the year—allowing airports the flexibility to train TSOs at different rates depending on the operational needs of the airport. TSA training officials at 6 of the 10 airports stated that it is challenging for them to keep TSOs trained on the frequent changes to screening SOPs. For example, TSA officials from two airports stated that during the delivery of a recent New Hire Training Program (NHTP) class, screening SOPs were updated to require an officer to use a handheld metal detector to resolve an alarm arising from a passenger going through an advanced imaging technology scanner. Because the change occurred while the new officers were in the middle of their introductory training, the steps for using the handheld metal detector were not integrated into the NHTP curriculum. As a consequence, after the NHTP course was completed, TSA instructors separately trained the new hires on how to conduct this type of alarm resolution. In addition, TSA officials at 9 airports we spoke with stated that the TSOs used "read and sign" binders to train on some SOP changes, in which the officers sign a document stating they have read the change to the screening SOPs. However, officers reported that this type of training did not ensure they understood how to implement the change at the screening checkpoint. According to TSA headquarters officials, they plan to conduct more hands-on training to teach screening SOP changes moving forward. Further, TSA personnel at 7 of the 10 airports added that many of the screening SOPs can leave room for interpretation, which prompted officials at 2 of these airports to create new airport-level training to address whether to let particular items through the checkpoint, such as bowling balls and other heavy, blunt objects.
TSA implemented a program in fiscal year 2015 to retrain its screening workforce in response to findings of the DHS-OIG, which conducted its own covert testing of TSA's checkpoint operations and technology in the spring of 2015. Specifically, in response to the DHS-OIG findings, TSA provided additional training nationwide to all TSOs—referred to as "Mission Essentials—Threat Mitigation" training. According to TSA documentation, the purpose of this 8-hour classroom training was to provide the opportunity for the workforce to become familiar with the intelligence and threat information that underlies TSA's use of checkpoint technologies, operational procedures, and the TSO workforce to mitigate threats. TSA officials described the training as covering the "why" behind the equipment and procedures TSA uses to screen passengers and baggage. For example, the training included instruction on how social engineering techniques may be used in an attempt to defeat TSA risk mitigation procedures; updates on SOP changes for screening certain types of passengers; demonstrations on improvised explosive devices (IED) and how pat downs are used to mitigate the threat; and an overview of checkpoint equipment capabilities and limitations and the role of screening SOPs and best practices in mitigating gaps caused by equipment limitations. In addition to the 8-hour course provided for screening officers, supervisors were provided additional training on their responsibilities for ensuring the correct implementation of the checkpoint SOPs and how to provide on-the-spot corrections and constructive feedback to officers. TSA officials added that, in order to ensure enhanced mission focus, the agency will begin sending all new-hire TSOs to the TSA Academy at the Federal Law Enforcement Training Center in Glynco, Georgia, rather than conducting the classroom portion of the NHTP at individual airports. The officials stated this would help standardize the new hire training and give the new hires a sense that they are part of something larger than just their local airport. TSA officials stated the first new-hire classes started at the TSA Academy in January 2016. To evaluate its training of TSOs, TSA generally follows the Kirkpatrick model, a commonly accepted training evaluation model endorsed by the Office of Personnel Management (OPM) and used throughout the federal government. Using the model, TSA currently administers training evaluation surveys and analyzes the responses for a select number of training courses. TSA's goal for conducting Kirkpatrick-style training evaluation is to answer questions such as how well a training course met a learner's needs; what knowledge and skill a course imparted to learners; what impact the training had on learner performance; and what the benefits of the training were. The Kirkpatrick model consists of a four-level approach for soliciting feedback from training course participants and evaluating the impact the training had on individual development, among other things. Table 2 provides a description of what each level within the Kirkpatrick model is to accomplish and TSA's progress in implementing the levels. According to TSA officials, the agency is developing a training evaluation program that will allow it to standardize and expand training evaluation efforts.
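For reference, the four levels of the published Kirkpatrick model are commonly labeled Reaction, Learning, Behavior, and Results. The sketch below pairs those labels with the evaluation questions quoted above; the pairing is our illustration of the model, not TSA documentation:

```python
# The four Kirkpatrick levels, paired with the evaluation questions the
# report says TSA aims to answer. Level names come from the published model;
# the one-to-one pairing with TSA's questions is an assumption.
KIRKPATRICK_LEVELS = {
    1: ("Reaction", "How well did the course meet the learner's needs?"),
    2: ("Learning", "What knowledge and skill did the course impart?"),
    3: ("Behavior", "What impact did the training have on learner performance?"),
    4: ("Results", "What were the benefits of the training?"),
}

for level, (name, question) in sorted(KIRKPATRICK_LEVELS.items()):
    print(f"Level {level} ({name}): {question}")
```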
In 2013, TSA assessed its training evaluation practices and found that existing training evaluation efforts did not meet TSA's needs because they lacked a formal, comprehensive approach to training evaluation. As a result, TSA identified the need to establish a formal training evaluation program, based on the Kirkpatrick model, to standardize its policy, processes, and procedures for evaluating training and has been working to establish the program since December 2013. TSA's Standards and Integration Office, within the Office of Training and Development, has developed a plan for implementing the new evaluation program, which is intended to support agency leadership in making decisions on how to use training resources. In addition, TSA expects to approve a Management Directive and Standard Operating Procedures for the training evaluation program by May 2016 to define the roles and responsibilities of the TSA offices running the program and to lay out the steps for analyzing and reporting the data collected from the evaluations. TSA officials stated the training evaluation plan will be subject to annual revision, and OTD will continue to update and review the plan. Standards and Integration Office officials are responsible for developing the training evaluations and collecting the evaluation data, while TSA's Training Operations Division will administer the evaluations. TSA's training evaluation plan describes the types of stakeholders involved in training evaluation, the communications strategy for sharing information on training across the agency, and the reporting requirements for training evaluation. For example, the plan identifies, in broad terms, which Kirkpatrick-level evaluation reports the Standards and Integration Office Evaluations Team will generate, who will receive the reports, and how they will be used. In one example, the reporting plan shows that Level 1, 2, and 3 reports should be sent to program managers to help them allocate screening resources and modify training. This program strategy, if followed, should position TSA to make data-based strategic decisions on the effectiveness of training courses once the training evaluation plan is fully implemented. For example, TSA plans to use training evaluation data to conduct curriculum reviews to improve training courses and programs. TSA is planning to implement its new training evaluation program in four phases. During the first phase, TSA plans to implement Level 1 and Level 3 training evaluations for its TSO Basic Training Program and for core operational courses, and to collect and analyze the data from these evaluations. In phase two, which is scheduled to begin in late 2016, TSA plans to expand Level 1 and Level 3 training evaluations to key courses in its National Training Plan. Once TSA has implemented these training evaluations for TSO Basic Training and for courses in the NTP, TSA plans to add selected Online Learning Center courses to its training evaluation program, which constitutes phase three. Finally, in phase four, TSA plans to evaluate whether it needs new training courses and, if so, to require an evaluation plan for each newly approved course. TSA uses a variety of methods to measure the performance of its TSOs, including the Annual Proficiency Review (APR)—an annual certification program to evaluate TSOs' skill in performing the various screening functions.
Portions of the APR are computer-based X-ray image tests done in a non-operational setting away from the active checkpoints, while the remaining tests are skills demonstrations performed in a realistic, but inactive, screening environment, such as an unused screening lane. Which components of the APR an individual TSO must take depend on whether that TSO is certified to perform passenger screening, baggage screening, or has dual certification to perform both functions. TSA has other testing programs that take place during active operations at the checkpoints to assess TSOs' level of adherence to screening SOPs and associated management directives. These include Threat Image Projection (TIP) image testing; the Aviation Screening Assessment Program (ASAP); and Presence, Advisement, Communication, and Execution (PACE) covert testing. Table 3 provides a summary of TSO performance measurement tests. In addition to the ASAP covert testing and other tests detailed in Table 3 for assessing the effectiveness of TSOs in carrying out screening functions, the TSA Office of Inspection (OOI) Special Operations Division (SOD) regularly conducts independent covert "red team" testing to measure the effectiveness of TSA security systems and identify vulnerabilities in transportation security as a whole. TSA develops and deploys red team tests based upon current intelligence on threats against transportation systems. In addition to assessing TSOs' ability to detect threat items, similar to ASAP testing, OOI's red team covert testing also assesses the effectiveness of other aspects of the screening operation—including the screening procedures followed by the TSOs and the technology they use at the checkpoints. TSA policy requires FSDs to provide remedial training to TSOs who either fail components of the APR (before being allowed to retake those portions) or do not maintain a minimum score on TIP image tests. Similarly, TSOs who fail ASAP or red team covert tests—that is, operational tests—must, in accordance with ATSA, complete remedial training before returning to their screening duties. TSA policy has not specifically required remedial training for TSOs who fail PACE tests; instead, each airport's FSD was expected to make an independent determination regarding any necessary retraining based on the PACE testing results. TSA data on the results of APR and PACE testing show that TSOs' pass rates on both of these tests varied by airport risk category over the time periods we reviewed. Specifically, from calendar year 2009 through 2015, the percentage of TSOs that passed their APR certification tests on the first attempt remained relatively constant, with a dip in calendar year 2010 followed by an increase of a similar percentage in 2015. According to TSA officials, this performance dip occurred because TSA ended the practice of using an outside contractor to evaluate TSOs during the APR tests. TSA officials explained that, in the first year after the transition (2010), the TSA personnel who took over the evaluation function allowed less flexibility in scoring the various APR component tests than had previously been permitted. According to TSA officials, the aforementioned changes to the APR testing program for 2015 (including practice runs prior to grading the practical skills evaluation portions of the test and dividing the testing by quarters) have led to improvement in the overall APR pass rates for 2015 compared to prior years.
TSA officials explained that they decided to re-examine how they conducted APR testing and implemented the resulting changes in response to feedback from TSOs that certain aspects of the testing created unnecessary anxiety that affected their performance. As described earlier, the APR consists of several component tests that evaluate specific TSO functions. As shown in Table 4, these component tests include X-ray image testing and passenger pat downs, which cover actions taken by TSOs in routine screening operations at the passenger and baggage screening checkpoints. In addition to the overall APR pass rates varying by airport security category, the results of these individual component tests also varied by the type of test administered during the 2009 to 2015 time frame. Scores for specific APR component tests are Sensitive Security Information and are not included in this report. In addition, due to issues with both the reliability and sensitivity of TIP and ASAP testing, we are not discussing the results of those testing programs in this report. The specific data reliability concerns related to these two testing programs are discussed later in this report. TSA also conducted PACE tests at category X, I, and II airports to determine TSOs' adherence to TSA management directives and SOPs in areas such as overall appearance and demeanor, properly communicating and providing instruction to passengers, and following proper procedures. TSOs' scores on PACE tests generally remained above 80 percent from fiscal year 2009 through fiscal year 2014. Also, based on our review of PACE test results from fiscal years 2012 through 2014, we determined that TSOs scored higher at smaller airports than at larger airports during this period, with the difference most pronounced between category X and category II airports. As noted previously, TSA uses APR testing results primarily to assess individual TSOs' skills in performing screening functions in order to re-certify them annually to continue participating in screening operations. According to TSA officials responsible for developing the annual NTP, in fiscal year 2014, TSA's Office of Training and Workforce Engagement (OTWE) also examined the results of specific APR component tests to inform the development of related courses for the NTP. Specifically, the officials stated that they reviewed the results of selected 2013 APR component tests—screening of individuals with disabilities (IWD), bag searches, and standard pat downs. In response, the TSA training officials said they added training to the fiscal year 2015 NTP to specifically address the deficiencies they identified in their review of the 2013 APR component tests. TSA policy requires airport personnel to manually download TIP testing results from their individual X-ray machines and upload the monthly data into TSA's national TIP results database. TSA headquarters personnel responsible for overseeing TIP stated that they use these uploaded results to determine whether any adjustments are needed to the quality or usefulness of the library of images maintained in the TIP system nationwide. For example, an image that TSOs identify with a high degree of accuracy might be removed and replaced with an image that presents more of a challenge. Conversely, an image that is frequently missed might be reassessed to determine whether it is unrealistically difficult and an adjustment needs to be made. However, TSA's database of TIP results is missing data for some airports for some years.
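The image-library feedback loop described above (retiring images that are nearly always identified and reassessing images that are frequently missed) can be expressed as a simple triage rule. In the Python sketch below, the two thresholds are invented for illustration; the report does not give TSA's actual cut-offs:

```python
# Illustrative thresholds only -- not TSA's actual values.
TOO_EASY = 0.95   # images nearly always identified may be retired
TOO_HARD = 0.20   # images frequently missed may be unrealistically difficult

def triage_tip_image(image_id: str, hits: int, presentations: int) -> str:
    """Classify a TIP image based on its nationwide identification rate."""
    if presentations == 0:
        return f"{image_id}: no data"
    rate = hits / presentations
    if rate >= TOO_EASY:
        return f"{image_id}: candidate for removal (replace with a harder image)"
    if rate <= TOO_HARD:
        return f"{image_id}: candidate for reassessment (may be unrealistic)"
    return f"{image_id}: retain"

print(triage_tip_image("IMG-0042", 970, 1000))  # candidate for removal
print(triage_tip_image("IMG-0107", 150, 1000))  # candidate for reassessment
```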
Additionally, TSA does not analyze the TIP data it collects on a nationwide basis to identify potential trends in TIP test scores or opportunities for improving screener performance. While TSA uses data submitted by the airports to update its TIP image library, it is doing so with incomplete data. As shown in figure 4, in each fiscal year from 2010 through 2013, some airports in all five airport risk categories did not report any TIP results for the entire year. During the fiscal year 2009 through 2014 time frame, fiscal year 2013 had the highest percentage of airports failing to report any TIP data, at nearly 14 percent. For category X and I airports, these results had generally improved by fiscal year 2014, with all of these airports reporting TIP data that year. However, the percentage of category III and IV airports that did not report TIP data generally increased during fiscal years 2013 and 2014 compared to prior years. TSA attributed this incomplete data to a transition to new X-ray screening equipment at certain airports from fiscal year 2009 through fiscal year 2012. Officials stated that, due to software compatibility issues with the new machines, TIP image capability was turned off for an extended period of time, meaning that TIP testing was not occurring on these machines and, therefore, TIP data were neither collected nor reported for these airports. TSA officials also told us that their older X-ray machines do not have the capability to automatically upload TIP data results to headquarters. As a result, some airports relying on these older X-ray machines were not able to submit TIP data automatically by electronic means and also did not submit the data manually. TSA officials reported that they do not have a process for determining, on a regular basis, whether TIP data have been submitted by all airports as required. TSA officials told us they are making efforts to install automatic uploading capabilities on all new machines, which they expect will help ensure that TIP data reporting is complete and timely. However, TSA has placed these efforts on hold because of security concerns stemming from the recent cybersecurity breaches at the Office of Personnel Management; TSA is reviewing its own cybersecurity efforts before moving forward with installation of automatic uploading capabilities on its X-ray machines. TSA officials also acknowledged that, in addition to the airports discussed above that did not report any TIP data for a year or more at a time, other airports may have reported only partial TIP results data during this same time frame. TSA officials stated that, in the nationwide results data provided to GAO, it would be difficult to ascertain how much data might be missing from individual airports (during the time period covered by our data) because the number and type of machines in use at those airports at any particular point in time could vary. TSA policy requires TSA officials at airports to report all of their TIP results data, on a monthly basis, to a national database. Further, FSDs must monitor TIP results monthly and require TSOs to attend remedial training if their threat identification rate falls below a target percentage. Standards for Internal Control in the Federal Government states that the information requirements needed to achieve the agency's objectives should be identified and communicated to management.
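Two of the controls discussed above, the monthly upload requirement and the remedial-training trigger, amount to straightforward checks over the TIP results data. The following sketch shows, under assumed column names and a placeholder threshold (the actual target percentage is Sensitive Security Information), what such checks might look like:

```python
import pandas as pd

TARGET_RATE = 0.50  # placeholder only; TSA's actual target is not public

def airports_missing_submissions(tip, all_airports, months):
    """Map each month to airports that uploaded no TIP data that month."""
    return {m: sorted(all_airports - set(tip.loc[tip["month"] == m, "airport"]))
            for m in months}

def tsos_needing_remediation(tip):
    """Flag TSOs whose threat identification rate falls below the target."""
    totals = tip.groupby("tso_id")[["identified", "shown"]].sum()
    totals["rate"] = totals["identified"] / totals["shown"]
    return totals[totals["rate"] < TARGET_RATE]

# Hypothetical monthly TIP records; the schema is illustrative.
tip = pd.DataFrame({"airport": ["ABC", "ABC", "DEF"],
                    "month": ["2014-01", "2014-02", "2014-01"],
                    "tso_id": [1, 1, 2],
                    "identified": [40, 45, 20],
                    "shown": [50, 50, 50]})
print(airports_missing_submissions(tip, {"ABC", "DEF"}, ["2014-01", "2014-02"]))
# {'2014-01': [], '2014-02': ['DEF']}  -- DEF skipped a month
print(tsos_needing_remediation(tip))   # TSO 2 at 0.40 falls below the target
```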
Specifically, management should obtain relevant data from reliable internal and external sources in a timely manner, based on the identified information requirements, so that it can carry out its internal control and other responsibilities. We acknowledge that because the full universe of X-ray machines, and their uploading capabilities, is difficult to determine on a daily basis, it is unlikely that TSA can fully confirm whether all of the TIP data across the nation are being submitted. However, our review of TIP data from fiscal year 2009 through 2014 found that up to 14 percent of airports did not submit any TIP data in one of the years reviewed (2013). Unless TSA takes steps to ensure that all airports submit complete, nationwide TIP data, TSA lacks assurance that the decisions it makes on the content of the TIP image library are fully informed and that TSOs are receiving remedial training from the TIP program, which has been developed to aid their ability to identify prohibited items. For example, while TSA is working to install automatic uploading capabilities on all X-ray machines, enforcing the requirement for airport officials to manually submit their TIP data would help ensure more complete data by which to assess and address TIP results. In addition, by not ensuring the collection of available TIP data, as required, TSA limits the effectiveness of any potential further use of TIP testing results to inform TSO training or testing programs (as described below). With regard to any potential further use of the TIP results, TSA headquarters officials told us that, to date, they have not systematically used the TIP results data to analyze national trends for purposes of informing future training programs or changes to screening processes or procedures. TSA officials said that they have not used national TIP data in this manner because the agency views TIP primarily as a tool for local FSDs to use in monitoring the training needs of, and determining areas of focus for, their individual TSOs. TSA officials at all 10 airports we contacted stated that their FSDs monitored TIP results and used TIP data to inform their decisions on remedial or other training needs of their TSOs. According to the TSA headquarters official responsible for overseeing the TIP program, TSA formed an Integrated Project Team in fiscal year 2015 specifically tasked with studying, developing, and implementing an effective nationwide strategy and process for using TIP testing to enhance TSOs' threat detection skills. In developing the planned strategy, this team is examining six focus areas—including the improvement of TIP capabilities for enhancing TSO effectiveness through improved remedial training and updating the TIP image library to be responsive to emerging threats. Because the team is newly formed and the bulk of its work remains to be done, it is unclear how or whether these six focus areas include plans to monitor, on a national basis, trends in the results of TIP testing that could help highlight areas for improvement to future image-based screening tests (such as the Image Mastery Assessment component of APR testing) or TSO training.
Standards for Internal Control in the Federal Government states that an agency's management should perform ongoing monitoring of its internal control system and associated operations, evaluate the results of those monitoring activities, and take corrective actions when warranted to achieve objectives and address risks. By not including analyses of TIP results data in nationwide efforts to inform either TSO training or other image-based testing outside of TIP, TSA is missing an opportunity to use this extensive, nationwide TSO performance data to enhance screening operations, in addition to lacking assurance that remedial training is occurring, as required, at all airports. In an effort to assess the quality of ASAP testing conducted by TSA field officials at commercial airports, TSA headquarters officials brought in a contractor in fiscal year 2015 to independently perform ASAP covert testing at 40 airports and thereby verify the validity of the testing results at those airports. The contractor personnel performed the same type of ASAP testing that had previously been performed by local TSA personnel at the airports. The contractor's initial round of covert testing was completed in October 2015, and TSA has analyzed the results of the contractor's tests and compared them to ASAP tests performed previously at the 40 airports. In doing this analysis, TSA found differences in the test results for most of the 40 airports when comparing the contractor's results with those of the local TSA testers for the same airports. According to TSA officials, TSOs at these 40 airports performed more poorly on the ASAP tests conducted by the contractor personnel than on the prior ASAP testing done by the local TSA personnel—indicating that the prior-year pass rates likely overstated the actual level of performance. Also, according to the officials, these differences in test results have led them to question the extent to which the ASAP tests accurately measure TSO performance. TSA is in the process of determining the root causes of these variances in testing results between the contractor and TSA personnel at the airports. According to TSA officials, initial results from the contractor's work seem to confirm concerns they held before the contractor testing was conducted that problems exist with successfully maintaining the covert nature of tests at airports. TSA officials explained that these prior concerns were based on the high detection rates at some airports when compared to other airports on the same tests. With respect to the difficulty in maintaining the covert nature of the tests, TSA officials at 7 of the 10 airports we contacted indicated challenges with obtaining anonymous role players to ensure that the ASAP tests remain covert. For example, TSA officials at one airport we visited reported having to rely on the availability of state and local government employees and U.S. Customs and Border Protection personnel to perform as role players. Officials at another, smaller airport we visited reported challenges finding role players among local TSA personnel whom the TSOs working the screening lanes would not recognize. As a result, they tend to use new hires and National Guard, Federal Aviation Administration, and Federal Bureau of Investigation personnel.
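The validation exercise described above reduces to comparing two sets of pass rates airport by airport and flagging large gaps. A hedged sketch follows, with invented airports, rates, and flag threshold; it illustrates the comparison, not TSA's actual analysis:

```python
import pandas as pd

def compare_asap_results(local: pd.Series, contractor: pd.Series,
                         flag_threshold: float = 10.0) -> pd.DataFrame:
    """Join pass rates (percent) by airport and flag large divergences.
    A large positive gap suggests local tests looked easier than the
    contractor's, the pattern the report describes."""
    df = pd.DataFrame({"local": local, "contractor": contractor}).dropna()
    df["gap"] = df["local"] - df["contractor"]
    df["flag"] = df["gap"].abs() >= flag_threshold
    return df.sort_values("gap", ascending=False)

# Hypothetical per-airport ASAP pass rates from the two testing sources.
local = pd.Series({"ABC": 92.0, "DEF": 88.0, "GHI": 75.0})
contractor = pd.Series({"ABC": 70.0, "DEF": 84.0, "GHI": 73.0})
print(compare_asap_results(local, contractor))  # ABC's 22-point gap is flagged
```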
TSA officials stated that proposed changes to the ASAP SOP will give FSDs greater authority to use role players whom they have vetted and accepted responsibility for, beyond state, local, and federal government officials. In an effort to address concerns stemming from its initial analysis of the contractor's test results, TSA briefed its FSDs on these results and stated that it expects the FSDs to use this information as input in overseeing their local ASAP testing programs. In addition, TSA has extended the work of the contractor by 6 months in order to conduct further testing that it can compare to local ASAP test results going forward. TSA stated it will continue to analyze the contractor's results and compare them against the ongoing results from local ASAP testing overseen by the FSDs to determine whether the previously identified variances in results are continuing. TSA officials stated that the findings of the contractor during the 6-month extension period indicated that the variances previously identified between the contractor testing and the local ASAP testing at the airports have been reduced. TSA headquarters officials attributed the reduction in variance to more frequent and improved communication with the FSDs and those responsible for conducting the local ASAP tests—specifically with regard to the contractor's test findings and potential corrective actions to improve the local ASAP testing programs. TSA headquarters officials added that, through these measures, they are improving the accountability of the local FSDs and their staff for ensuring the quality and reliability of local ASAP testing going forward. TSA officials added that, after the start of the contractor's work, they had initiated an effort to improve aspects of the ASAP testing program that will include better identification of root causes for ASAP testing failures, which they expect will improve the development of associated corrective actions moving forward. This effort is still ongoing and also includes merging aspects of PACE testing into the ASAP program to help identify instances where a lack of standardization in the application of specific screening SOPs (which PACE testing is designed to measure) may negatively affect the screening process. Regarding TSA's efforts to better identify root causes of ASAP failures to improve the program, TSA has developed a data collection tool that officials said would support these efforts by gathering critical data from test failures, which TSA will analyze to determine the root causes of each failure. According to TSA officials, the tool has recently been developed and field tested and is pending initial rollout. In addition to the ASAP/PACE merger, program enhancements related to the identification of root causes, and ongoing contractor ASAP testing, TSA officials are adding ASAP headquarters testing to supplement the ASAP testing that will continue to be performed by TSA personnel in the field (referred to as the Field Evaluation Team, or FET). TSA stated this new headquarters-based testing effort, referred to as the Headquarters Evaluation Team (HET), will be formed from the former PACE evaluation teams. According to TSA, these headquarters-based covert testing teams will perform the quality assurance and validation activities for ASAP that are currently being performed by the contract test teams.
In addition, TSA expects that the contractor and the new headquarters ASAP testing program will provide assurance that the ASAP testing still being conducted by TSA personnel at the airports is accurate. However, field ASAP testing will still account for the majority of TSA's ASAP covert tests. TSA officials stated they expect that, once the HET program is initiated, the contractor testing will be discontinued. Also, according to TSA, the newly developed data collection tool will be used by all of the ASAP testing groups moving forward (i.e., FET, HET, and the contract test teams) to determine the root causes of test failures, which will better inform TSA's corrective actions. TSA conducts ASAP testing in 6-month increments and produces a summary report of results across all airports, complete with recommendations, at the end of each 6-month cycle. In these reports, TSA details the analysis it has performed on the nationwide results of ASAP testing, which shows how TSOs have performed in their duties at the various decision points on the passenger and checked baggage screening lanes. This analysis includes failure rates at these various points, reasons for the failures, and, where appropriate, related recommendations to improve TSO performance. These recommendations may include, among other things, additional training for certain points in the screening process and further testing in certain areas. According to TSA officials, they have recently moved to more frequent weekly and monthly reporting of ASAP results to the field as part of the aforementioned effort to improve communication with FSDs and their staff with regard to findings and trends from the ASAP testing results—including results from the ASAP contractor. TSA headquarters does not require FSDs to implement recommendations from the 6-month cycle reports, nor does it track whether the recommendations have been implemented or, conversely, the reasons for not implementing them. TSA officials stated that the various recommendations cited in the cycle reports are strictly for the consideration of FSDs in the field and that implementation is not mandatory. TSA officials also stated that the ASAP cycle reports are intended to analyze nationwide trends in TSO performance and identify causes of potential deficiencies. TSA invests time and resources to produce these reports—which include test results and corrective actions—on a routine basis and disseminates the information to airport FSDs. Given this investment, tracking implementation of the recommendations detailed in those reports, in addition to any recommendations that may be present in the more frequent weekly or monthly reporting, would help TSA ensure that corrective actions are being taken at airports nationwide to improve TSO performance, which the agency has identified as an area of concern based on its nationwide trend analysis. Moreover, tracking the implementation of its recommendations, including the extent to which identified corrective actions are improving future TSO performance and test results, would help TSA better determine whether its implemented recommendations are leading to improvements in screening operations and appropriately addressing the root causes identified for previous test failures. Standards for Internal Control in the Federal Government requires that internal controls be designed to ensure that ongoing monitoring occurs during the course of normal operations.
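As the report notes, TSA does not track whether cycle-report recommendations are implemented. The sketch below illustrates the minimal record such tracking would require; the fields, identifiers, and example values are hypothetical:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative only: a minimal record for tracking whether an airport has
# implemented an ASAP cycle-report recommendation, or why it has not.
@dataclass
class AsapRecommendation:
    rec_id: str
    airport: str
    description: str
    issued: date
    implemented: Optional[date] = None
    declined_reason: Optional[str] = None

    @property
    def status(self) -> str:
        if self.implemented:
            return "implemented"
        if self.declined_reason:
            return f"not implemented: {self.declined_reason}"
        return "open"

rec = AsapRecommendation("FY16-C1-007", "ABC",
                         "Refresher training on alarm resolution pat downs",
                         issued=date(2016, 3, 1))
print(rec.status)  # "open" until the airport reports implementation
```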
Specifically, internal controls direct managers to (1) promptly evaluate and resolve findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies' operations; (2) determine proper actions in response to findings and recommendations from audits and reviews; and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management's attention. We recognize the efforts TSA has recently initiated to improve the accuracy and reliability of ASAP testing. However, without assurance that recommendations for corrective actions based on the root causes identified in ASAP testing will be fully implemented—where appropriate—nationwide, TSA will be limited in its ability to take full advantage of any findings from the program. Training TSOs and obtaining an accurate understanding of their effectiveness in detecting prohibited items on passengers, and in their baggage, can have a critical impact on the security of millions of air travelers each year. TSA has put an extensive program in place to train its TSOs to perform these critical screening functions and has responded to recent covert test findings of the DHS-OIG by implementing a retraining program for all its screening officers to address issues identified in the testing. TSA has also begun implementing a plan to expand evaluations of its TSO training efforts in order to better inform future management decisions. In addition to its training and evaluation efforts, TSA conducts wide-ranging covert testing and annual certification testing of its TSOs. While we commend TSA's recent efforts to re-examine its testing programs, such as steps to improve the accuracy and reliability of ASAP testing, the agency could further enhance its testing programs to more accurately gauge the true level of TSO performance and ensure continuing improvement in screening operations. For example, enforcing its requirement that all airports submit TIP results data would help TSA continually improve the test. Further, the agency could use these data on a nationwide level to inform and potentially improve training of TSOs in screening passenger carry-on baggage for prohibited items. In addition, given that TSA uses ASAP covert testing results to assess whether TSOs follow proper screening procedures and successfully detect prohibited items, ensuring that any recommendations stemming from ASAP testing failures are tracked and implemented, where appropriate, would further support the program's objective to improve the performance and quality of security screening. To improve TSA's ability to take full advantage of testing results to inform and potentially improve screening operations, we recommend that the Secretary of the Department of Homeland Security direct the Administrator of TSA to take the following three actions: (1) ensure that TSA officials at individual airports submit complete TIP results to the TSA national database as required, including manually submitting data when automated uploading is not available; (2) conduct analysis of national TIP data for trends that could inform training needs and improve future training and TSO performance assessments; and (3) track implementation by airports of ASAP recommendations to ensure that corrective actions identified through ASAP testing are being applied. We provided a draft of the sensitive version of this report to DHS for review and comment.
DHS provided written comments, which are noted below and reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. DHS concurred with all three recommendations in the report and described actions underway or planned to address them. With regard to the first recommendation, that TSA ensure TIP data are submitted to the TSA national database as required, DHS concurred and stated that TSA is working to establish a tracking system that will automatically identify and highlight specific airports that may be missing from the database. The automated system will allow TSA to establish an internal webpage that will automatically generate a list of airports that have not submitted TIP data as required, which managers will be able to use to follow up with Federal Security Directors to ensure TIP data are submitted. The agency stated that the automated process is dependent on the development of an information technology (IT) tool, which it anticipates will be piloted by May 31, 2017. In the interim, while this IT tool is being developed, TSA officials will monitor compliance with TIP reporting requirements and follow up with those airports missing TIP data, including identifying reasons for each airport's noncompliance. TSA is also drafting a revised TIP Operations Directive that is intended to provide further guidance and direction to the field on TIP requirements. TSA estimates it will complete these actions to address the first recommendation by September 30, 2016. With regard to the second recommendation, to conduct analysis of national TIP data for trends that could inform training needs and improve future TSO performance, DHS concurred and detailed several actions to address the recommendation. TSA's Office of Training and Development (OTD) has begun to update TIP remediation requirements and to work with airports that have achieved the highest TIP scores to identify any best practices that could be shared with other airports. OTD plans to work with airports that struggle with TIP to gather information about their oversight and remediation programs, with the goal of using the highest and lowest scoring airports to assess the effect of oversight and remediation on performance. TSA plans to analyze data across the network to determine what remediation training best supports improvements in TIP scores. TSA is also developing a process to analyze specific data connected to threat categories of TIP images, which will allow officials to identify the specific types of threats that are presenting challenges to the workforce; OTD will then be able to identify what additional training should be developed to improve performance for a particular threat category. TSA plans to assess TIP training and assessments over the next 12 months to determine whether performance improvement has been realized and, if so, what contributed to the improvement. OTD is working with a contractor to design a report intended to capture officer performance results connected to specific types of TIP images to better drive training content and improve performance. Finally, TSA's Office of Security Capabilities is working with both OTD and the Office of Security Operations (OSO) to capture TIP data for the development of threat categories to assess individual TSOs' performance, and is asking TSA's Office of Acquisitions for a contract modification that will provide for more frequent report updates. TSA estimates it will complete these actions to address the second recommendation by May 31, 2017.
With regard to the third recommendation, to track implementation by airports of ASAP recommendations to ensure that corrective actions are being applied, DHS concurred and stated that TSA has taken actions to formalize ASAP reporting. For example, TSA reported developing a standard format for Corrective Action Plans (CAP), which are submitted and implemented after an ASAP failure. This should help TSA track corrective actions and their effectiveness in addressing findings from ASAP tests. Further, TSA plans to conduct reassessments within 30 to 60 days after a Corrective Action Plan has been submitted to ensure corrective actions have been implemented. TSA also reported that the standard format for CAPs deliberately maps corrective actions to their identified issues. According to TSA, as of August 2016, OSO had conducted more than 55 post-Headquarters Evaluation Team testing calls and more than 50 effectiveness calls to review CAPs. OSO has extracted common themes from high-performing airports and distributed this "best practice" information to all of its regional directors and federal security directors. TSA also stated that OSO is reassessing previously tested airports to ensure that corrective actions are implemented and that detection performance is improving at or above the national average. These efforts by TSA to ensure that corrective actions identified through ASAP testing are being applied, if continued in future testing cycles, should address the intent of this recommendation. The completed actions for the third recommendation, along with the planned actions for the first and second recommendations, if fully implemented, should address the intent of the three recommendations contained in this report. We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, the Attorney General of the United States, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or groverj@gao.gov. Key contributors to this report are listed in appendix III. This report answers the following questions: 1. How does the Transportation Security Administration (TSA) train Transportation Security Officers (TSO), and to what extent does TSA evaluate the training? 2. How does TSA measure the performance of TSOs, and what do the performance data show? 3. To what extent does TSA use TSO performance data to enhance TSO performance? To address our first objective, regarding how TSA trains TSOs and to what extent TSA evaluates the training, we reviewed relevant TSA policies and procedures for training, including management directives and the National Training Plan (NTP), which prescribes the annual training curriculum for TSOs. We also reviewed documentation on training requirements, including those contained in the Aviation and Transportation Security Act, as well as documents on TSA's training development and completion. We interviewed TSA headquarters officials responsible for developing and monitoring TSO training, including officials from TSA's Office of Training and Development (OTD), Office of Human Capital (OHC), and Office of Security Operations (OSO).
Further, we interviewed staff at a total of 10 airports—including Federal Security Directors (FSD), transportation security managers, instructors, training managers, TSOs, and other TSA staff, such as explosives experts—to determine how training is carried out in the field and to learn what TSA employees in the field thought about training. Specifically, we conducted site visits to six airports, including three airports in category X and one airport each in categories I, II, and III. Further, we conducted phone interviews with officials at one airport each in categories I, II, III, and IV to obtain additional perspectives on how airport officials carry out training requirements locally—particularly at airports with smaller numbers of flights and passenger boardings. We selected the airports to visit in person based on factors such as airport category, geographic proximity to one another, and our analysis of the airports' TSO performance on annual screening certification tests from calendar years 2009 through 2014. For example, we calculated the average first-time pass rates for screeners taking their Annual Proficiency Review (APR) exams for each airport in each calendar year from 2009 through 2014 and sorted the scores by airport risk category. APR assessments are annual certification tests TSOs must pass to remain employed as screeners. We then selected at least one airport from the high, low, and middle of the performance distribution and ensured that we covered at least one airport in every risk category. To assess the extent to which TSA evaluates TSO training, we reviewed TSA documents used for evaluating training courses, including end-of-course surveys administered to learners. Further, we reviewed draft documents on TSA's training evaluation plan, including a draft management directive and draft standard operating procedures for evaluating training courses. We compared the training evaluation documentation to the Kirkpatrick model for training evaluation, which is the model TSA uses as guidance for its evaluations of TSO training. We also interviewed TSA headquarters officials responsible for evaluating TSO training and for developing and implementing the TSA training evaluation plan. For example, we interviewed TSA officials from OTD, OSO, and OHC to determine the extent to which they evaluated training courses and used this information to refine future training. Further, we interviewed management officials at each of the airports we visited to further understand how, if at all, training at individual airports is evaluated locally. For our second objective, to determine how TSA measures the performance of TSOs and what the performance data show, we analyzed data from four different performance evaluation programs, and we interviewed TSA officials responsible for collecting and analyzing the data. First, we reviewed and analyzed data on APRs, including APR pass rates from calendar year 2009 (the first year for which data were available) through 2015. For example, we calculated the average first-time pass rate for screeners taking the APR assessments for each airport and sorted the results by year, airport category, and individual APR assessment. See Table 4 for a description of the APR assessments we analyzed.
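The pass-rate calculation and site-selection steps described above can be summarized in a short sketch. Column names and the sorting choices are our illustration of the described method, not GAO's actual code:

```python
import pandas as pd

# Hypothetical record layout: one row per TSO per year, with a boolean
# passed_first_try field; the schema is assumed for illustration.
def first_time_pass_rates(apr: pd.DataFrame) -> pd.DataFrame:
    """Average first-attempt APR pass rate per airport per calendar year,
    sorted by airport risk category, mirroring the described calculation."""
    rates = (apr.groupby(["category", "airport", "year"])["passed_first_try"]
                .mean().rename("pass_rate").reset_index())
    return rates.sort_values(["category", "year", "pass_rate"])

def high_low_middle(rates: pd.DataFrame, year: int) -> list:
    """Pick one airport from the bottom, middle, and top of the pass-rate
    distribution for a given year, as in the site-selection step."""
    r = (rates[rates["year"] == year]
         .sort_values("pass_rate").reset_index(drop=True))
    return [r.loc[i, "airport"] for i in (0, len(r) // 2, len(r) - 1)]
```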
We then conducted a trend analysis to observe overall APR first-time pass rates over time, and we compared APR first-time pass rates for screeners across airport risk categories to determine whether there were any differences across categories. In addition, we interviewed officials in charge of the APR testing process, including officials from OSO, OHC, and OTD at TSA headquarters, as well as local airport officials in charge of overseeing the tests. Second, we reviewed Threat Image Projection (TIP) system data from fiscal year 2009 through fiscal year 2014, the last year available at the time of our data request. The TIP system is intended to help TSA measure whether operators correctly identify threat items that are electronically superimposed on the X-ray monitor during the screening of passenger property at the checkpoint. Specifically, we analyzed the average percentage of TIP images correctly identified by screeners at airports in different categories over time to determine whether there were differences in average TIP scores between categories. Further, we interviewed TSA officials from the Office of Security Capabilities in charge of the TIP image library to understand how TIP data are recorded and collected and how TIP images are selected for use. Third, we reviewed data from TSA's Presence, Advisement, Communication, and Execution (PACE) testing program, which TSA uses to measure whether TSOs are adhering to standard operating procedures while screening at the passenger checkpoint. We reviewed PACE data from calendar year 2011, the year the program started, through 2014, and we charted PACE scores by airport category over time. We also interviewed appropriate TSA officials regarding the PACE program to understand how the program worked and how the scores were calculated. Finally, we analyzed data from the Aviation Screening Assessment Program (ASAP), a covert testing program used to evaluate screeners' ability to properly follow TSA's standard operating procedures for screening and to keep prohibited items from being taken through the checkpoint. We analyzed ASAP data from fiscal years 2013 through 2015 because TSA made adjustments to the ASAP testing program in 2013, and therefore the pre-2013 testing data are not comparable to the 2013 through 2015 data. Results of ASAP testing are classified at the secret level and are not included in this report. Additionally, we interviewed TSA officials from OSO responsible for the ASAP program to gain their perspectives on the program. We also interviewed officials responsible for conducting ASAP tests at each of the airports we visited to understand how the tests worked in practice and what happened after a test was passed or failed, and to learn about any challenges officials faced in running the tests. We assessed the reliability of the APR, TIP, PACE, and ASAP data by (1) interviewing agency officials responsible for maintaining the data about how the data were collected and entered into the respective databases, how the data were used, and what procedures were in place to ensure the data were complete; and (2) testing the data for missing data, duplicates, or entries that otherwise appeared to be unusual. We found the APR and PACE data to be sufficiently reliable to present in this report. However, we found that the TIP data were incomplete for the years we were analyzing and therefore not sufficiently reliable to include in this report.
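The reliability tests described in step (2), checks for missing data, duplicates, and unusual entries, might look like the following sketch; the data frame, column names, and the assumed 0-100 score range are illustrative only:

```python
import pandas as pd

def reliability_checks(df: pd.DataFrame, key_cols: list, score_col: str) -> dict:
    """Count rows with missing values, duplicate records on the key columns,
    and scores outside an assumed 0-100 percent range."""
    return {
        "missing": int(df[key_cols + [score_col]].isna().any(axis=1).sum()),
        "duplicates": int(df.duplicated(subset=key_cols).sum()),
        "out_of_range": int((~df[score_col].between(0, 100)).sum()),
    }

# Example with hypothetical PACE-style records keyed by airport and test date:
pace = pd.DataFrame({"airport": ["ABC", "ABC", "DEF"],
                     "test_date": ["2013-05-01", "2013-05-01", "2013-06-10"],
                     "score": [88.0, 88.0, 104.0]})
print(reliability_checks(pace, ["airport", "test_date"], "score"))
# {'missing': 0, 'duplicates': 1, 'out_of_range': 1}
```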
Specifically, TSA officials in charge of the TIP data stated they were uncertain how complete the TIP data were nationwide at any point in time, but added that the data are likely never fully complete. Officials attributed this to two reasons. First, when TSA first deployed new X-ray machines between 2009 and 2012, the TIP software was not activated on them due to technical issues. As a result, no TIP data were reported for those machines over this period. Second, the newer X-ray machines coming online are equipped to upload TIP data to TSA headquarters automatically over a network. However, not all machines in the field are equipped to do this, and TSA temporarily stopped implementation of the automatic upload capability on the newer machines in 2015 because of network security concerns. Instead, TSA personnel must manually download the TIP data for these machines on a monthly basis, as they do for older machines without the automatic upload capability. As a result, TSA headquarters has not received TIP data from every airport for every month over the time period of our review, leaving the database incomplete. TSA could not provide us with information on the extent of the missing data, and we were not able to determine, based on the data provided, how many X-ray machines were unaccounted for between 2009 and 2014. For our third objective, to determine the extent to which TSA uses TSO performance data to enhance screening performance, we reviewed TSA's processes and actions for using screener testing results to inform its operations and training, and we assessed these processes against standards in Standards for Internal Control in the Federal Government. Further, we interviewed program officials from several offices at TSA headquarters about how they analyze performance data, such as APR, TIP, and PACE data, and how, if at all, they use the results to adjust training or take other actions. Similarly, we interviewed officials at each of the airports we visited about how they collected, reported, monitored, and used the performance data. We conducted this performance audit from February 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Jennifer Grover (202) 512-7141 or groverj@gao.gov. In addition to the contact named above, Christopher E. Ferencik, Assistant Director; Mike Harmond, Analyst in Charge; and Brendan Kretzschmar made key contributions to this report. Also contributing to the report were Eric D. Hauswirth, Susan Hsu, Thomas F. Lombardi, Heidi Nielson, Ying Long, Amanda Miller, Ruben Montes de Oca, Dae Park, and Christine San.
TSA trains TSOs to screen passengers and baggage for items that could pose a threat at nearly 440 airports across the country. One way TSA and the Department of Homeland Security (DHS) Office of Inspector General (OIG) measure TSO performance is through covert testing of TSA screening operations. In response to the findings from recent DHS OIG covert testing, the Secretary of DHS directed TSA in June 2015 to conduct further training for all TSOs and supervisors. GAO was asked to review TSA's efforts to train and test TSOs. This report examines (1) how TSA trains TSOs and evaluates the training; (2) how TSA measures TSO performance and what the data show; and (3) to what extent TSA uses TSO performance data to enhance TSO performance. GAO analyzed TSO performance data from 2009 through 2015, reviewed documents regarding TSA training and testing, and interviewed TSA officials at headquarters and 10 airports. GAO selected these airports based on airport risk categories, among other things. Information from these airports was not generalizable, but provided insights into TSO training and testing. This is a public version of a sensitive report that GAO issued in May 2016. The Transportation Security Administration (TSA) uses a variety of programs to train and evaluate Transportation Security Officers (TSO) who are responsible for screening passengers and baggage for threats to aviation security. For example, by law, TSOs must complete 40 hours of classroom training, 60 hours of on-the-job training, and certification tests before performing screening. Once certified, TSA requires TSOs to complete annual training under the National Training Plan. Since 2013, TSA has been phasing in a program to evaluate its training to inform use of training resources. TSA expects that this evaluation program should help the agency determine how well training meets TSOs' needs, provides them with needed knowledge and skill, and has an impact on their performance. TSA measures TSO performance in various ways, including (1) annual proficiency reviews, which certify TSOs by evaluating their ability to carry out screening standard operating procedures; (2) assessments of X-ray machine operators' ability to identify prohibited items by displaying fictional threat items, such as guns or explosives, onto X-ray images of actual baggage; and (3) covert testing programs that use role players to take prohibited items through screening checkpoints to test TSOs or determine how TSOs interact with the public, among other things. Over the time periods GAO reviewed, TSA data on the results of annual proficiency reviews and covert testing on how TSOs interact with the public show that TSOs' scores (pass rates) varied by airport security risk category. GAO is not providing TSOs' scores for annual proficiency reviews, X-ray machine operator assessments, or covert testing for prohibited items at checkpoints in this report due to the sensitive or classified nature of the data or the data reliability concerns discussed below. TSA has made use of annual proficiency review data to enhance TSO training, but its use of other testing data is constrained by incomplete and unreliable data. Specifically, due to software compatibility issues and a lack of automatic uploading capability, airport reporting on assessments of X-ray machine operators was not complete, as required by TSA policy, for each year of data GAO examined (fiscal years 2009 through 2014), limiting their reliability and use to enhance TSO training. 
In addition, for the data it does collect on these assessments, TSA has not taken steps to analyze these data nationwide, which could help the agency identify potential trends or opportunities to improve TSO performance. Furthermore, in 2015, TSA determined that prior year results of one of its two covert testing programs to assess TSOs' ability to identify prohibited items at checkpoints were unreliable, resulting in pass rates that were likely higher than actual TSO performance. TSA has since taken steps to enhance reliability by hiring a contractor to perform independent validation testing, among other things. Finally, TSA does not require or track implementation by field personnel of national recommendations related to these covert tests, thereby limiting the agency's ability to take advantage of the corrective actions identified from the tests. GAO recommends that TSA (1) collect complete data on assessments of X-ray machine operators, (2) analyze these data nationally for opportunities to enhance TSO performance, and (3) track the implementation of covert testing recommendations. TSA concurred with the recommendations.
The Federal Acquisition Regulation is the primary regulation for use by all federal agencies in acquiring supplies and services. Parts 46 and 52, which set forth quality assurance requirements and contract clauses, respectively, provide guidance in determining quality assurance responsibilities. According to the Federal Acquisition Regulation, government contract quality assurance is defined as the various functions, including inspections, performed by the government to determine whether a contractor has fulfilled the contract obligations pertaining to quality and quantity. These inspections can be conducted either at the contractor’s place of manufacture and production (source) or at the receiving location for the parts (destination).

DCMA is DOD’s primary organization for performing quality assurance oversight for contracts, and this oversight responsibility typically covers only prime contractors. DCMA executes its quality assurance responsibility primarily through its source inspection program. Source inspections are inspections at the point where goods are manufactured or assembled. There are three types of source inspections: physical inspection; contractor process reviews; and kind, count, and condition. Physical inspection involves inspecting parts by comparing them to a specification, drawing, or other instruction. Contractor process reviews are inspections of processes and procedures intended to establish confidence that the procured parts will produce a desired outcome. Kind, count, and condition inspections are intended to visually identify a part, verify its quantity, and examine its exterior appearance to determine whether it visually meets contract specifications. During fiscal year 2003, DCMA was responsible for government source inspection for approximately 273,000 contracts.

The Federal Acquisition Regulation requires that contracts include inspection and other quality requirements that will protect the interest of the government. It also provides guidance in establishing which clauses to include in the various types of contracts. The clauses included in each contract dictate the type and level of quality assurance oversight to be performed. DCMA quality assurance specialists are expected to follow the inspection and acceptance provisions in each contract in determining whether they are required to perform quality assurance oversight. If the contract states that inspection and acceptance will be at destination, then DCMA does not perform any quality assurance oversight; end users within military units, such as Army battalions and Air Force squadrons, perform inspection and acceptance for these contracts.

Questions have been raised about the efficiency of DCMA’s quality assurance oversight program. In October 2003, the DOD Inspector General reported that of the 518 contracts requiring DCMA source inspection that it reviewed, at least 172 of the inspections provided either nominal or no value to the DOD quality assurance process. The DOD Inspector General also pointed out concerns in the DOD quality assurance program, including (1) ambiguity in the level and extent of requested source inspections; (2) inconsistent and unclear application of items defined as critical or having a critical application; (3) inconsistent implementation of inspection procedures for items considered commercial, off-the-shelf; and (4) arbitrary and inconsistent inspection procedures for items purchased from distributors.
In our review of 15 contracts awarded to 11 contractors, DCMA provided quality assurance oversight and enforcement over the spare parts prime contractors. DCMA used three types of inspections to perform quality assurance oversight over the contractors, and it adhered to the Federal Acquisition Regulation, contract quality requirements, and DCMA guidance in doing so. The Federal Acquisition Regulation clauses included in the contracts determine whether DCMA performs quality assurance oversight; DCMA provides oversight for those contracts requiring inspection at the place of manufacture of the parts. Enforcement of spare parts quality and safety during the production process was achieved through the issuance of corrective action requests.

DCMA quality assurance specialists followed established policies, procedures, and guidance in performing their oversight over the spare parts prime contractors in our review. As shown in table 1, the quality assurance specialists performed one or a combination of three types of inspections over prime contractors: physical inspection to measure the dimensions of parts or to test the parts; process reviews; and observation of the kind, count, and condition of the parts. At 7 of the 11 contractor locations, DCMA quality assurance specialists performed both process reviews and physical inspections.

For physical inspection of spare parts, the quality assurance specialist selects a sample of the parts from various production runs and inspects them by comparing the parts to a specification, drawing, or other instruction. For example, at one location, the prime contractor had procured the part from a subcontractor and required the subcontractor to provide test data related to the part. The quality assurance specialist reviewed the test data and compared it to the part specifications. The quality assurance specialist also inspected the number imprinted on the parts, the exterior painting, and the workmanship of the received parts to ensure they complied with contract specifications.

The second type of inspection involves evaluating the prime contractors’ processes to determine compliance with established contract requirements and production procedures. During these inspections the quality assurance specialist assesses the prime contractor’s processes and production line procedures in relation to established industry practices and provides the contractor an early opportunity to make corrections or improvements, if necessary. For example, at one prime contractor location, the quality assurance specialist monitored the prime contractor’s key processes during production, including the cleaning, painting, welding, and final quality inspection of the product. To help ensure a comprehensive quality assurance check, the quality assurance specialist performed his daily quality checks at different phases of the production process. The quality assurance specialist also reviewed the prime contractor’s procedures for selecting its subcontractors to determine if the subcontractors were certified in accordance with the applicable industry standards.

The third type of inspection consists of observing the kind, count, and condition of the parts. Observation of the kind of part includes visual identification of at least one part for each different part being procured under the contract and verification of the part number against the number required in the contract.
Counting the parts involves visual confirmation of the contents of one package per line item and counting the number of packages received. The quality assurance specialists verify the physical appearance of the parts to assess their condition. For example, at one location, the quality assurance specialist performed a kind, count, and condition inspection to confirm that the contractor had the correct part by verifying the part number, checking to ensure the contractor had the proper quantity of parts, and inspecting the outward appearance of the parts.

DCMA’s quality assurance oversight over the 11 spare parts prime contractors in our review was in accordance with the Federal Acquisition Regulation, contract quality requirements, and DCMA’s One Book guidance. Each contract we reviewed designated the location of inspection and acceptance by contract line item. Contracts designated for inspection and acceptance at the contractor’s place of manufacture and production received DCMA quality assurance oversight. When contracts are designated for inspection and acceptance at the receiving location for the parts, the end user is responsible for inspecting the procured part, and the DCMA quality assurance specialist typically does not get involved with the contract. Of the 15 contracts we reviewed, DCMA provided quality assurance oversight for the 13 contracts that were designated for inspection and acceptance at source. The other two contracts were designated for inspection and acceptance at destination and did not require DCMA quality assurance oversight.

In accordance with the One Book guidance, quality assurance specialists performed inspections and acceptance for their customers to ensure supplies were in compliance with contract requirements. According to a DCMA official, this guidance allows DCMA quality assurance specialists flexibility in providing quality assurance oversight over prime contractors. For example, the guidance does not include standard requirements for the number of tests, site visits, or inspections that the DCMA quality assurance specialists should perform while providing quality assurance oversight. When contracts were designated for inspection and acceptance at source, quality assurance specialists typically reviewed contract requirements, assessed the contractor’s risk of producing nonconforming parts, determined what needed to be done to mitigate the risk, and applied quality assurance oversight, including inspections, based on the contractor’s risk level.

During the production process, when a prime contractor’s processes or spare parts did not meet contract requirements, DCMA used an enforcement system that involved issuing requests for corrective action by the prime contractor. According to the Federal Acquisition Regulation, contractors must be given an opportunity to correct or replace nonconforming supplies. Contractors are notified about nonconformance through corrective action requests issued by DCMA or product quality deficiency reports issued by the end user. When there is contractual nonconformance during the production process, DCMA may issue the prime contractor a corrective action request to formally communicate the deficiency and request corrective action on the part of the prime contractor.
When the prime contractor does not take corrective actions, contractual remedies available to procuring contracting officers include suspension of progress payments, termination for default, and penalties such as suspension or debarment from holding contracts with the government. Only 2 of the 11 prime contractors that we reviewed received corrective action requests related to the contracts in our review. According to DCMA officials, the remaining contractors did not warrant corrective action requests related to the contracts we reviewed. Our review of DCMA files related to these contractors also did not identify any need for corrective actions by the prime contractors.

The corrective actions taken by the two contractors involved correcting or changing their production processes and demonstrating how the processes would be improved to prevent further instances of nonconformance. For example, DCMA issued a corrective action request to a prime contractor identifying loose insulation, an inactive gear, poor painting quality, and parts that did not meet surface finish specification requirements as nonconforming items. The prime contractor corrected the nonconformance, and DCMA accepted the product. To prevent recurrence of the deficiencies, the prime contractor reported that it had taken the following actions: (1) discontinued the use of material that caused the loose insulation, (2) instructed operators to ensure that gears were properly adjusted, (3) arranged a meeting with all contractor paint personnel advising them on the importance of attention to detail, and (4) agreed to provide its quality assurance representatives with acceptable and unacceptable finish samples as visual standards to meet customer expectations.

For the other prime contractor, DCMA issued a corrective action request because the contractor used the wrong procedure to receive approval for a major waiver from contract requirements. A request for a major waiver was supposed to go through an array of signatures and DCMA approvals, whereas a minor waiver could be submitted and approved electronically, requiring fewer signatures and approvals. However, the prime contractor downgraded the waiver request from major to minor without the appropriate concurrence and approval of DCMA and submitted the request through the electronic system. According to the prime contractor, its personnel had conflicting procedures describing how to process waivers. To prevent recurrence of incorrect processing of major waivers, the contractor planned to review its current procedures for processing waivers. The contractor also planned to train its quality assurance staff to ensure they understand the correct procedures for processing waivers.

The prime contractors in our review adhered to industry standards in providing quality assurance oversight over their subcontractors’ work. Consistent with industry standards, prime contractors used at least two and up to four methods to provide quality assurance oversight over their subcontractors, as shown in table 2. The primary methods of oversight used were evaluating subcontractors for placement on an Approved Supplier List and requiring certifications of parts and processes. Industry standards, such as International Organization for Standardization 9001 and Aerospace Standard 9100, require that an organization have a quality management system in place to ensure that it will produce high-quality products that will serve their intended purpose.
The standards are broad and are intended to be applicable to all organizations, regardless of type, size, and product provided. All of the prime contractors that we reviewed evaluated potential subcontractors prior to contract award for placement on their Approved Supplier List. The purpose of establishing an Approved Supplier List is to identify qualified subcontractors capable of producing needed parts or processes in accordance with industry standards and contractual specifications. When evaluating a potential subcontractor for inclusion on their Approved Supplier List, some prime contractors periodically visited their subcontractors’ production facilities, requested that subcontractors complete surveys containing questions regarding the subcontractors’ capabilities and qualifications necessary to produce parts and processes, or examined information about the technical skills and qualifications of subcontractor personnel, past performance for producing similar products, and applicable certifications related to the subcontractors’ operations. Six of the 11 prime contractors said they periodically visited potential subcontractors prior to contract award.

All of the prime contractors that we reviewed required certifications, such as independent, third-party certifications or certificates of conformance, from their subcontractors to certify that their parts and processes are in accordance with contract requirements and industry standards. Independent, third-party certifications and certificates of conformance served as verification that the subcontractors could produce parts and processes that conformed to contractual specifications. Prime contractors required certifications from their subcontractors for different phases of the production process. For example, one contractor used steel to fabricate parts and required a certification from the steel subcontractor that the steel had been produced according to specifications. In this instance, the prime contractor did not use the steel provided by the subcontractor until it received the certificate of conformance verifying that the product was in accordance with industry standards and contractual requirements.

Eight of the 11 prime contractors we reviewed periodically tested parts or processes produced by their subcontractors. Prime contractors tested the subcontractors’ parts or processes at either the manufacturing site or the receiving point to determine whether products or processes met contractual specifications. For example, one prime contractor performed mechanical and electrical tests on all materials received from subcontractors to ensure that the materials met contract specifications. If the materials did not meet contract specifications, the prime contractor’s review board, which included the government quality assurance specialist, made a determination concerning the disposition of the materials. Disposition options included using the material “as is” or scrapping it.

Seven of the prime contractors that we reviewed tracked and monitored the performance of their subcontractors by establishing performance goals, assessing and rating the subcontractors’ performance, or recommending corrective and preventive actions when subcontractors produced nonconforming parts. The seven prime contractors established performance goals and rated their subcontractors on a routine and periodic basis using various performance metrics, such as product quality and on-time deliveries.
For example, one contractor tracked the number of nonconforming parts provided by each subcontractor in relation to the total number of parts provided. When a subcontractor’s nonconformance rate exceeded the prime contractor’s acceptable goal, the prime contractor placed the subcontractor on probation (a simple sketch of this rate calculation appears below).

In our review of the 15 contracts, DCMA held prime contractors accountable for overseeing their subcontractors’ work by requiring that prime contractors adhere to contract clauses concerning oversight responsibility. When instances of nonconformance were reported through product quality deficiency reports, DCMA quality assurance personnel and the prime contractor determined whether the deficiency was due to contractor nonconformance and assigned responsibility for corrective action. For the 15 contracts we reviewed, one deficiency was determined to be the responsibility of the prime contractor, and DCMA held the prime contractor accountable for the part. During our review, service officials provided us with examples of other contracts in which nonconforming parts reached end users. DCMA followed its procedures for evaluating the causes of these nonconforming parts. The reasons for the nonconformance varied for each part.

Most of the 15 contracts we reviewed included Federal Acquisition Regulation clause 52.246-2, Inspection of Supplies—Fixed-Price, which states that the prime contractor shall tender to the government for acceptance only supplies that have been inspected in accordance with the inspection system and found by the contractor to be in conformity with contract requirements. This contract clause also states that the contractor is not relieved of its oversight responsibility when government quality assurance over subcontractors is required. The remaining contracts included other quality clauses or did not specify quality requirements because the contractors had quality systems that had been previously approved by the procuring contracting officers.

DCMA also reviewed subcontractors’ certifications to the prime contractor that the parts and processes produced by the subcontractor were manufactured in accordance with the contract requirements. DCMA quality assurance specialists periodically reviewed third-party certifications and contractors’ documents related to site visits, receiving inspections, and other oversight of subcontractors. These reviews gave DCMA a means to determine whether the contractors’ quality assurance systems were adequate for oversight of subcontractors. For example, at one contractor location, the quality assurance specialist obtained copies of certifications that steel was produced and heat-treated in accordance with standards. The specialist also reviewed copies of test records maintained by the contractor during the production process.

After parts are provided to end users within military units, such as Army battalions and Air Force squadrons, instances of nonconforming parts are reported through product quality deficiency reports. End users issue product quality deficiency reports to identify deficiencies in parts that may indicate nonconformance with contractual or specification requirements. When the end user identifies a nonconforming product, the user issues a product quality deficiency report that is sent to the applicable DCMA contract management office and distributed to the quality assurance specialist responsible for overseeing the contractor that produced the item.
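As referenced earlier in this section, the subcontractor scorecard that one contractor described reduces to a simple rate comparison. The sketch below is our illustration only; the tallies and the 1 percent goal are hypothetical, not figures from any contractor we reviewed.

```python
def nonconformance_rate(nonconforming: int, total: int) -> float:
    """Fraction of delivered parts found nonconforming."""
    if total == 0:
        raise ValueError("no parts delivered")
    return nonconforming / total

# Hypothetical tallies per subcontractor: (nonconforming parts, total parts delivered).
tallies = {"Subcontractor A": (2, 1000), "Subcontractor B": (40, 800)}
ACCEPTABLE_RATE = 0.01  # illustrative 1 percent goal

for name, (bad, total) in tallies.items():
    rate = nonconformance_rate(bad, total)
    status = "probation" if rate > ACCEPTABLE_RATE else "approved"
    print(f"{name}: {rate:.2%} -> {status}")
# Subcontractor A: 0.20% -> approved
# Subcontractor B: 5.00% -> probation
```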
The quality assurance specialist notifies the contractor of the report. The contractor may request that the part be returned to its facility for testing to determine whether the problem identified by the user can be duplicated. The DCMA quality assurance specialist and the prime contractor also determine whether the deficiency was due to contractor nonconformance with contract requirements and assign responsibility for corrective action. DCMA writes the final disposition of the product quality deficiency report based on the results of the tests completed by the contractor and the assessment of who was responsible for the deficiency. In those cases where the deficiency was determined to be the responsibility of the prime contractor, DCMA held the prime contractor accountable for the part. If the cause of the deficiency was tied back to a subcontractor, DCMA held the prime contractor responsible for correcting the deficiency and ensuring that the subcontractor’s processes were modified to correct the cause of the nonconformance.

During our review, service officials provided examples of other contracts in which nonconforming parts reached end users for various reasons. DCMA and prime contractors followed their procedures in evaluating causes for the nonconformance. For example, in March 2004 one prime contractor was notified that five wiring harnesses, manufactured by one of its approved subcontractors, were defective. Investigations showed that the prime contractor’s quality oversight system did not detect the problem because its personnel were unfamiliar with drawings, specifications, or electrical wiring harness fabrications. After the problem was identified, the prime contractor provided training related to drawings and proper manufacturing processes to its quality assurance personnel as well as subcontractor personnel. However, according to the prime contractor, the subcontractor still could not produce the wiring harnesses correctly. As a result, the prime contractor selected another subcontractor to manufacture the wiring harnesses.

In another case, the contract was for a survival kit that included critical safety items. The nonconformance related to an O-ring lubricant that was allowing oxygen pressure to be released prematurely. Based on the deficiency report, the contractor began using another lubricant for the O-ring that eliminated the problem. For this contract, DCMA recommended that government source inspection be added at the subcontractor level because no prior government source inspection was required.

In yet another example, an axle component manufactured by a subcontractor broke on an aircraft landing gear because the dimensions of the axle component were incorrect. The incorrect dimensions resulted from the subcontractor’s improper grinding process, yet the subcontractor never reported the discrepancies to the prime contractor. Since the incident, the prime contractor has placed the subcontractor on probation within the prime contractor’s quality approval system. The subcontractor will remain on probation until an audit is performed by the end users of the axle to verify that all corrective actions are in effect.

DCMA and the prime contractors we reviewed used a number of processes to provide quality assurance oversight over the production of spare parts for the military.
The processes included conducting physical inspections of parts produced by contractors, reviewing prime contractors’ processes, evaluating potential subcontractors for placement on an Approved Supplier List, requiring certifications of parts and processes, testing parts and processes, and tracking and monitoring subcontractors’ performance. These processes are founded on contractual requirements, DCMA policies, and industry standards for quality assurance. In addition, there are enforcement procedures that DCMA uses when nonconforming spare parts reach end users. However, despite these quality assurance controls, some risk still exists. For example, while we did not identify any major deficiencies from the contracts and practices we reviewed, service officials provided examples of nonconforming parts, related to contracts not included in our review, that reached end users for various reasons. Furthermore, given the vast number of contracts and contractors involved in providing spare parts to the government, we recognize that the risk of nonconforming spare parts reaching end users exists. Compliance by contractors, DCMA, and other DOD agencies with established internal controls helps mitigate this risk.

In written comments on a draft of this report, DOD provided one technical comment, which we incorporated as appropriate. DOD did not provide any additional comments. DOD’s written comments are reprinted in their entirety in appendix III.

We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Director of the Defense Contract Management Agency; the Director of the Defense Logistics Agency; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8365 if you or your staffs have any questions concerning this report. Major contributors to this report are included in appendix IV.

To address our objectives, we judgmentally selected for review 11 contracts awarded to 11 prime contractors. During the course of our review, the Defense Contract Management Agency (DCMA) and the prime contractors provided four additional contracts awarded to some of these same contractors that had nonconforming parts that reached end users, bringing the total number of contracts we reviewed to 15. Given the small number of contracts we reviewed, our results cannot be used to make inferences about the entire population of contracts requiring government quality assurance. We included the following kinds of contracts as candidates for our study: contracts, purchase orders, basic ordering agreements, delivery orders against existing contracts, and contract modifications for existing contracts. We included contracts that were large and small in dollar value, from DCMA’s East and West regions, and from the Army, Navy, Air Force, and the Defense Logistics Agency (DLA). Eleven of these contracts were judgmentally selected: three from information provided by the Navy and Air Force on nonconforming parts and eight from DCMA. To select the eight contracts, we obtained a query from DCMA’s Mechanization of Contract Administration Services system of contracts for October 1, 1999, through September 30, 2003.
Because of the large number of contracts in the database and because we wished to examine recent contract quality assurance oversight practices, we limited the query results to contracts for fiscal year 2003. We then grouped these contracts according to whether they were associated with the Army, the Navy, the Air Force, or the Defense Logistics Agency and randomly sampled contracts from each of these four groups. Then, we judgmentally selected a subset of contracts from our random samples in such a way as to obtain a set of eight contracts that spanned the various military commands and DCMA’s East and West regions. The breakout of the eight contracts included three Army, two Air Force, two Navy, and one Defense Logistics Agency (a simplified sketch of this stratified selection appears below).

To assess the reliability of data from DCMA’s Mechanization of Contract Administration Services system, we (1) performed electronic testing of required data elements, (2) reviewed existing information about the data and the system, and (3) interviewed agency officials knowledgeable about the data. We determined DCMA’s Mechanization of Contract Administration Services system data to be reliable for the purposes of our review.

To assess whether DCMA provided quality assurance oversight and enforcement over its spare parts prime contractors in accordance with established policies, procedures, and guidance, we compared contract quality assurance provisions and requirements to oversight actions performed by DCMA for the 15 contracts. Specifically, we reviewed the Federal Acquisition Regulation, the DCMA One Book, and the contracts to determine quality assurance oversight responsibilities and enforcement actions available to assure contractor compliance. We compared these policies, procedures, and contract requirements to the three types of inspections performed by DCMA quality assurance specialists to assess if DCMA provided appropriate oversight. For each of the contracts, we met with officials at the DCMA contract management offices identified on the contracts to determine quality assurance oversight actions performed by DCMA personnel. As part of this assessment, we determined which of the three types of inspections were performed for each contractor. We sent letters to officials at the DOD contracting offices to obtain and review documentation related to the pre-award process, quality assurance requirements, and the contracting officers’ interaction with DCMA prior to contract award. In assessing whether DCMA used enforcement actions, we reviewed the product quality deficiency reports included in our review to determine if DCMA levied enforcement actions against the prime contractor when necessary. We also reviewed prior DOD and GAO reports related to DCMA’s execution of its quality assurance oversight over prime contractors and DOD’s implementation of its deficiency reporting system.

To assess whether prime contractors provided quality assurance oversight over their subcontractors’ work related to producing spare parts and followed industry standards and contract requirements, we identified prime contractor quality assurance oversight actions performed over subcontractors. We reviewed Aerospace Standard 9100 and prime contractor quality manuals to determine requirements for establishing contractor quality management systems and ensuring that their subcontractors are providing quality parts.
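The stratified draw described above can be illustrated with a short sketch. This is our illustration only, under the reading that contracts were grouped by agency and randomly sampled within each group; the contract records, identifiers, and quotas are hypothetical, and the final judgmental step of balancing commands and DCMA regions is not captured.

```python
import random

# Hypothetical fiscal year 2003 contract records: (contract_id, agency).
contracts = [
    ("W-0001", "Army"), ("W-0002", "Army"), ("W-0003", "Army"), ("W-0004", "Army"),
    ("N-0001", "Navy"), ("N-0002", "Navy"), ("N-0003", "Navy"),
    ("F-0001", "Air Force"), ("F-0002", "Air Force"), ("F-0003", "Air Force"),
    ("D-0001", "DLA"), ("D-0002", "DLA"),
]

def stratified_sample(records, quotas, seed=0):
    """Randomly draw quotas[agency] contract IDs from each agency's group."""
    rng = random.Random(seed)
    by_agency = {}
    for contract_id, agency in records:
        by_agency.setdefault(agency, []).append(contract_id)
    return {agency: sorted(rng.sample(by_agency[agency], k))
            for agency, k in quotas.items()}

# Quotas mirroring the report's breakout: three Army, two Air Force,
# two Navy, and one Defense Logistics Agency contract.
print(stratified_sample(contracts, {"Army": 3, "Air Force": 2, "Navy": 2, "DLA": 1}))
```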
We visited and interviewed representatives at the prime contractor locations as shown in appendix II and determined whether the prime contractors used subcontractors. For those that used subcontractors, we identified the level of quality assurance oversight performed by these prime contractors over the subcontractors. We discussed whether the prime contractors performed supplier ratings of their subcontractors, tested parts or processes provided by their subcontractors, or conducted site visits at their subcontractors’ facilities. At the prime contractor facilities, we observed the prime contractors’ processes for manufacturing or repairing parts to determine the quality assurance performed by the prime contractor throughout the production process.

To assess how DCMA held prime contractors accountable for the work of their subcontractors, we reviewed the 15 contracts to determine if the contracts included clauses holding the prime contractor responsible for the spare parts provided to the government and whether instances of nonconformance had occurred. For the reported instances of nonconformance, we looked at whether DCMA held the prime contractor responsible for correcting the nonconformance and what types of actions were performed by DCMA. We also reviewed the Federal Acquisition Regulation to determine contract clauses that require the prime contractor to ensure that parts furnished to the government conform to contract requirements. We reviewed the contracts to determine if they included clauses stating that the prime contractor was responsible for the spare parts being provided.

We also visited or obtained information from representatives at the following organizations: U.S. Air Force, Office of the Assistant Secretary, Contracting Operations Division, Rosslyn, Va.; U.S. Air Force Materiel Command, Wright Patterson Air Force Base, Ohio; U.S. Army Materiel Command Headquarters, Fort Belvoir, Va.; U.S. Army, Aviation and Missile Command, Redstone Arsenal, Ala.; U.S. Army, Communications Electronics Command, Fort Monmouth, N.J.; U.S. Army, Research Development and Engineering Command, Armament Research, Development and Engineering Center, Rock Island, Ill.; U.S. Army, Research Development and Engineering Command, Edgewood Chemical Biological Center, Rock Island, Ill.; U.S. Army, Secretary of the Army for Acquisition, Logistics, and Technology, Arlington, Va.; U.S. Army, Tank Automotive and Armaments Command, Warren, Mich.; U.S. Navy, Naval Air Systems Command, Patuxent River Naval Air Station, Patuxent River, Md.; U.S. Navy, Office of the Assistant Secretary, Research, Development, and Acquisition, Washington, D.C.; U.S. Defense Logistics Agency Headquarters, Fort Belvoir, Va.; U.S. Defense Contract Management Agency Headquarters, Alexandria, Va.; Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Alexandria, Va.; and Aerospace Industries Association, Arlington, Va.

We performed our review from November 2003 through October 2004 in accordance with generally accepted government auditing standards.

In addition to the individual named above, Connie W. Sawyer, Jr.; Tracy Whitaker; Leslie West; Renee McElveen; Minette Richardson; Kenneth Patton; Sidney Schwartz; and Douglas Cole made key contributions to this report.
In the 2004 Defense Appropriations Act, Congress mandated that GAO examine and report on the oversight of prime contractors by the Department of Defense (DOD) and the oversight of subcontractors by the prime contractors. Contract quality assurance oversight is intended to assess whether contractors are capable of providing, and are providing, supplies or services that meet contract quality and technical requirements. Providing effective oversight is challenging. The Defense Contract Management Agency (DCMA) recognizes that the risk of nonconforming parts reaching end users exists, given the diversity of contracts, parts, and products used to meet weapon systems requirements, and it uses a risk management process to guide its efforts. For fiscal year 2003, government quality assurance oversight was required for approximately 273,000 contracts. GAO determined (1) whether DOD provided quality assurance oversight and enforcement over its spare parts prime contractors, (2) whether prime contractors provided quality assurance oversight over their subcontractors, and (3) how DOD held prime contractors accountable for overseeing the subcontractors' work. To address these objectives, GAO judgmentally selected and reviewed 15 contracts awarded to 11 prime contractors by the services and the Defense Logistics Agency. In commenting on a draft of this report, DOD provided one technical comment, which GAO incorporated as appropriate.

GAO's review of the 15 contracts showed that quality assurance personnel within DCMA, DOD's primary organization for providing quality assurance oversight, generally followed established policies, guidance, regulations, and contract requirements in performing oversight and enforcement over spare parts prime contractors. This oversight ranged from conducting physical inspections of parts, such as testing the measurements and functions of a part, to evaluating contractor production processes, to observing the outer appearance and counting the number of parts for compliance with contract requirements. When one prime contractor's processes and another contractor's parts did not meet contract requirements, DCMA used its enforcement system by issuing requests for corrective action by the prime contractors.

GAO found that the 11 prime contractors reviewed provided quality assurance oversight over their subcontractors' work in accordance with industry standards and contract requirements. The contractors used at least two and up to four methods in providing quality assurance oversight over their subcontractors. These methods included evaluating potential subcontractors for placement on an Approved Supplier List, requiring certifications of parts and processes, testing parts and processes, and tracking and monitoring subcontractors' performance. The primary methods of oversight were evaluating subcontractors for placement on an Approved Supplier List and requiring certifications that parts and processes conform to contractual specifications. Establishing an Approved Supplier List served to identify subcontractors capable of producing needed parts or processes in accordance with industry standards and contractual specifications.

In GAO's review of the 15 contracts, DCMA held prime contractors accountable for their subcontractors' work by requiring that the prime contractors adhere to contract clauses concerning oversight responsibility.
Most of the contracts included either clauses stating that the prime contractor shall provide supplies that conform to contract requirements or clauses related to other quality requirements. When nonconformance was reported, DCMA quality assurance personnel and the prime contractor determined whether the deficiency was due to contractor nonconformance and assigned responsibility for corrective action. Among the 15 contracts, GAO identified one deficiency for which the prime contractor was responsible, and DCMA held the prime contractor accountable for the part. While GAO did not identify any major deficiencies from the contracts and practices it reviewed, GAO recognizes that the risk of nonconforming spare parts reaching end users exists. Compliance by contractors, DCMA, and other DOD agencies with established internal controls helps mitigate this risk.
Over the past 50 years, as a result of producing tens of thousands of nuclear weapons, DOE’s facilities have also produced radioactive and other toxic substances that pose potential health threats to DOE’s workers and the communities located nearby. These substances include the radionuclides uranium, plutonium, and cesium; toxic metals; organic solvents; and chlorinated hydrocarbons. Epidemiological research—research on the incidence, distribution, and control of disease in a population—provides a scientific evaluation of the health effects of exposing workers and the public to such potentially harmful materials. Such research uses health, exposure, environmental monitoring, and personnel records to analyze health effects and evaluate methods to protect people and prevent harm. As such, epidemiological research is essential to a comprehensive occupational and environmental health program.

DOE and its predecessor agencies have a long history in epidemiological research, starting with studies of the survivors of the atomic bomb. In the past, much of this research was conducted by DOE or its contractors in secret and concentrated on the correlation between the rates of cancer-related deaths of workers at DOE’s nuclear weapons complex and their exposure to ionizing radiation. A number of separate mortality studies—studies of death rates—have been conducted on approximately 420,000 workers over the past 30 years. However, because the records that researchers needed to study the health effects of working in DOE’s facilities were maintained differently at each facility and were difficult to locate, the types and quality of epidemiological research that could be conducted were limited.

To alleviate these problems and facilitate epidemiological research on the health effects of exposure to radiation and other hazards, the Secretarial Panel recommended that DOE continue developing the Comprehensive Epidemiologic Data Resource (CEDR) as a comprehensive repository of data on its workers. In addition, to break down what was perceived as “a wall of secrecy” and to help establish the credibility of and maintain independence in the conduct of DOE’s epidemiological research, the Secretarial Panel recommended opening this research and its supporting data to external investigation and scrutiny. Among other things, the Secretarial Panel recommended that DOE execute a memorandum of understanding with the Department of Health and Human Services (HHS), making HHS responsible for long-range, analytic epidemiological studies, while DOE remained responsible for descriptive epidemiology. As a result, much of the epidemiological research on DOE’s facilities is now managed by HHS. Within HHS’ Centers for Disease Control and Prevention, which implemented this memorandum of understanding, the National Institute for Occupational Safety and Health was made responsible for occupational health research (i.e., research on workers employed by DOE and its contractors), while the National Center for Environmental Health was made responsible for research involving the environment, including communities near DOE’s facilities.

The Secretarial Panel also called for greater outside scrutiny by recommending that the National Academy of Sciences (NAS) play a key role in overseeing and monitoring the development of CEDR. In response to the Secretarial Panel, as well as a concurrent request from DOE to provide general scientific advice on the status and direction of DOE’s epidemiological programs, NAS established a Committee on DOE Radiation Epidemiological Research Programs.
In 1990, this committee issued a report making a number of recommendations about access to data for researchers outside DOE, the types of data to be included in CEDR, and its future development. The report also noted that use of CEDR will depend on ease of access to the information it contains and on researchers’ perception of its value. Beginning in 1990, a DOE contractor facility, the Lawrence Berkeley Laboratory in Berkeley, California, constructed a prototype, known as preCEDR, to serve as the basis of CEDR. In 1992, DOE made data available through this system. In August 1993, DOE published a catalog of data available in CEDR to assist current and potential users in identifying data sets for potential use and to provide instructions on how to obtain access to these data. Through fiscal year 1994, DOE had received $14.35 million in appropriations for CEDR, of which it had spent $9.45 million for CEDR and related expenses and redirected the remaining $4.9 million to other activities. CEDR is budgeted at $1 million for fiscal year 1995, of which $500,000 was funded as of February 1995.

DOE does not yet have the uniform demographic, exposure, medical, and environmental data that would make CEDR a comprehensive and valuable epidemiological resource for independent researchers. The Secretarial Panel recommended in 1990 that DOE define a minimum set of data necessary for epidemiological research and routinely collect and maintain these data at all DOE facilities. As part of this effort, in May 1992 DOE requested that each of its facilities, within 3 years, complete an inventory of 123 specific types of records that the Department believed were important for conducting epidemiological studies. We reported on this and other DOE efforts to manage records in a May 1992 report. DOE officials told us that when completed, this records inventory would be included in CEDR and would more easily identify for researchers where these specific types of records are located. Meanwhile, DOE is waiting for its facilities to complete their records inventories, which may take until 1996, before it takes steps to routinely collect and maintain the types of records it has already identified as important.

In addition, the NAS committee stated that CEDR should be capable of supporting many kinds of epidemiological studies, including long- and short-term health surveillance, monitoring studies, screening programs, and long-term mortality studies. However, as we reported in December 1993, DOE probably will not establish a comprehensive health surveillance program until at least 1998. Such a program would standardize the documentation of workers’ occupational exposures to radiation and other industrial hazards—such as chemicals, gases, metals, and noise—and could identify trends in workers’ illnesses and injuries that might be related to these exposures. Until such a program is in place, the comprehensive data on health effects and exposure needed for important epidemiological research will not be available for placement in CEDR. Moreover, DOE’s Assistant Secretary for Environment, Safety, and Health told us in October 1994 that standardization of data at DOE’s facilities was a problem that would take several years to resolve. Without the important data necessary to support many types of epidemiological research, CEDR today mainly contains the limited data from DOE-sponsored mortality studies of workers at DOE’s facilities at Oak Ridge, Tennessee; Rocky Flats, Colorado; Hanford, Washington; and elsewhere.
Of the 37 data sets in CEDR, 36 contain the retrospective information—data on past incidents—used to conduct these studies. (See app. I.) Some new data will be included when certain ongoing studies are completed. These studies include mortality studies of DOE’s workers at the Idaho National Engineering Laboratory and the Portsmouth Gaseous Diffusion Plant in Ohio; a study of cancer incidence among workers at Rocky Flats by the National Institute for Occupational Safety and Health; and studies from the National Center for Environmental Health, including estimates of the effect of the radiation from Hanford on the air and water in the surrounding area. While adding the results of these studies will make some of the data in CEDR more current, the system will still lack the comprehensive data discussed above that would make it the valuable resource that the Secretarial Panel and NAS recommended.

According to many NAS committee members and CEDR users we spoke with, the current lack of comprehensive epidemiological data limits CEDR’s value for research. The Secretarial Panel cautioned DOE that retrospective data would have limited value for future research. Also, members of the NAS committee told us that the data on mortality that CEDR currently contains limit the types of studies that can be done and have minimal value for future research on health effects. NAS noted in its 1994 report that the scope of the data currently in CEDR limits the type of research that can be conducted. The data restrict researchers by defining the groups that can be studied, the variables that can be examined, and the analytic methods that can be applied. Officials at the National Institute for Occupational Safety and Health and the National Center for Environmental Health also stated that CEDR would be of greater value if it contained data on chemical exposures and health effects. These data will not be available until DOE’s health surveillance program is completed. Since CEDR contains only limited retrospective data, researchers who need more information must still locate records at DOE’s facilities, where the records are not consistently maintained. However, despite CEDR’s limited value for health effects research, several NAS experts, current users, and DOE officials believe that it has significant value as a teaching tool for students of epidemiology.

DOE has made data from its mortality studies easy for outside researchers to access through CEDR, and thousands of people have accessed the system to see what basic data are available. However, few researchers have used the data for original studies on health effects. In addition, some members of the NAS Committee on Epidemiological Research and some researchers we interviewed noted problems that impair the usability of the data. Difficulties include the lack of original data (the available data sets were previously modified by other researchers to meet their specific research needs), data that are hard to work with because they have been edited to protect the privacy of the workers, and data that are not current. In addition, some researchers have encountered problems with the quality of the data, including missing and inconsistent data and inadequate documentation of the studies included. For these reasons, some CEDR users need to review original records at DOE’s facilities but find the records difficult to obtain. For the first time in its history, DOE has made the data used to support its epidemiological research accessible.
DOE has created a system that allows researchers easy access to the epidemiological data that were used to conduct its mortality studies, as recommended by both the Secretarial Panel and NAS. In addition to data from past studies, CEDR contains summary information, such as the 1992 annual summary of epidemiological surveillance data from Brookhaven National Laboratory. Potential users of CEDR can obtain basic information about the system’s contents and file structure (but cannot access the actual data) through DOE’s published catalog of available data or via a direct computer link with CEDR or through the Internet. The summaries, which do not provide detailed research data, are available to all Internet users. We were able to access CEDR directly from personal computers using communication software and found the instructions relatively easy to follow. According to the CEDR staff at the Lawrence Berkeley Laboratory, computer logs show that thousands of people have accessed CEDR to find out what basic data are available.

To view or obtain the actual data on DOE’s workers, a user must receive authorization from DOE. Getting such authorization is a relatively simple process. The required forms, including confidentiality agreements, are provided in the CEDR catalog. Authorization generally takes about a month. Approved users can obtain data from the Lawrence Berkeley Laboratory via electronic tape or diskette, or through direct transmission if they have specialized equipment. Users we talked with reported no major problems in obtaining data from CEDR.

Despite the system’s accessibility, few independent researchers have sought approval from DOE to become authorized CEDR users. In addition, some authorized users have never obtained data from CEDR. DOE provided us with a list of 22 primary users as of September 1994. Some of the users listed, however, were not independent researchers but worked for DOE or its contractors. Some of these users were involved only in loading, testing, and maintaining the system. We identified 13 independent researchers who were primary users and may have obtained data from CEDR. (See table 1.) We confirmed that nine independent researchers had obtained data from CEDR. Three of these users worked on studies funded by the National Institute for Occupational Safety and Health, three worked on university research projects, two conducted research for public health institutes, and one was a private consultant.

Researchers using CEDR have encountered a number of problems with the data in the system, limiting the value of these data for their research. Although four of the nine researchers we spoke with found the quality of the data satisfactory for their research purposes, the other five researchers reported the following problems: original data, not previously edited by other researchers, are not available through CEDR; key data elements important for certain research have been removed to protect workers’ privacy; the data in the mortality studies are frequently old and have not been updated; and research is hindered by problems with the quality of the data, including missing and inconsistent data and inadequate documentation of studies by prior researchers.

It is difficult to conduct research beyond DOE’s initial studies or to fully validate the results, according to many of the researchers we spoke with, because CEDR may not contain data as they were originally recorded at DOE’s facilities.
Instead, it generally contains data that have been assembled and edited by prior researchers to answer specific research questions. Some independent researchers using data in CEDR stated that they need the original records to conduct their studies. Two CEDR users conducting studies under contracts with the National Institute for Occupational Safety and Health stated that their research was hampered because the working data sets available in the data base were not original data but had already been edited by prior researchers. Answering new research questions would require obtaining the original records directly from DOE’s facilities. Another CEDR user conducting research for a public health institute told us that the best data for research are the original records found at DOE’s facilities. An official of the National Institute for Occupational Safety and Health, as well as a member of the NAS committee, stated similar views.

The extent to which some personal identifiers have been removed from the data in CEDR to protect the privacy of workers has made it difficult for some CEDR users to do more precise calculations or compare records. For example, DOE replaced identifying data elements, such as names and social security numbers, with pseudo-identifiers. DOE also rounded some key dates in workers’ files, such as birth date, hiring date, and death date, if applicable. In contrast, an official from the National Institute for Occupational Safety and Health stated that while the Institute replaces identifying data elements, such as the name and social security number, in data that it releases to the public, it does not truncate dates. Researchers funded by the National Institute for Occupational Safety and Health noted that this rounding of key dates makes it difficult to do precise calculations of exposure, for which it is necessary to know the exact number of days a worker was exposed to a hazard (the sketch below illustrates this effect with hypothetical dates). In addition, replacing identifying data elements makes it difficult to compare various records on workers by, for example, consulting a state or national cancer registry. Consulting such registries is often necessary to obtain a worker’s complete health history.

Several NAS committee members and current CEDR users told us that CEDR would be more useful for follow-up studies if mortality data were updated, especially data on those exposed to radiation. The mortality studies included in CEDR were conducted on various workers who were employed between 1942 and 1988 at different DOE facilities. In many of these studies, the most recent mortality data are more than 10 years old. Researchers are unable to follow up on the results of the mortality studies without significant additional work. Researchers we spoke with explained that because the chronic effects of exposure to low doses of radiation may not appear until decades afterward, workers who have been exposed to radiation should be studied over lengthy periods. One epidemiologist, a member of the NAS committee, stated that unless the workers in a study are monitored until the cause of death has been determined, the results of the study are not conclusive. Other epidemiologists and health physicists from the Centers for Disease Control and some DOE contractors also agreed that the data in CEDR would be more useful if the information on mortality were updated.
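A small worked example may make the date-rounding problem above concrete. The sketch below uses entirely hypothetical dates and an assumed rounding rule (truncating each date to the first of its month) to show how a computed exposure window can shift by days or weeks.

```python
from datetime import date

def exposure_days(start: date, end: date) -> int:
    """Days in a monitored exposure window, counting both endpoints."""
    return (end - start).days + 1

def round_to_month(d: date) -> date:
    """Assumed privacy edit: truncate a date to the first of its month."""
    return d.replace(day=1)

# Hypothetical worker: hired March 24, 1968; left November 6, 1974.
hired, left = date(1968, 3, 24), date(1974, 11, 6)

exact = exposure_days(hired, left)
rounded = exposure_days(round_to_month(hired), round_to_month(left))
print(exact, rounded, rounded - exact)  # 2419 2437 18
```

In this hypothetical, the rounded window overstates the exposure period by 18 days; since dose accrues per day of exposure, errors of this size compound across a cohort, which is consistent with the researchers' complaint that truncated dates frustrate precise exposure calculations.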
Several NAS committee members and current CEDR users told us that CEDR would be more useful for follow-up studies if mortality data were updated, especially data on those exposed to radiation. The mortality studies included in CEDR were conducted on various workers who were employed between 1942 and 1988 at different DOE facilities. In many of these studies, the most recent mortality data are more than 10 years old. Researchers are unable to follow up on the results of the mortality studies without significant additional work. Researchers we spoke with explained that because the chronic effects of exposure to low doses of radiation may not occur until decades afterwards, workers who have been exposed to radiation should be studied over lengthy periods. One epidemiologist, a member of the NAS committee, stated that unless the workers in a study are monitored until the cause of death has been determined, the results of the study are not conclusive. Other epidemiologists and health physicists from the Centers for Disease Control and some DOE contractors also agreed that the data in CEDR would be more useful if the information on mortality were updated. DOE’s Assistant Secretary for Environment, Safety, and Health said that while she considers it the responsibility of the Department to update these radiation studies, she is not sure that the funding necessary to do this will be available, given the current emphasis on funding research on the occupational health effects of hazardous chemicals rather than radiation. Some researchers working with CEDR have encountered additional problems with the quality of the data. Five primary users we interviewed had encountered missing, inconsistent, or inaccurate data. Measuring exposure was a major problem for these users. Examples provided by the data base manager of a research project sponsored by the National Institute for Occupational Safety and Health included the following:

- In one file, the researchers identified data on 115 workers that conflicted with other information in the file about the amount of radiation to which these workers had been exposed. The researchers could not determine which data were correct.
- In another file, researchers found 1,000 people listed as never having been monitored for plutonium exposure. Nevertheless, a date was entered in the field for “first date monitored for plutonium exposure.” The researchers could not tell which information was correct.

One CEDR user, who had served on the NAS committee, expressed concern that inexperienced researchers could draw erroneous conclusions on the basis of the data currently in CEDR. In her opinion, DOE should not widely publicize access to CEDR for research until some of the problems with its data have been addressed. In an attempt to identify problems with the quality of the data, DOE is setting up a computer bulletin board for CEDR users to communicate with each other and point out problems they have uncovered. DOE cannot be sure, however, that users will take the time to point out these problems.
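Contradictions like the plutonium-monitoring example above lend themselves to automated screening. The following sketch is ours, not DOE’s; the record layout and field names are hypothetical.

```python
# Flag records whose "monitored" flag contradicts the monitoring-date
# field, as in the plutonium example above. The record layout and
# field names are hypothetical, not CEDR's actual file structure.
records = [
    {"worker_id": "W001", "monitored_for_plutonium": False,
     "first_date_monitored": "1962-04-01"},  # contradictory
    {"worker_id": "W002", "monitored_for_plutonium": True,
     "first_date_monitored": "1958-09-15"},  # consistent
    {"worker_id": "W003", "monitored_for_plutonium": False,
     "first_date_monitored": None},          # consistent
]

inconsistent = [
    r["worker_id"]
    for r in records
    if not r["monitored_for_plutonium"] and r["first_date_monitored"] is not None
]
print(inconsistent)  # ['W001']
```

Unlike a bulletin board, which depends on users volunteering their findings, a check of this kind could be run systematically before data sets are released.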
The Secretarial Panel noted that an important element of epidemiological studies is documentation from the original researcher explaining the study’s methodology, assumptions made, and limitations of the data. While both the Secretarial Panel and the NAS committee recommended that all studies provided to CEDR should be supported with documentation, some researchers using CEDR have found insufficient documentation, making the studies difficult to reconstruct. In one case, a university researcher had to go to the facility that was the subject of the study to resolve problems with the documentation. Researchers using CEDR for the two studies sponsored by the National Institute for Occupational Safety and Health also noted problems caused by inadequate documentation. The staff at the Lawrence Berkeley Laboratory responsible for developing CEDR told us that the researchers who provided the studies often did not comply with documentation guidelines. DOE has recently issued revised guidelines in an attempt to improve compliance. However, this measure will not correct inadequate documentation of those studies already in CEDR, and it is unknown whether future data providers will be more responsive to this revised guidance. Because of the limitations of the data in CEDR, some researchers seek to obtain original records from DOE’s facilities, but they report encountering difficulties. Researchers using CEDR for the two studies sponsored by the National Institute for Occupational Safety and Health reported that difficulties in obtaining original records are inhibiting their research. The two researchers told us that when requesting such records from DOE sites, they encountered either uncooperative contractor staff or a lack of adequate staff resources to service their requests. According to DOE’s Assistant Secretary for Environment, Safety, and Health, CEDR is not really intended to be the sole source of data for epidemiological researchers from the National Institute for Occupational Safety and Health, who are likely to require the original records from DOE’s facilities. She was aware that these researchers and others have had difficulties obtaining records from some DOE sites, and she was attempting to work with the contractors to resolve specific problems on a case-by-case basis. Although DOE is adding to the contents of CEDR, doubt remains about whether the data base will become the system that NAS and the Secretarial Panel envisioned, containing uniform and useful demographic, exposure, medical, and environmental data. The DOE Assistant Secretary responsible for the CEDR program acknowledged the system’s current limitations and told us CEDR may not become this comprehensive data base. Moreover, DOE has not attempted the long-range planning needed to achieve this vision. The Secretarial Panel had recommended that DOE, under the guidance of NAS, establish a clear statement of CEDR’s intended goals and uses and an orderly plan for implementing the system. Such a plan would define the steps to be accomplished, milestones for completing the work, and resources needed. NAS committee members told us they were not aware of any long-range planning for CEDR. DOE officials with the Office of Epidemiology and Health Surveillance told us they did not have any long-range plans that identified the specific tasks, priorities, time frames, or resources necessary to develop CEDR into a comprehensive data base containing the types of data that NAS had recommended. DOE currently does not know when comprehensive epidemiological data will be available to put into CEDR, how much it will cost to place these data in CEDR, or how many researchers will potentially use these data. DOE is making progress toward standardizing and maintaining data on the exposure of its current laboratory workers to radiation and other hazards that might affect their health. The DOE Assistant Secretary said that, rather than develop CEDR into a comprehensive data base, DOE may conclude that the data base’s current function of providing the public with access to its existing epidemiological research data is sufficient. In addition, the Assistant Secretary told us in October 1994 that the budget for CEDR—$1 million in fiscal year 1995—will be reevaluated if usage does not increase substantially. Even with increased usage, however, it is not clear whether CEDR is the most cost-effective and practical means of accomplishing the more limited objective of providing access to DOE’s epidemiological data and data gathered under the memorandum of understanding with HHS. Some researchers and others we spoke with suggested that a far less expensive clearinghouse arrangement might meet this need just as effectively. For example, a clearinghouse might simply list the name of the study, the type of data it contained, and the location of the data. These data would remain at the facility where they were collected.
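To suggest how little such a clearinghouse would need to hold, the sketch below defines an index entry limited to the three elements suggested above. The entries shown are placeholders, not actual CEDR holdings.

```python
from dataclasses import dataclass

@dataclass
class ClearinghouseEntry:
    """One entry in a low-cost index of epidemiological studies.
    The underlying data remain at the facility where they were
    collected; only this summary is held centrally."""
    study_name: str
    data_type: str  # e.g., "mortality", "morbidity", "dosimetry"
    location: str   # facility holding the original records

# Placeholder entries for illustration only.
catalog = [
    ClearinghouseEntry("Example worker mortality study", "mortality",
                       "Example DOE facility"),
    ClearinghouseEntry("Example dosimetry file", "dosimetry",
                       "Example DOE facility"),
]

for entry in catalog:
    print(f"{entry.study_name}: {entry.data_type} data held at {entry.location}")
```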
CEDR was originally intended both to help dispel public fears about secretive research at DOE and to be a valuable resource for independent researchers studying the long-term epidemiological and other health effects of working at or living near DOE’s facilities. The current system has removed the “wall of secrecy” surrounding DOE’s epidemiological research by making some of the data available to outside researchers. However, as it now stands, CEDR has limited utility as a research data base. DOE is years away from routinely collecting and maintaining the epidemiological data on its workers that are needed to help make CEDR a comprehensive resource. Consequently, CEDR appears to be at a crossroads, and an overall assessment of the system would help DOE better ensure that it is spending its limited funds wisely. If DOE decides to pursue the original vision for CEDR, it cannot be assured of an orderly implementation without a long-range plan that sets forth the required time frames, resources, and costs and takes into account the ongoing efforts to uniformly collect and maintain epidemiological data throughout DOE’s facilities. If DOE decides not to develop a comprehensive epidemiological data base, it could either maintain or abandon the current system. However, maintaining the current system may not be the most practical and cost-effective means of providing the epidemiological data used in DOE’s past studies and those currently being conducted by HHS. Resolving the problems impairing the usefulness of the data in the current system could cost DOE still more. Finally, if DOE decides to abandon the system, continued openness and public access to its health effects research cannot be ensured without identifying alternative means of collecting and disseminating epidemiological data. We recommend that the Secretary of Energy, in consultation with the Secretary of Health and Human Services, the National Academy of Sciences committee, and representatives of the research community, determine whether the Comprehensive Epidemiologic Data Resource is the most practical and cost-effective means of providing epidemiological data for research on health effects. The assessment should cover the costs, benefits, and time frames for including more comprehensive data on health effects in the data base, as well as alternative means of making these data available to outside researchers. If the Secretary determines that the Comprehensive Epidemiologic Data Resource is not the most practical and cost-effective means of compiling epidemiological data, DOE should decide whether continued funding is appropriate. As requested, we provided a draft of this report to DOE for comment. Although DOE did not provide a written response, the Acting Director of the Office of Epidemiology and Health Surveillance did express her views on the report. Overall, she agreed with the problems we identified with the data. However, she maintained that such limitations are inherent in data collected from historical studies and that these data on former workers are nevertheless important and useful. She noted that DOE is making efforts to update and review these data to resolve inconsistencies. She further noted that DOE is required to remove personal identifiers to protect the identities of individual workers. We fully agree that workers’ privacy must be protected.
Nevertheless, as we stated in our report, unlike the National Institute for Occupational Safety and Health, DOE truncates (abbreviates or shortens) key dates, an action that can limit the usefulness of the data. Regarding the need to include data on current workers and residents in CEDR, the Acting Director agreed that the information is vital and will be included as new studies are completed. However, while adding the results of these studies will make some of the data more current, the system will still lack the comprehensive data—such as uniform health, exposure, environmental monitoring, and personnel data—that would make it the valuable resource for new research on health effects that the Secretarial Panel and NAS recommended. The Acting Director also expressed concern about our recommendation that the cost-effectiveness of CEDR be evaluated, noting that most of the costs for CEDR have already been incurred. However, these costs are the costs of the present data base, which contains historical information. DOE does not know what it will cost to include the types of health surveillance data in CEDR that the Secretarial Panel and NAS recommended. If CEDR will not include these data, even the costs of maintaining the current system may not be justified. Finally, the Acting Director told us that DOE has added five primary users of the data base since we completed our audit work and has added over 100 files in the last year. We did not verify or evaluate this information. We also discussed the facts presented in this report with CEDR program officials at the Lawrence Berkeley Laboratory, who generally agreed that these facts were accurate. They provided updated information on users of CEDR and data sets in the system, which we incorporated into the report. We performed our review between February 1994 and May 1995 in accordance with generally accepted government auditing standards. In performing this review, we interviewed officials at DOE headquarters, including the Assistant Secretary for Environment, Safety, and Health. We also interviewed the personnel at the Lawrence Berkeley Laboratory, Berkeley, California, responsible for designing and operating CEDR. We spoke with eight of the nine members of the NAS committee responsible for monitoring progress on CEDR, officials at the National Institute for Occupational Safety and Health and the National Center for Environmental Health, and all authorized CEDR users we were able to contact. (See app. II for details of our scope and methodology.) As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of Energy and other interested parties. We will also make the report available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix III. The Comprehensive Epidemiologic Data Resource (CEDR) provides a repository of data that have been used to support epidemiological studies conducted on workers at Department of Energy (DOE) facilities. DOE has funded studies on various groups of workers of DOE or its contractors from the 1940s through the 1990s at facilities involved in the production of nuclear weapons. (See table I.1.) More than one study has been included in CEDR for several of these facilities. 
As of November 1994, CEDR contained a total of 37 data sets, or logically related data files. Table I.1 lists the 36 data sets covering DOE-sponsored studies on workers; an additional data set covers a 1990 study of atom bomb survivors. Of the 36 data sets in CEDR as of that date, 29 are analytic data sets from past studies at DOE’s facilities and 7 are working data sets. Of the 29 analytic data sets from DOE sites or facilities, 28 are from mortality studies. The remaining set came from a morbidity study that examined the incidence and cause of respiratory disease among workers.

Table I.1: Data From DOE-Sponsored Studies on Workers Available Through CEDR as of November 1994 [table not reproduced]
Note: The Linde plant and the uranium facility at Mallinckrodt Chemical Works are no longer operational.

We analyzed the contents of CEDR as of November 10, 1994. During our review, DOE was adding new data sets and updating others already in the system. For example, DOE added new analytic data sets from 1994 studies on workers at Fernald, Oak Ridge, Mallinckrodt, Savannah River, and other facilities and updated several working data sets, including data on workers at the Mound plant. In addition to the 37 data sets available in November 1994 (the 36 shown in table I.1 plus the atom bomb survivor study), seven new analytic data sets, including two from multiple-site studies, were added, bringing the total available through CEDR to 44 as of December 31, 1994. More additions and updates are planned for 1995. DOE intends to make all the studies that it funds on exposures in or near DOE’s facilities available through CEDR. DOE officials told us that during 1995 they plan to add new data sets to CEDR and update some of the existing data. Among the new data DOE plans to add are analytic data sets from additional studies of workers at several DOE facilities, a summary data set of epidemiological surveillance data for one or more sites, a data set on workers who painted radium dials, and data on exposures at DOE’s Nevada Test Site. Updates are planned to the working data sets for at least two sites and the dosimetry data for several others. To determine how well CEDR meets its intended objective of being a comprehensive resource, we (1) reviewed recommendations from reports by the Secretarial Panel for the Evaluation of Epidemiologic Research Activities and National Academy of Sciences (NAS) on designing and implementing CEDR; (2) interviewed officials at DOE headquarters—including the Assistant Secretary for Environment, Safety, and Health; the Acting Director of the Office of Epidemiology and Health Surveillance; and the CEDR Program Coordinator—and contractor staff at the Lawrence Berkeley Laboratory concerning the current status of CEDR; (3) reviewed relevant DOE directives, program plans, progress reports, and documentation on CEDR; (4) interviewed eight of the nine members (attempts to contact the ninth member were unsuccessful) of the NAS committee responsible for monitoring and reporting on DOE’s progress on CEDR; and (5) interviewed the officials from the National Institute for Occupational Safety and Health and the National Center for Environmental Health who were responsible for the studies conducted under the memorandum of understanding between DOE and the Department of Health and Human Services (HHS). To determine how accessible and usable CEDR is for outside researchers, we also (1) obtained authorization from DOE to become CEDR users and accessed and reviewed various files in the system and (2) interviewed CEDR users about their experiences with the system.
We also discussed these issues with the officials on the NAS committee and at HHS mentioned above. We performed our review between February 1994 and May 1995 in accordance with generally accepted government auditing standards. We discussed the facts presented in this report with CEDR program officials at the Lawrence Berkeley Laboratory and officials at DOE headquarters and incorporated their views where appropriate. As requested, we also provided a draft of this report to DOE for comment. Although DOE did not formally respond within the 15 days allowed, the views expressed by the Acting Director of the Office of Epidemiology and Health Surveillance and our evaluation of them are presented in the Agency Comments section of this report.

Major contributors to this report:
Margie K. Shields, Regional Management Representative
Randolph D. Jones, Evaluator-in-Charge
Daniel F. Alspaugh, Evaluator
Jonathan M. Silverman, Communications Analyst
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) epidemiological database, focusing on: (1) whether the database functions as a comprehensive repository of epidemiological data about DOE workers and the communities surrounding DOE facilities; (2) whether the system is accessible to outside researchers; and (3) DOE's future plans for the system. GAO found that: (1) the current DOE epidemiological database is not as comprehensive as originally envisioned because it lacks uniform data on laboratory workers' exposure to radiation and other hazardous substances and the health of these workers and residents near DOE facilities; (2) although DOE is trying to standardize its data and develop a more comprehensive employee health surveillance program, it will be at least three years before these goals are reached; (3) although the database is easily accessible, few independent researchers have used it because the data are of limited value for new research; (4) data problems include the lack of raw or updated data, missing and inconsistent data elements, and inadequate research documentation; (5) researchers often have to examine original records, which may be difficult to obtain, to get complete information; (6) DOE is uncertain whether the database will ever be as comprehensive as originally envisioned, and it has not undertaken specific long-range plans to make it a comprehensive system; and (7) DOE has not assessed whether the current database or an alternative system would be the most cost-effective and practical means of providing researchers with needed data.
With the end of the Cold War, the number, scope, and size of operations other than war have increased dramatically, and the United States has become an active participant in some of these operations. Senior administration officials have testified that multilateral peace operations are an important part of this administration’s national security strategy, albeit not the centerpiece of U.S. foreign policy. These officials have stated that the United States must be willing to act to preserve peace and stability in order to advance and protect U.S. interests in the world. This in turn demands that the United States encourage the successful conduct of multilateral peace operations and, when it is in the United States’ interests, participate in these operations. U.S. military forces have been participating in peace operations for almost 50 years, with limited numbers of personnel. However, as the number, size, and scope of peace operations have increased dramatically in the past several years, the nature and extent of U.S. participation have changed markedly. Recently, the United States has used much larger numbers of combat and support forces to respond to events in a number of locations, including Somalia, Macedonia, Bosnia, Haiti, Rwanda, and Iraq. (See table 1.1.) For example, while the United States has approximately 1,100 military personnel committed to the Multinational Force and Observers for the 12-year operation on the Sinai Peninsula, it deployed approximately 26,000 military personnel to Somalia starting in December 1992 and approximately 20,000 to Haiti beginning in September 1994. While U.S. participation in peace operations has increased, the size of the armed forces has declined over the past 8 years. From a post-Vietnam War peak of 2.2 million in fiscal year 1987, the active armed forces have been reduced to an authorized level of 1.5 million in fiscal year 1995. Peace operations tend to be sustained rather than short-term operations and sometimes have required extended force commitments from the U.S. military services. U.S. military forces continue to maintain a 12-year commitment to the Multinational Force and Observers on the Sinai Peninsula, a 3-year commitment to Operation Provide Comfort in northern Iraq, and were committed to Operation Restore Hope in Somalia for almost 2 years. Numerous units provide forces during these operations and are rotated to ensure a ready presence. During Operation Restore Hope, the Army rotated forces to and from Somalia approximately every 4 months. The Air Force tends to rotate its aircrew more frequently. In peace operations such as Provide Comfort, Provide Promise, Deny Flight, and Southern Watch, it rotated forces every 3 months. In addition to the forces deployed, additional forces are preparing to deploy or have recently redeployed. The continuing force drawdown has compounded challenges for the U.S. military in responding to extended peace operations. All four services have experienced reductions in personnel and equipment that have forced military planners to reevaluate how the services will respond to peace operations and major regional conflicts (MRC). For example, with the reduction in the number of overseas bases and forward-deployed forces in Europe, the Army and the Air Force have returned part of their Cold War-era European force structure to the United States and decommissioned some units.
The forces that remained in the force structure, which once could have responded to peace operations from forward locations, now may have to be augmented by forces from the United States. The former Chairman and Ranking Minority Member of the Subcommittee on Oversight and Investigations, House Committee on Armed Services, asked us to review the suitability of the current U.S. force structure for peace operations. They wanted to know whether the U.S. military had the capabilities necessary to operate effectively in a peace operations environment, while maintaining the capability to respond to two nearly simultaneous MRCs. We did not assess whether the United States should participate in peace operations. We examined (1) the impact that peace operations have on U.S. military forces, (2) force structure limitations that may affect the military’s ability to respond to other national security requirements while engaged in peace operations, and (3) options for increasing force flexibility and response capability. To determine the impact of peace operations on U.S. military forces, we held discussions with personnel who participated in recent peace operations. We also reviewed after-action reports and situation reports and conferred with service, unified command, and Office of the Secretary of Defense officials to identify the units involved, their level of participation, the types of capabilities provided, and the problems encountered in providing these capabilities. In addition, we reviewed the before- and after-deployment personnel and equipment readiness reports of some participating units and interviewed (1) officials responsible for the readiness of these forces and (2) some of the forces that participated in these operations. To determine the effect on the Army of participating in peace operations, we reviewed the experiences of combat and support forces who participated in Operation Restore Hope in Somalia and in a number of other smaller operations such as the Multinational Force and Observers in the Sinai. However, we focused our efforts primarily on Operation Restore Hope, the largest Army peace operation deployment to date. We also reviewed the plans for employment of Army forces in Bosnia should a peace plan be implemented. The operations in Rwanda and Haiti took place after we completed the bulk of our work, so we were not able to fully address them. As a means of determining the effects of peace operations on the Air Force, we selected four of the specialized U.S.-based platforms identified by the Air Force as most affected by participation in peace operations, reviewed data concerning their participation, and interviewed aircrew and maintenance personnel involved in the missions. Similarly, we analyzed data and met with military personnel concerning heavily tasked Air Force units based in Europe. We concentrated our efforts on peace operations involving relatively large numbers of Air Force units, such as Operations Provide Comfort in Northern Iraq, Southern Watch in Southern Iraq, and Provide Promise and Deny Flight in Bosnia. For the Navy, we compared pre-Desert Storm Sixth Fleet aircraft carrier deployments in the Mediterranean area with current Sixth Fleet deployments where the U.S. Navy is supporting Operations Deny Flight and Sharp Guard. We also briefly reviewed Navy participation in Haitian and Cuban operations in the Caribbean.
We focused on the Marine Corps’ participation in Operation Restore Hope in Somalia since it was the Marine Corps’ largest involvement in a peace operation to date. To determine whether there are force structure limitations that may affect the military’s ability to respond to other national security requirements while engaged in peace operations, we held discussions with the Office of the Secretary of Defense and unified command and service officials, including officials associated with MRC planning. Using the national security requirements in the bottom-up review as our criteria, we obtained data describing the capabilities necessary to respond to an MRC within the initial days of conflict. We then compared this with the capabilities that had recently been used in peace operations and the total number of the same capabilities available in the active force. We also discussed with officials from each of the military services the actions that would be necessary to disengage from a peace operation in order to deploy to an MRC. To identify options for increasing force flexibility and response capability for peace operations, we reviewed pertinent documents and interviewed senior service, unified command, and other Department of Defense (DOD) officials to obtain information concerning proposed initiatives and options. During the course of this review, we did not examine the adequacy of the funding for DOD’s participation in peace operations or the impact of participation on DOD’s planned spending. We are examining these issues as part of a separate request of the Subcommittee on Military Readiness, House Committee on National Security, and will report the results separately. Our review was conducted primarily at Army, Navy, Air Force, and Marine locations, the Office of the Secretary of Defense, and component and unified command headquarters within the United States and Europe. We contacted by telephone any relevant organizations we did not visit, such as the 7th Transportation Group at Fort Eustis, Virginia; the Military Police Center and School at Fort McClellan, Alabama; the 57th Wing at Nellis Air Force Base, Nevada; the 27th Operations Group at Cannon Air Force Base, New Mexico; the 552nd Operations Group at Tinker Air Force Base, Oklahoma; and the 7th Air Command and Control Squadron at Keesler Air Force Base, Mississippi. Our review was performed from August 1993 to July 1994 in accordance with generally accepted government auditing standards. We obtained DOD comments on a draft of this report. Peace operations have affected each of the military services differently. These operations heavily stress some U.S. military capabilities, including certain Army support forces such as quartermaster and transportation units and specialized Air Force aircraft, while having less impact on other forces, such as Army armored combat divisions and general purpose Air Force combat aircraft outside Europe. In the Army, a large percentage of certain support capabilities in the active component have been used for peace operations. Most of these support capabilities are in the reserves and, for the most part, the reserves have not been activated for use in peace operations. The adverse impact on these support forces has been further exacerbated because the Army frequently borrows people from one unit to supplement another that lacks sufficient personnel to deploy and assigns some personnel to the same operation more than once, or to consecutive operations, because of the high demand for their capability.
In the Air Force, peace operations have placed considerable stress on the relatively limited number of forces providing specialized capabilities and on forward-deployed units in the European theater. The increased flying hours necessary to support these operations have resulted in extended temporary duty in excess of established goals, increased aircraft maintenance, cannibalization of home station aircraft, and missed training. Peace operations have not been as disruptive to the Navy and the Marine Corps. However, forward-deployed naval forces have experienced increased operating tempo and, in some cases, reduced time to prepare for deployments, both of which have limited the forces’ availability for training. Naval officials point out, however, that in many cases, peace operations have exposed the naval services to unique experiences in joint and coalition operations. Certain kinds of Army combat support and combat service support capabilities, including quartermaster and transportation companies, are critical in peace operations. The need to establish and provide continued infrastructure support for U.S. military forces, coalition forces, and the local population is the key reason support forces are needed in peace operations. The type and amount of support differs with each operation, depending on the mission and the nature of the operating environment. Peace operations often occur in austere locations with limited electric power, roads, water, port facilities, and airfields. As such, support forces have played an important role in establishing and sustaining a working infrastructure, not only for U.S. forces but also for coalition forces and the local population. In Somalia, for example, the Army encountered an environment completely devoid of any useful infrastructure and had to refurbish or build even the most basic of facilities. If nation building is part of the military mission, support forces are additionally burdened with tasks such as building schools, hospitals, and local housing and establishing police and other civil administration services. Operational and environmental challenges further tax support forces. In Somalia, for example, the area of responsibility for U.S. and coalition forces consisted of approximately 21,000 square miles in the southern half of the country, with U.S. military and coalition forces dispersed over considerable distances throughout the country. As shown in figure 2.1, Mogadishu is more than 200 miles from Kismayo (a key Army location) and about 200 miles from the Marine base in Bardera, which in turn is about 200 miles from Kismayo. Support forces had to frequently move between these locations to deliver food, water, fuel, and other supplies. To the extent possible, decentralized support operations were established at various locations throughout the country to reduce the time spent moving between locations. In some cases, however, this placed even greater stress on support forces because they had to divide already limited support assets. For example, the 10th Mountain Division’s 710th Main Support Battalion divided some of its water teams so that they could provide water purification capabilities at additional locations. Combat forces also have played a significant role in peace operations. However, because more of these forces are in the active component, a larger number of them have been available for peace operations. Armored combat divisions have had limited involvement.
The Army’s capacity for providing unique support capabilities exceeds that of any other military service or nation. Yet, most of these support capabilities are in the reserves and, except for volunteers, the Army has been authorized to draw on reserves for peace operations only once—in September 1994 for the operation in Haiti. Without a presidential decision to call up reserve forces, the Army has had to draw upon the smaller number of active forces and reserve volunteers to meet support requirements. In some cases, nearly all the active units for a particular support capability deployed to a peace operation. For example, 75 percent of the petroleum supply companies in the active force structure deployed to Somalia. Similarly, 67 percent of the medium petroleum truck companies and 100 percent of the air terminal movement control teams deployed to Somalia. Table 2.1 provides a list of selected Army capabilities within quartermaster, transportation, engineering, and miscellaneous support units that experienced heavy deployments to Somalia. To prioritize scarce resources, many of the Army’s active support units are assigned fewer people in peacetime than are required to perform their wartime missions. If the Army’s early-deploying support units were needed for war, the Army would supplement the units with people and equipment from other active and reserve units. After the Army restructured its forces in the mid-1980s, we reported that its goal was to authorize combat units, which are the chief means of deterrence, to be staffed at 100 percent of their wartime requirements and support units to be staffed at an average of 90 percent of their wartime requirements. Officials of the XVIIIth Airborne Corps, the most ready and best resourced of the Army corps, advised us that units deploying to Somalia needed 100 percent or more of their authorized people and equipment in order to meet operational requirements. Most units did not have the people, and many did not have the equipment to satisfy this requirement. For example, almost half of the XVIIIth Airborne Corps’ First Corps Support Command units were authorized 90 percent or less of their wartime-required people, and several support units were authorized 80 percent or less. Other corps support commands, such as the Third Corps’, which provided initial corps support for operations in Somalia, are resourced at an even lower level than the XVIIIth Airborne Corps. The Army supplemented the personnel-deficient units deploying to Somalia by borrowing from other units throughout the Army force structure. This practice is known as “cross-leveling.” Cross-leveling has occurred at both the division and corps level. For instance, the 210th Forward Support Battalion, an element of the 10th Mountain Division, took people and equipment from the Division’s 46th Forward Support Battalion and the 710th Main Support Battalion before deploying to Somalia. The 710th Main Support Battalion also supported the 46th Forward Support Battalion’s deployment, thereby creating a domino effect within the 10th Mountain Division. According to the 710th commander, the battalion deployed with fewer than all its people and equipment. Thus, the remaining people had to make do with less. People from some units rotated more than once to the same peace operation or deployed to consecutive peace operations and/or participated in domestic relief operations because of the high demand for their particular capability.
For example, almost all of the people from the XVIIIth Airborne Corps’ 364th Direct Support Supply Company that deployed for the Hurricane Andrew relief operation also deployed to Somalia within the next year. Other units within the XVIIIth Airborne Corps had similar experiences. According to Army officials, support personnel from other Army units rotated more than once to Somalia. The 10th Mountain Division, which responded to the Hurricane Andrew relief operation and to Operation Restore Hope in Somalia, also deployed to Operation Uphold Democracy in Haiti in September 1994 to provide the predominant Army force in support of this peace operation. According to Army officials, approximately 40 percent of the participants in the Haiti operation had also participated in the Somalia operation less than 1 year earlier. Cross-leveling and frequent deployments in turn affect the ability of a unit’s non-deployed elements to meet their operational responsibilities. A combat support group headquarters has considerable responsibility, particularly as part of the XVIIIth Airborne Corps. When approximately 150 of the 180 military personnel from the XVIIIth Airborne Corps’ 507th Combat Support Group Headquarters deployed to Somalia for several months, they left approximately 30 headquarters personnel at Fort Bragg, along with the group’s three battalions, without any additional augmentation. The headquarters was still responsible for (1) supporting the group’s three battalions, (2) supporting the Multinational Force and Observers rotation, (3) conducting logistics operations missions on the installation, and (4) preparing quarterly training briefs to XVIIIth Airborne Corps. In addition, several of the remaining personnel had to participate in two emergency deployment and redeployment exercises and conduct testing and a major briefing for the Army Chief of Staff. In order to cope with the absence of so many headquarters personnel, many operational requirements were decentralized to the battalion level. In some cases, remaining headquarters personnel (1) took on responsibilities typically assigned to more senior personnel and (2) carried double and triple workloads throughout the deployment period. Until recently, the President had elected not to activate reserve personnel for use in peace operations. Therefore, only reserve volunteers have participated in most peace operations. This policy has posed particular difficulties because, as shown in table 2.2, many of the support capabilities most heavily relied upon in recent operations reside predominantly in the reserves. The Army relied on many reserve volunteers in the Somalia operation. While Army volunteers have been helpful, the volunteers available are not always the ones with the specific capabilities, equipment, and training required for the peace operation. Furthermore, individual volunteers do not meet the Army’s requirement for units, in which a group of individuals are trained and organized to perform a mission as a cohesive entity. For example, when Army planners needed a postal unit for operations in Somalia, they created a unit from available volunteers. This process proved to be time-consuming, taking 1 month to create a 49-person postal unit. The recent initiative for using reserve volunteers for the peacekeeping operations in the Sinai has also been time-consuming due to planning and procedural processes associated with activating approximately 420 reserve personnel.
The reserve volunteers will be ready to deploy to the Sinai by January 1995 after completing 3 to 6 months of training. More senior personnel will train longer. While there has been no shortage of volunteers for the current deployment, Army officials are concerned that they will not be able to recruit enough volunteers to continue this on an annual basis. Therefore, the Army is considering the use of volunteers for every third rotation. The Army’s experience in Somalia illustrates the challenges that could lie ahead if the United States chooses to deploy forces to Bosnia or to other peace operations throughout the world. The Army will likely send at least a division-size force to Bosnia if a peace plan is signed. This could have almost three times the impact on the Army of the Somalia operation, which generally required one-third the number of forces designated for Bosnia. Military police units, in particular, have been kept extremely busy as a result of peace operations. In September 1994, 40 percent of the military police combat support companies stationed in the United States were deployed to Guantanamo Bay supporting the Cuban and Haitian refugee operation. Three other companies were deployed to Suriname, Honduras, and Panama, leaving just 13 companies to patrol nine installations in the United States. According to an Army official, this is a problem because many installations require more than one military police combat support company for patrol duties. Because demand for military police, driven mostly by the refugee crisis, has exceeded the number of military police units available in the Army’s force structure, Army infantry units have been used to help meet military police deployment requirements. For example, upon completion of their rotation to Guantanamo Bay, military police companies will return home while rifle companies rotate to Cuba. According to an Army official, while rifle companies will undergo 2 weeks of training to perform the military police function, the training will not provide them with the full breadth of skills that military police possess. The Army will continue to face challenges in responding to sizable peace operations if reserve forces are not activated. The need for reserve activation depends on a variety of factors, such as the size of a peace operation and the number of such operations ongoing at one time. For example, Army officials stated that if the United States participates in enforcing a peace agreement in Bosnia, with an Army deployment of approximately 22,000 soldiers, access to the reserve component could be required for the second 6-month rotation because the large support requirement exceeds the number of active forces available in certain support capabilities. According to Army officials, reserve forces would also likely be required if a number of smaller size peace operations were ongoing at one time. On September 15, 1994, the President authorized the Secretary of Defense and the Secretary of Transportation to call to active duty about 1,900 Selected Reserve military personnel in the Army, Navy, Air Force, Marine Corps, and Coast Guard to support operational missions in Haiti. The call-up included reservists in specialties such as tactical airlift, aerial port operations, military police, medical support, and civil affairs. These are specialties whose capabilities reside mostly in the reserve component. In regard to this activation, the Secretary of Defense stated that DOD “. . .
cannot conduct operations involving significant numbers of personnel and amounts of equipment being moved without using the Reserves.” Since Operation Desert Storm, the Air Force has responded to numerous, and often simultaneous, peace operations throughout the world on a sustained basis. While these operations have provided valuable experience in joint and coalition operations, they also have taxed the Air Force’s specialized capabilities and the units that are forward deployed in the European theater, where most recent operations involving the Air Force have occurred. The Air Force’s participation in these operations has resulted in extended tours of duty, missed training, increased maintenance on aircraft, and cannibalization of aircraft. There are some reports that the stresses on personnel are affecting morale and families. The Air Force has used reserve force volunteers to relieve part of the operational burden on these forces. The Air Force’s specialized support aircraft provide reconnaissance, surveillance, command and control, and other capabilities that are often not available from other services or nations. This report focuses on four of these specialized aircraft, all of which (except two E-3B/C aircraft) are based in the United States—the EC-130E Airborne Battlefield Command and Control Center (ABCCC), for command, control, and communications, and on-scene tactical battle management; the EF-111 Raven, for suppression of enemy air defenses; the E-3 Airborne Warning and Control System (AWACS), for surveillance and command and control; and the F-4G Wild Weasel, for suppression and/or destruction of enemy radars. The Air Force has relatively few of these specialty aircraft in the active component, and they are being used in an increasing number of peace operations, most of which require a sustained presence. For example, as shown in table 2.3, in June 1994 more than 40 percent of available E-3 AWACS, EC-130E ABCCC, and active component F-4G aircraft were being used in peace operations. Participation in multiple peace operations by a limited number of specialized U.S.-based assets has resulted in increased flying hours for those aircraft involved. This has led to additional wear on the aircraft and more frequent intermediate and phase maintenance. For example, aircraft in the only F-4G squadron in the active component, the 561st Fighter Squadron, are undergoing major phase maintenance every 4 to 6 months versus every 7 to 8 months a year earlier. Similarly, EF-111 maintenance officials noted that maintenance teams now must work longer to achieve desired results over a shorter time span than normally required. In order to support increased peace operation flying hour requirements and maintain the operational effectiveness of forward-deployed forces, the home station has had to share key operational and support personnel with the deployed portion of the squadron. At times, the home station has gone without certain equipment and supplies to ensure that deployed forces can operate effectively. For example, the 7th Air Command and Control Squadron, the only EC-130E ABCCC squadron in the force structure, had to cannibalize home station aircraft and use their parts to support the squadron’s forward-deployed aircraft when parts were not available from other sources.
Due to the extended nature of these operations, participating forces periodically rotate their aircrews, maintenance personnel, and aircraft in order to maintain a continuous ready presence in theater and reduce stress on aircraft and personnel. The Air Combat Command has established 120 days as the recommended maximum number of temporary duty days that Air Combat Command personnel should accrue in a year. However, because of the increasing number of peace operations, personnel associated with specialty aircraft have spent an increased number of days on temporary duty, away from their home bases. In 1994, EF-111 and F-4G personnel approached that 120-day maximum. According to one of their senior commanders, the F-4G’s deployment schedule for 1994 indicates that many individuals will be on temporary duty for about 180 days. According to squadron officials, the increased number of temporary duty days has affected the morale of Air Force personnel participating in peace operations and their families. Some Air Force personnel believe that this increase in temporary duty days is contributing to increased instances of divorce and decisions to leave the Air Force, although no direct link has yet been formally documented. Aircrews flying extended hours in peace operations sometimes do not get the opportunity to train to the broad range of skills necessary for maintaining combat efficiency. For example, while deployed in support of Operation Provide Comfort, F-4G aircrews conducted lethal suppression of enemy defenses but were unable to remain proficient in formation take-off and landing events, night intercept operations, and advanced aircraft handling characteristics. In addition, according to squadron officials, aircrews maintained weapons qualifications at minimum proficiency while participating in peace operations. Without this training, aircrews do not meet the technical requirements needed to qualify for participation in a high-threat combat environment. On a selected basis, wing commanders can waive certain training requirements for aircrew participating in operations that prevent them from completing all required training. According to senior Air Force officials, the number of waivers granted recently has far exceeded those granted prior to Air Force involvement in these sustained operations. During the January through June 1994 training cycle, 30 of the 71 aircrew personnel of the only F-4G squadron in the active component required a waiver for at least one Graduated Combat Capability event. Similarly, 29 of the 61 aircrew in the only EF-111 squadron required one or more waivers for events to which they could not train. Squadron officials attribute most, if not all, of these waivers to extensive participation in peace operations. The Operations Group Commander, to whom the EF-111 squadron reports, considers the events waived to be critical mission areas. According to the commander, if a large number of aircrew personnel are not flying the number of sorties required by the Air Combat Command, overall squadron and wing combat capability will suffer. While there were no waivers received by E-3 AWACS aircrews for the training cycle ending June 30, 1994, squadron officials said that they still have training concerns. The AWACS Operations Group Commander noted that the quality of the training conducted from home station and/or at exercises is significantly greater than that logged on deployed sorties.
However, in general, approximately 50 percent of the aircrews’ training requirements were accomplished on deployed sorties. While the training was completed, the commander believes that the aircrews did not receive the quality training they needed. As a means of ensuring quality training in the future, an Air Combat Command task force is reviewing Graduated Combat Capability training regulations. In addition, according to Air Force officials, the number of deployed E-3 AWACS aircraft will be reduced so that there will be more available at the home station for training. The reduction will be felt in the drug interdiction program. Since the end of Operation Desert Storm in 1991, three peace operations requiring substantial and sustained Air Force participation have occurred in the European theater of operations—Operations Provide Comfort, Provide Promise, and Deny Flight. These operations, combined with reductions in the U.S. Air Forces in Europe’s (USAFE) force structure—from 8.8 to 2.3 fighter wing equivalents—and corresponding squadron relocations, have resulted in many of the same conditions experienced by specialized U.S.-based assets participating in these operations, such as increased flying hours, high temporary duty rates, and missed training opportunities. In addition, because recent peace operations have occurred in parts of the European theater where the Air Force has not maintained a permanent presence, a significant number of USAFE personnel have been required to build and maintain infrastructure from which to base forces. Weapons training deployment facilities in Aviano, Italy, and Incirlik, Turkey, had to be expanded greatly in order to accommodate the large numbers of military personnel supporting Operation Deny Flight and Operation Provide Comfort. The Air Force constructed tent cities in these two locations to provide additional housing and other services for deployed personnel. With the reduction of forward-deployed squadrons in the European theater, considerable portions of some USAFE capabilities have been dedicated to peace operations. For example, USAFE has two F-15E squadrons designed for delivering precision-guided munitions at night in a high-threat environment. For more than a year, about 14 aircraft from both squadrons, which have a combined total of about 48 aircraft, have been participating in Operations Provide Comfort and Deny Flight. The F-15E’s night navigational and targeting system and high resolution radar have been valuable in identifying ground targets during these operations. Similarly, USAFE has one A-10 squadron, which provides close air support and forward air control. Twelve of its 21 aircraft have been participating in Operation Deny Flight for more than a year. According to Air Force officials, although not all the squadrons’ aircraft were involved in the operation at any one time, peace operations affect entire squadrons because they are structured to fight in place or deploy as a whole unit rather than in smaller packages. Recent peace operations in the European theater have also placed a heavy demand on USAFE’s C-130 Hercules, which provides intra-theater airlift capabilities. The Air Force has only one active C-130 squadron in the European force structure, and almost the entire squadron—17 of 19 aircraft—has been participating in peace operations in the European theater. Operation Provide Promise’s missions into Bosnia have required the heaviest use of C-130 assets.
The squadron’s capabilities were supplemented by reserve aircraft from the United States; nevertheless, the squadron had to curtail training in certain skill areas in order to fly scheduled airlift missions between bases to deliver supplies and participate in Operation Provide Promise. USAFE, which had primary responsibility for responding to these operations since they occurred within its area of responsibility, met operational requirements with its own forces as much as possible. This is traditional Air Force practice. Where USAFE did not have the necessary assets (such as the E-3 AWACS) or had shortfalls (such as in C-130s), it sought augmentation from outside Europe. To the extent other USAFE assets could have been augmented with active-duty units from the United States, such as in the case of the F-15E aircraft, some of the adverse impact of participation in these peace operations might have been mitigated. In commenting on a draft of this report, DOD noted that the Air Force has recognized these challenges and is addressing them by relying more on active, reserve, and Guard units based in the continental United States, which have deployed to Operations Provide Comfort and Deny Flight to relieve some of the operational burden. Deploying to peace operations from bases in Europe or the United States has created planning and logistics challenges for the Air Force because essential unit equipment and personnel have to be shared by the forces at the home base and in the deployed location. These split operations have had a significant impact on home bases, which sometimes have had to make do with a reduced number of maintenance and operational personnel and essential unit equipment to ensure that the deployed forces maintain a high state of readiness. Even if a squadron deploys fewer than half of its aircraft, the effect on the home base is still significant because key operations and maintenance personnel and equipment must deploy to support the aircraft. According to Air Force officials, split operations challenges exist because Air Force squadrons are still structured to fight in place or deploy as a whole unit rather than in smaller packages as they are doing for peace operations. According to squadron personnel, split operations impede squadron-wide communication processes and long-term squadron planning, and tax senior squadron leaders who often have to perform the jobs of their absent colleagues in addition to their own. According to one squadron commander, it is difficult to plan the future vision for the squadron because the squadron’s senior leaders are geographically separated. Split operations create other personnel challenges as well. Operations and maintenance personnel rotate between the home station and the peace operation. For example, according to USAFE officials, aircrews from USAFE’s A-10 squadron deploy to Operation Deny Flight for an average of 6 to 9 weeks and remain at the home station for varying periods of 2, 5, or 7 weeks. Maintenance personnel remain deployed for 90 days. While at home station, personnel must train and attend to squadron administrative responsibilities. According to the squadron commander, this allows personnel minimal time for leave and attending to family responsibilities before rotating again to the peace operation. Many USAFE squadrons participating in peace operations on a sustained basis have found it difficult to attend major training exercises at the same time they are participating in a peace operation.
According to squadron and wing officials we talked with, the squadrons do not have enough people or equipment to support the peace operation, home station requirements, and a training exercise concurrently. Because of their participation in peace operations, both of USAFE's F-15E squadrons have had to reduce their involvement in training exercises or cancel their participation altogether. For example, the squadrons were not able to participate in major tactical air combat exercises, such as Maple Flag, a Canadian exercise similar to Red Flag, which would have provided them with realistic combat training. This type of training is particularly important for these F-15E squadrons because they were established in 1993 and have not had the opportunity to participate in a major tactical air combat exercise. While USAFE squadrons have not deployed all their forces to peace operations, the forces remaining at the home station often find it difficult to maintain enough aircraft to conduct home station training. For example, beginning with its initial deployment in July 1993, USAFE's only A-10 squadron provided 12 of its 21 aircraft on hand to support Operation Deny Flight. Of the remaining nine, two were undergoing phase maintenance inspections at the home station, one was undergoing depot repair, and one was used for spare parts in support of forward-deployed aircraft. Thus, only five of the remaining aircraft were available for pilot training sorties at the home station. Because of the limited number of available aircraft, the remaining aircrews were able to fly only the minimum number of hours needed to maintain mission-ready status. On the occasions when an additional aircraft had to be dedicated to Operation Deny Flight, the squadron did not have enough aircraft available to meet training needs. According to squadron officials, this was also true for USAFE F-15E, F-15C, F-16, and C-130 aircrews. The Commander of USAFE's A-10 squadron identified four training events that could not be accomplished at Operation Deny Flight because of various restrictions in the operating theater. These events also were difficult to accomplish at the home station because of environmental and other restrictions on low-level flight (below 500 feet), target marking, full-scale weapons delivery, and certain types of approaches. Had the squadron not been participating continuously in Operation Deny Flight, it would have had the opportunity to deploy elsewhere for this training. As is the case with certain U.S.-based squadrons, aircrews from Europe-based squadrons participating in peace operations have also had to obtain waivers for training requirements they were not able to satisfy during the last training cycle. According to the squadron and wing officials we interviewed at home stations and deployed locations, pilot proficiency in a low-threat environment is at an all-time high because of the nature of the missions over Bosnia and northern Iraq. However, proficiency in high-threat, low-altitude mission profiles has suffered and will continue to suffer as long as training opportunities and peace operation mission taskings remain at their present levels. As shown in table 2.4, for example, all of the aircrews in USAFE's two F-15E squadrons obtained waivers for one or more training events they were not able to accomplish during the 6-month training cycle ending June 30, 1994. Aircrews received waivers in areas such as Night Weapons Delivery, Air Combat Maneuvers, Air Combat Tactics, and Basic Fighter Maneuvers.
As mentioned earlier, USAFE's only C-130 squadron had to curtail training in order to meet its peace operation and normal operational requirements. However, after March 1994, its operational requirements for Operation Provide Promise declined significantly. As a result, C-130 aircrews did not require training waivers for the training cycle ending June 30, 1994. At the height of Operation Provide Promise, squadron aircrews required training waivers for two consecutive periods ending June 30 and December 31, 1993. For these training cycles, 42 and 52 percent of squadron cockpit crews required 102 and 127 training waivers, respectively. Squadron aircrews received training waivers in critical areas such as night vision profiles and assault approaches. In September 1994, the newly appointed USAFE Commander acknowledged that USAFE units were having difficulty accomplishing their training tasks because they are supporting peace operations. He noted that operations such as Deny Flight and Provide Comfort are competing for combat training time and causing combat skills to atrophy. According to the Commander, fighter pilots need to practice intercepts, bomb dropping, and air-to-air combat, yet they do not typically get this experience during the course of a peace operation. He stressed that USAFE can no longer continue to accept degraded levels of training. As noted earlier in this chapter, the Air Force is now relying more on active, reserve, and Guard units based in the continental United States to relieve some of the operational burden. Air Force reserve volunteer participation in peace operations has more than doubled since fiscal year 1991. Reserve forces have participated in such major operations as Restore Hope, Provide Comfort, Provide Hope, and Southern Watch, as well as in other, smaller international peace operations and domestic disaster relief operations. In some cases, reserves have been needed to meet mission requirements that active forces were unable to fulfill. For example, since there is only one F-4G squadron in the active component and it is participating in Operations Provide Comfort and Southern Watch, reserve F-4Gs have had to augment Operation Southern Watch. In particular, the 190th Air National Guard Fighter Squadron deployed to Southwest Asia in support of Operation Southern Watch in December 1993, within a year of returning from another Southern Watch deployment. According to squadron officials, the 190th Fighter Squadron was deployed 12 of 18 months during this period. In other cases, reserve volunteers have provided operational relief to active forces. For example, from November 15, 1993, to January 15, 1994, and again during the summer of 1994, reserve A-10 personnel and aircraft from the United States relieved USAFE's A-10 squadron so that its personnel could attend scheduled training at Nellis Air Force Base. Operational relief for other USAFE aircraft was provided by F-16, KC-135, C-141, and C-5 reserve aircraft from the United States. In addition to providing this operational relief, reserve forces still have had to meet most of their individual and unit training requirements; attend exercises; and satisfy other operational responsibilities for local, state, and federal agencies, such as providing assistance in weather reconnaissance, disaster relief, aeromedical evacuations, and counternarcotics. The majority of the Air Force's C-130s are in the reserve component.
Given Operation Provide Promise's extensive C-130 requirements and USAFE's relatively small number of C-130s, the Air Force looked to reserve aircraft and personnel to meet mission requirements. Initially, reserve aircraft and personnel augmented USAFE's only C-130 squadron. However, in January 1994, because an increasing number of U.S.-based aircraft and personnel were needed, the Air Force formed another squadron, known as the Delta squadron. This squadron consisted of reserve and active C-130 aircraft and personnel operating out of Germany. Aircrews and maintenance personnel rotated every 2 to 3 weeks. The reserve deployments allowed the active component C-130 squadron in Europe to reduce its flying hours and subsequently increase its mission capable rates. As of May 1994, volunteer Air Force reservists had flown approximately 62 percent of the airlift sorties in support of Operation Provide Promise, although by that time the need for reservists to support the operation had dropped as operational demands diminished. While reservists generally are willing to participate in these operations, Air Force Reserve and National Guard officials noted that this level of reserve participation in peace operations is affecting reservists' willingness to volunteer for exercises. Certain Navy and Marine Corps units have experienced increased operating tempo and reduced time to prepare for deployments because of their participation in peace operations. The ability to obtain necessary training while participating in these operations is also becoming an increasing concern. However, peace operations have provided the naval services with unique experiences in joint and coalition operations that in many cases may be more valuable than training exercises. The Navy and Marine Corps in peacetime are inherently crisis- and contingency-oriented forces and have conducted peace operations in littoral areas since their creation. Navy and Marine Corps force structure is designed so that the naval services can maintain a forward presence and respond rapidly to crises, as well as to the war-fighting requirements of MRCs. The peacetime role of forward-deployed carrier battle groups and amphibious task forces covers the spectrum of military involvement—from single-ship port visits, maritime interdiction and blockades, humanitarian relief missions, and emergency evacuation of U.S. nationals to major amphibious operations. According to naval officials, in attempting to meet both the requirements of peace operations and normal peacetime presence commitments, naval forces have exceeded established operating tempo standards for forward-deployed forces in the Central Command, European Command, and Pacific Command areas of operation. The officials indicated that this was due in part to participation in peace operations involving Bosnia, Iraq, and Somalia and in part to the reduction in force structure and forward-deployed forces available to respond to the same or a greater number of operational commitments. While the Navy and Marine Corps have tried not to extend deployments beyond 6 months, the operating tempo has increased during deployments. This is reflected, for example, in the increased number of steaming days incurred by Navy aircraft carriers operating in the Mediterranean and adjoining seas in 1993 versus 1989 (the year before Operation Desert Shield).
Commitments to particular peace operations, such as Operations Sharp Guard and Deny Flight in Europe and Operation Southern Watch in Southwest Asia, require the sustained presence of surface ships and an aircraft carrier in the Adriatic Sea and Arabian Gulf. This often reduces U.S. naval participation in certain exercises and training. For example, in written responses to our questions, the Navy stated that several exercises had been canceled in the European and Central Commands' areas of operation, severely limiting training in anti-submarine warfare, amphibious operations, and command and control, capabilities that would be needed in a major regional conflict. Table 2.5 compares Sixth Fleet aircraft carrier deployments in 1989 and 1993 and shows a decrease in the number of days devoted to training exercises and an increase in the number of days devoted to all other operations. Postponed or canceled training has not always had a negative effect on naval forces, however. Naval officials stress that peace operations provide unique opportunities for realistic joint and coalition experience that in many cases may be better than exercises. For example, naval forces may receive better training by participating in a multilateral peace operation involving maritime and air interdiction, such as Operations Sharp Guard and Deny Flight in Europe, than by participating in a scheduled exercise with one or two other nations. Similarly, Marine support forces in Somalia obtained valuable experience building infrastructure and providing other logistical support to U.S. and coalition forces. If naval forces are pulled out of training required before a major deployment, they have to compress their training period and work longer hours to catch up when they return to port. Some of the ships that participated in the Haiti operation were taken out of single-ship basic training, such as damage control drills. The Navy considers interrupting this training less damaging to overall mission effectiveness than taking ships out of intermediate or advanced training, which requires operating with more than one unit. Much of the basic training can be done at sea, even while a ship is participating in an operation. As more ships were dedicated to supporting Cuban migrant interdiction, however, training opportunities decreased because more of a ship's crew was involved in migrant sighting, recovery, screening, care, and feeding. When the ships return to port, they have to perform in-port maintenance, training, and many administrative and operational inspections simultaneously to remain on schedule for their next major 6-month deployment. This has resulted in crew members working longer hours and has left them less time to spend with their families before a major deployment. Naval officials also told us that peace operations are resulting in reduced intermediate training, such as that at instrumented ranges for missile and gun shoots. U.S. European Command officials noted that naval aviators participating in these operations are experiencing many of the same training and operational tempo challenges as the Air Force. Participation in sustained peace operations and a reduction in forward-deployed forces have also contributed to reduced U.S. naval presence in certain geographic areas that U.S. forces had been able to visit on past deployments.
Among the results have been a reduced level of participation in bilateral exercises and training with countries that may not be participating in peace operations, fewer port visits, and fewer military-to-military exchanges. Quantifying the effects of this reduction in presence is difficult since the political and diplomatic factors at issue are somewhat intangible. Naval officials have noted, however, that some nations dedicate considerable resources to preparing for the opportunity to participate in an exercise with the U.S. Navy. When exercises are canceled, these countries do not gain experience operating with technologically superior U.S. systems and therefore may not be capable of doing so in the future should the need arise. Table 2.5 also shows the decrease in the number of days aircraft carriers spent in port during Sixth Fleet Mediterranean deployments in 1989 and 1993. The reduced number of days in port has affected the Navy's ability to conduct intermediate maintenance on its ships and equipment. According to U.S. Navy officials in Europe, there has been a 20-percent reduction in the Navy's ability to conduct intermediate maintenance in this theater, which requires time in port. They are concerned that continued delays in conducting intermediate maintenance may degrade equipment readiness and service life, particularly since peace operations tend to expose equipment to more wear and tear than would be expected during normal peacetime operations. According to the Navy, its participation in peace operations has not, thus far, had a harmful impact on its ability to perform other more traditional missions. Thus, naval units have been able to meet a variety of demands by moving within or across command boundaries—such as between the European and Central Commands—in response to emerging crises. The Navy has generally been able to maintain its policy mandating that deployments not exceed 6 months and that the period between deployments be twice as long as the last deployment. The Navy had to break this policy in some cases, however, so that ships could be made available to support Somalia operations. There were 5 such cases in 1993; through September 1994, there were 15 cases, due chiefly, according to the Navy, to operational requirements involving Somalia, Haiti, Cuba, and counterdrug missions. The Marine Corps faces similar challenges. For example, a Marine Expeditionary Unit that returned on June 23, 1994, from a 6-month deployment, including 3 months off the coast of Somalia, was sent back to sea in less than 3 weeks to support U.S. operations off the coast of Haiti. According to service officials, the Navy and Marine Corps have not found it necessary to rely upon volunteer reserve forces in peace operations to the same degree as the Army and Air Force. Naval forces are structured for daily peacetime forward presence operations that require a complete range of combat forces and capabilities to be readily available for immediate response. As a result, the majority of these forces and capabilities are in the active component. The function of the Navy and Marine Corps reserves is to augment the active component forces. Nevertheless, certain capabilities reside exclusively, or nearly so, in the Naval Reserve and are essential to many peace operations. These capabilities include units and individuals involved in cargo handling, Navy air logistics, medical fleet hospitals, and mobile construction battalions.
Recent Navy support to peace operations has included Naval Reserve search and rescue and maritime patrol support for Operations Deny Flight and Sharp Guard, as well as construction support for operations in Somalia. According to naval officials, reliance on these limited, yet important, combat support and combat service support capabilities may increase as the Navy's commitment to future peace operations continues to expand. Marine Corps forces, chiefly from the First Marine Expeditionary Force, supported operations in Somalia from December 1992 through early May 1993. Later, other Marine forces provided offshore support. While Marine Corps forces have participated in a variety of peace operations, their participation in Operation Restore Hope in Somalia represented their largest peace operation commitment. Early in the operation, the Marine Corps provided the predominant number of forces, including initial entry and sustainment forces. At its peak in January 1993, there were over 11,000 U.S. Marines in Somalia. By February 1993, however, the U.S. Army had gradually assumed the majority of the support responsibilities for U.S. and coalition forces, and the Marine Corps began to redeploy. The deployment of Marine forces to Somalia resulted in certain support units devoting a significant percentage of their capability to the operation, leaving minimal support available at the home base for use in other operations. For example, approximately 95 percent of the 1st Marine Division's Combat Engineer Battalion and half of the Division's Headquarters Battalion deployed to Somalia. The absence of the Headquarters Battalion required a secondary planning staff, the 11th Marines, to handle division operations until the main battalion returned. While the 11th Marines, which normally functions as an artillery unit, could have handled a contingency similar in size and scope to the Los Angeles riots or the Northridge earthquake, it did not have the capacity to orchestrate a response to a MRC, according to Marine officials. Had another conflict occurred while these forces were in Somalia, the Marines would have had to look to one of the other two Marine Expeditionary Forces to respond. However, since the Marine Corps' major ground participation was limited to several months and other forces were available for crisis response elsewhere, the operation had a limited impact. DOD generally agrees that recent peace operations have stressed key military capabilities and states that it is already examining various means to reduce lengthy deployments in support of peace operations and operations other than war. DOD further states that high temporary duty rates and heavy use of specialized aircraft are force management issues that have been addressed by better use of worldwide assets, heavier involvement of the reserves, and the purchase of additional and replacement aircraft. We describe DOD's efforts to address the stress peace operations have placed on key military capabilities at several points in this report and modified the report based on DOD's comments and further discussions with DOD officials. DOD disagrees with our characterization of the demand peace operations have placed on specialized Air Force aircraft. It believes that we have painted an inaccurate and misleading picture of the degree to which such Air Force capabilities are devoted to peace operations.
Our report clearly states that the aircraft numbers we cite (see table 2.3) represent the average number of aircraft available for mission-ready training or deployment to a contingency in June 1994 and that they exclude test aircraft and aircraft undergoing depot, phase, or intermediate phase maintenance. We recognize that the Air Force has more aircraft in its inventory than are available at any one time. However, we believe that in evaluating how peace operations affect military capabilities, the appropriate focus is the number of aircraft available for use at any one time. As a result of the bottom-up review, DOD concluded that the military forces needed for peace operations will come from the same pool of forces identified for use in the event of one or more MRCs. Some of the Army and Air Force forces used in recent peace operations are also needed in the early stages of a MRC; these include certain Army support units that exist in small numbers in the active Army, such as port and terminal services units and petroleum handling units, and specialized Air Force aircraft, such as the E-3 AWACS. Disengaging these forces from a peace operation and redeploying them quickly to the MRC may be difficult. Also difficult would be obtaining sufficient airlift to redeploy the forces, retraining forces to restore their war-fighting skills, and reconstituting equipment. These difficulties are significant because in the event of a short-warning attack, forces need to deploy and enter battle as quickly as possible to halt the invasion and minimize U.S. casualties. In 1993, the Secretary of Defense conducted the bottom-up review, a reassessment of U.S. defense requirements. This review, completed in October 1993, examined the nation's defense strategy, force structure, modernization, infrastructure, foundations, and resources needed for the post-Cold War era. The Secretary's report on the bottom-up review outlined the new dangers facing U.S. interests, chief among them regional aggression. To deal with regional aggression and other regional dangers, DOD's strategy is to (1) defeat aggressors in MRCs; (2) maintain an overseas presence to deter conflicts and provide regional stability; and (3) conduct smaller scale intervention operations, such as peacekeeping, humanitarian assistance, and disaster relief. To deal with the threat of regional aggression, DOD concluded that it is prudent for the United States to maintain sufficient military power to fight and win two MRCs that occur nearly simultaneously. According to the report on the bottom-up review, while deterring and defeating major regional aggression will be the most demanding requirement of the new defense strategy, U.S. military forces are more likely to be involved in operations short of declared or intense warfare. The forces responding to these other operations will be provided largely by the same collection of general purpose forces needed for MRCs and overseas presence. DOD's report on the bottom-up review states that if a MRC occurs, DOD will deploy a substantial portion of its forces stationed in the United States and draw on forces assigned to overseas presence missions. Unless needed for the conflict, other forces that are engaged in smaller scale operations, such as peacekeeping, will remain so engaged.
If a second conflict breaks out, the bottom-up review envisioned that DOD would need to deploy another block of forces, requiring a further reallocation of overseas presence forces, any forces still engaged in smaller scale operations, and most of the remaining U.S.-based forces. In determining force requirements for the two-conflict strategy, DOD assumed that forces already engaged in peace operations could rapidly redeploy to a regional conflict. In the Fiscal Year 1995 Defense Authorization Act, Congress expressed concern about the bottom-up review and the defense budget. Regarding peace operations, Congress found that U.S. forces are involved in a number of peace operations, that there was a possibility of even larger future involvement, and that many of the forces participating in peace operations would be required early in the event of one or more MRCs. Consequently, Congress directed DOD to review the assumptions and conclusions of the President's budget, the bottom-up review, and the Future Years Defense Program. The review is to consider the various other-than-war or nontraditional operations in which U.S. forces are or may be participating, and Congress directed, among other things, that the resulting report describe in detail the force structure required to fight and win two MRCs nearly simultaneously in light of other ongoing or potential operations. Congress also stated that the President should be willing to increase defense spending if needed to meet new or existing threats. We found that certain Army support forces, as well as specialized Air Force aircraft and Marine Corps prepositioned equipment and stocks, that would be needed early in a first MRC have been engaged in peace operations. The Army identified 5-1/3 active combat divisions and associated support forces that are needed in the early stages of a MRC. An additional 3-1/3 active combat divisions and associated support forces—follow-on forces—would either be deployed later in a MRC or could provide part of the response for a second MRC. The support units that accompany active combat forces are organized into seven packages. The first three packages, called Contingency Force Pool (CFP) 1-3, support the first 5-1/3 divisions. While the fourth package, CFP 4, does not support the first 5-1/3 divisions directly, it rounds out the theater support that would be required for these early deploying forces. The follow-on 3-1/3 divisions are supported principally by CFP 5-7. Army planners try to avoid using forces designated for early deployment to a MRC for contingencies such as peace operations. Although planners have been able to minimize the use of these forces in peace operations, they have had to use a large portion of some of the Army's CFP 1-3 support forces in large-scale and/or multiple peace operations because there are limited numbers of such forces in the active component. In the Somalia operation, 50 percent of the active support forces used were from CFP 1-3 units. Specifically, 92 percent of the quartermaster forces, 69 percent of the engineering support forces, 64 percent of the miscellaneous support forces, and 65 percent of the transportation forces deployed to Somalia were from CFP 1-3 units. As shown in table 3.1, certain support capabilities within those areas had an even higher percentage of CFP 1-3 units. Similarly, should a peace plan be signed and U.S. military forces deploy to Bosnia to support its implementation, the Army likely would need to draw on support forces, including CFP units, to meet support requirements.
For example, approximately 64 percent of the total number of forces planned to deploy are support forces, and approximately 14 percent of those forces would likely come from CFP 1-3 units. The Air Force anticipates needing almost all of its specialized and unique capability aircraft, such as the EF-111, F-4G, E-3 AWACS, EC-130 ABCCC, and F-15E, in the early days of a MRC. The Air Force's experience in Operation Desert Storm documents the early demand for these aircraft. For example, approximately 63 percent of the F-4G aircraft were deployed in support of Operation Desert Storm at the beginning of hostilities. According to the bottom-up review, some of these aircraft are so important to a MRC's success and exist in such limited numbers in the active force structure that they are tasked to both MRCs, even in the case of nearly simultaneous MRCs. Recent peace operations have required varying numbers of the Air Force's specialized and unique capability aircraft on a fairly continuous basis. For June 1994, we calculated that approximately 46 percent of these aircraft were involved in Operations Provide Comfort, Provide Promise, Deny Flight, and Southern Watch. According to DOD officials, participation in the enforcement of no-fly zones and other operations that require the forward deployment of U.S. forces can also enhance the ability of the U.S. military to respond quickly to regional contingencies. These officials said that this was the case in Operation Vigilant Warrior in October 1994, when having U.S. aircraft already operating from Saudi Arabia greatly facilitated the initial coalition response to Iraq's threatened aggression against Kuwait. U.S. naval forces are structured to respond to regional contingencies with their forward-deployed carrier battle groups and amphibious ready groups, which rotate on a regular basis between home ports and regional theaters. The Navy and the Marine Corps respond to many types of operations, from MRCs to peace operations, with the same forward-deployed forces. Generally, this has not been a problem because of the flexibility and rotational nature of naval forces. However, to respond to recent peace operations in the Caribbean Sea, the Navy has had to use its nondeployed forces, which were training and conducting maintenance in preparation for their upcoming scheduled 6-month deployments. The Marine Corps and the Army have prepositioned equipment and stocks afloat for use in the event of a MRC. The Marine Corps has relied on prepositioned equipment and supplies stored on its Maritime Prepositioned Ships for a quick contingency response capability. The equipment and supplies that the Marines used in Somalia came from 4 of the 13 Maritime Prepositioned Ships, which are organized into three squadrons positioned throughout the world. Each squadron is designed to provide enough ground combat and combat support equipment and supplies to sustain about 17,300 Marines for 30 days. The equipment and supplies aboard these ships are also needed to support other conflicts in which U.S. Marine forces are involved. To the extent these ships have been off-loaded to support a peace operation, their equipment and supplies are unavailable to respond to a MRC. Similarly, the Army has prepositioned equipment afloat to facilitate the rapid deployment of a heavy Army brigade. Ships from the Army's Prepositioning Afloat Program, which comprises 12 ships with combat and support equipment and supplies, were recently positioned for use in supporting the Rwanda humanitarian operation.
Five of these ships, containing support equipment and supplies, were positioned off the coast of Africa to support the operation if necessary. The need to unload the ships' equipment and supplies never arose. In early October 1994, all 12 of these ships were sent to Southwest Asia to support U.S. forces responding to Iraqi troop movements. Had the five ships positioned off the African coast been unloaded to support the Rwanda operation, their supplies and equipment likely would not have been available for use in Southwest Asia. U.S. military forces would encounter numerous challenges if they needed to redeploy on short notice from one or more sizable peace operations to a MRC. The Assistant Secretary of Defense for Strategy and Requirements stated in June 1994 that the United States would "liquidate" its commitments to peace operations in the event of two simultaneous regional conflicts. Discussions with service officials and our review of data concerning the types and numbers of forces committed to peace operations indicate that disengagement from one or more sizable peace operations and redeployment of forces to a MRC on short notice could be difficult. Obtaining sufficient airlift would be one of the primary challenges in redeploying forces from one or more peace operations to a MRC. To redeploy ground personnel and equipment from the peace operations, the already limited number of airlift assets flying from the United States to the MRC would have to divert to the peace operation, in some cases pick up personnel and equipment, and take them to the MRC. The Air Force has not yet fully studied the implications of such a redeployment and hence could not quantify the impact of this delay on its ability to meet MRC deployment requirements. Air Force officials did say that it would make a difficult situation even worse. According to Air Force officials, the Air Force's tactical forces would also encounter an airlift problem in moving from a peace operation to a MRC. While aircraft and aircrews could easily fly from one operation to another, the maintenance and logistics support needed to keep the aircraft flying—supplies, equipment, and personnel—would have to wait for available airlift. Another challenge is that certain Army contingency support forces (such as port handlers, air and sea movement control personnel, and petroleum handlers) needed in the early days of a MRC would still be needed within the peace operation theater to facilitate the disengagement and redeployment. Our analysis comparing the support capabilities needed in the first 30 days of a MRC with the contingency support capabilities deployed to Somalia found that, in some cases, 100 percent of these active component support forces were used in the Somalia peace operation. Had a MRC arisen during this time, immediate access to reserve component forces would have been necessary. According to DOD officials, the Army has recognized this challenge and is examining the issue as part of the Total Army Analysis 2003, which it expects to complete in mid-1995. According to Navy officials, the response of Navy ships to a MRC would depend more on their overall distance from the crisis location than on the operations they were conducting at the time. With some peace operations, however, Navy ships may not be directed to disengage quickly and move to a MRC.
A senior Navy official noted, for example, that it took approximately 7 months to resolve a crisis in Liberia in 1990-91 and that, until that crisis was resolved, the amphibious ready group involved was not directed to participate in Operation Desert Shield/Desert Storm. Each service faces challenges in reconstituting its forces in terms of training, equipment, and supplies in order to deploy directly to a MRC. Army officials have expressed some concern that participating in peace operations may degrade unit readiness for combat operations because of the inability to practice certain individual and collective wartime skills. In Somalia, for example, while the combat forces received extensive experience in military operations conducted in an urban environment, they were not able to practice collective training skills. According to 10th Mountain Division officials, in some cases it took approximately 3 to 6 months after the units returned from Somalia to bring these skills back to a level acceptable for combat operations. Army officials also noted that while peace operations offered the opportunity to practice and enhance logistics skills, the logistics training provided in Somalia did not substitute completely for the training that would result from a prepared training exercise, such as those at the National Training Center, in which the support forces would work with combat forces as they would in high-intensity combat operations. Marine Corps ground forces had similar experiences in Somalia. According to Air Force officials, peace operations tend to degrade the overall combat readiness of Air Force flight crews that participate in them on a sustained basis because the operations often restrict night and low-level flight operations and do not provide experience in other combat skills, such as night intercept maneuvers. Similarly, naval aviators find that they lose proficiency in some combat skills, such as air combat maneuvering, through prolonged participation in peace operations. As with the Air Force, naval aviators who participate in these operations on a sustained basis are less able to get to combat ranges where they can practice their full breadth of combat capabilities. The reconstitution of equipment used in peace operations may also hinder a timely disengagement and redeployment to a MRC. The extensive use of certain equipment, combined with the harsh environmental conditions encountered in certain peace operations, has required extensive maintenance before the equipment can be used again. For example, upon their return from Somalia, the 10th Mountain Division's UH-60 helicopters had to enter depot-level maintenance as a result of the harsh desert environment and the extensive use of these helicopters in Somalia. DOD disagrees with our conclusion that participation in peace operations could impede the timely response of U.S. forces to MRCs. It agrees that there are only small numbers of certain active support units that are likely to be needed to conduct both peace operations and MRCs. However, it believes that our resultant conclusions reflect a lack of understanding of how U.S. forces would respond to a MRC. Our conclusions in this regard focus on certain critical capabilities that exist in limited numbers, specifically certain Army support units and certain Air Force aircraft. We reached our conclusions through analysis of how these capabilities have been used in peace operations and past conflicts and of their planned use in future conflicts.
We agree that most combat forces would be readily available to respond to a MRC. In its comments, DOD states that, on the basis of the recent response of U.S. forces to the possibility of Iraqi aggression against Kuwait while U.S. forces were engaged in Haiti, it does not see any evidence that significant support unit shortfalls exist. It further states that the participation of certain Air Force aircraft in peace operations in that part of the world facilitated the response to the Iraqi movements. Because these events occurred after we had completed our audit work, we were not in a position to analyze them. Participation in large-scale and/or multiple peace operations could impede the ability of U.S. forces to respond rapidly to MRCs for several reasons. First, certain critical support forces needed in the early days of a major regional conflict would also be needed to facilitate a redeployment from the peace operation. Second, airlift assets would have to be diverted to pick up personnel and equipment from the peace operation. Finally, some of the forces would need training, supplies, and equipment before deploying to another major operation. Forces with capabilities that exist in limited numbers in the active Army and that would be needed in the early stages of a MRC have been used repeatedly in peace operations. Similar units that are not engaged in peace operations may not be able to respond quickly or effectively to MRCs because they are assigned fewer people than authorized and may have loaned some people to the units engaged in the peace operations, which exacerbates an already difficult situation. Specialized aircraft that exist in limited numbers in the active force structure, and their crews, are also being used more frequently in peace operations. The Air Force anticipates needing almost all of its specialized aircraft in the early days of a MRC. Some forces in each service are missing training and exercises, which affects their overall combat readiness and their ability to redeploy directly to a MRC. Numerous waivers have been issued for aircrews that have not been able to complete required training because of the demands of peace operations. Naval forces involved in peace operations are spending almost all their time at sea conducting operations and so have been unable to participate in some exercises and training. Peace operations are also likely to have a long-term impact on the people who participate in them, although that impact is difficult to quantify. In 1994, personnel supporting specialized aircraft approached the Air Force's recommended maximum of 120 temporary duty days away from home station in a year. In the case of the F-4Gs, squadron personnel are likely to exceed the recommended maximum by 50 percent. There are reports that increased temporary duty days for Air Force personnel are affecting their morale and their families and are contributing to increased instances of divorce and decisions to leave the Air Force. Naval personnel, unable to complete as much maintenance, training, and operational inspection work while at sea, are working longer hours in port and have less time for their families before a major deployment. A June 1994 Defense Science Board Report on Readiness notes that the amount of time individuals are away from home has been affected by, among other things, the rapid force drawdown and a higher level of contingency operations. This has increased deployment frequency and placed new strains on personnel.
The report further notes that family separation has always been a major, if not the number one, retention variable. Options are available that would allow DOD to meet the demands that participation in numerous and/or sizable sustained peace operations places on military forces while maintaining the capability to respond rapidly to MRCs. These options have their own advantages and disadvantages and would require choices on the use of the nation's resources. Although no one option addresses all the problems we have identified, a combination of these options could substantially ease them. While there are costs associated with some of these options, we have not examined their magnitude or how DOD might fund them. DOD is currently examining a range of such options. One option involves increasing the availability of support forces for peace operations by maintaining fewer combat and more support forces on active duty. At present, the Army has placed many support functions in the reserve component. For example, many units that open and operate ports overseas are in the reserve component. This capability was placed in the reserve component during the Cold War, when DOD expected that it would be able to quickly access and deploy these reserve forces if they were needed in wartime. However, the limited numbers of such support forces in the active component have been heavily used in peace operations because the Army has generally not been authorized to involuntarily access reserve units for these operations. While the Army maintains limited numbers of certain types of support capability on active duty, it maintains substantial combat capability in the active component. More support forces could be made available for peace operations if the Army maintained fewer combat forces and redirected those resources to maintaining more support forces. According to Army officials, this is one of the issues being examined as part of the Total Army Analysis, which should be completed by mid-1995. Alternatively, DOD may be able to increase the number of combat support and combat service support forces without decreasing the number of combat forces by making more use of civilian employees. We recently reported that the services use thousands of military personnel in support functions, such as personnel management and data processing, that are typically performed by civilian personnel and do not require skills gained from military experience. We further reported that replacing these military personnel with civilian employees would reduce peacetime personnel costs and could release military members for more combat-specific duties. Making greater use of the reserves would ease the burden on Army active support forces and on Air Force airlift and combat forces. Authority to call up the reserves rests with the President. Three provisions of Title 10 of the U.S. Code provide access to large numbers of reservists; one of these is section 673b, the Presidential Selected Reserve Call-Up (PSRC). This section provides access to 200,000 members of the Selected Reserve for up to 270 days and requires only that the President notify Congress that he is making the call-up. DOD policy guidance regarding the use of reserves for peace operations requires that maximum consideration be given to the use of volunteers before involuntary activation is ordered. The President called up approximately 1,900 reservists to support the September 1994 military intervention in Haiti.
Prior to that call-up, PSRC had been invoked only once (for the Gulf War) since its enactment in 1976. The reserves were not activated for the operations in Grenada in 1983, Panama in 1989, or Somalia in 1992. According to senior Army officials, the Army requested involuntary access to reserves for Somalia through the Joint Chiefs of Staff, but the request was never presented to the President. An April 1994 DOD report on the accessibility of reserve forces notes that using PSRC authority would raise sensitive domestic and foreign policy concerns that require time to resolve before the President could be expected to decide when large numbers of reservists should be ordered to active duty. In that report, prepared before the Haiti intervention, DOD stated that the decisions not to invoke PSRC in Grenada, Panama, and Somalia supported the perception that PSRC had evolved into a de facto mobilization authority. As DOD's report notes, gaining involuntary access to reserve personnel for any mission is a sensitive matter. A reserve call-up has the potential to disrupt the lives of reservists, their families, and their employers or customers. According to DOD, many reservists assume that they would be called up for service only when vital interests of the United States are threatened, an assumption based on Cold War experiences and certain post-Cold War contingencies such as Desert Storm. U.S. Army Reserve Command officials advised us of their concern that involuntary use of the reserves for peace operations would be disruptive to reservists' lives and ultimately could affect the willingness of Americans to join the reserves. The Office of Reserve Affairs, within the Office of the Secretary of Defense, is examining the limits on and impediments to volunteerism and how the use of volunteers could be expanded. That office has identified several impediments that must be eliminated if DOD is to rely on expanded use of volunteers, including statutory provisions that deny some benefits to reservists on duty for less than 31 days, a lack of employer support, and a lack of funds, not currently included in the annual defense budget, to pay the costs of reservists on active duty. DOD reports that it is addressing a wide range of proposals for mitigating these and other impediments. DOD could also use contractors to augment support forces. The Army is already making greater use of contract personnel to provide many of the support services typically provided by its combat service support personnel. In Somalia, for example, the Army used the logistics civil augmentation program, under which a civilian contractor provides construction services and general logistics support. This reduced the Army's support requirement. The Army has also tasked this contractor with developing a worldwide logistics civil augmentation plan and a specific plan for a potential future deployment to Bosnia. The Bosnia plan describes the military support the contractor's personnel can provide and the types of military units they can replace. Use of the contractor entails additional costs, which in Somalia were paid first from the Marine Corps' and then from the Army's operations and maintenance budgets. In addition, Army officials said that in Somalia the contractor needed to use Army equipment to perform its tasks, which required taking equipment from Army units. The use of worldwide military assets could also ease the strain on military forces.
Peace operations had a number of negative impacts on USAFE because USAFE followed its traditional practice of meeting operational requirements with its own forces as much as possible. While USAFE's two F-15E squadrons and one A-10 squadron were heavily engaged in supporting peace operations, there were several active-duty F-15E and A-10 squadrons based in the United States that might have eased the strain by taking turns rotating aircraft and personnel to those operations. As noted earlier, DOD commented that the Air Force has recognized these challenges and is addressing them by relying more on active, reserve, and Guard units based in the continental United States. At present, one of the Navy's principal missions is to maintain forward presence around the world. Forward presence is also a key component of the national military strategy as described in the report on the bottom-up review. However, the extent of forward presence necessary is a matter of judgment for the Navy and the Joint Staff. DOD could change the required level of forward presence to relieve the strain on naval forces. This would require a significant military and diplomatic policy decision, and it could result in reduced crisis response capability and less opportunity to participate in multilateral exercises. The alternative to using defense resources differently is to accept the status quo and continue to treat peace operations as a secondary mission. The risk of accepting the status quo is that the strain on the military from its participation in peace operations would continue and could adversely affect the military's ability to respond to a MRC if one occurred while military forces were engaged in a sizable peace operation or several smaller ones. Whether the risk is acceptable depends in part on the frequency with which the United States engages in sizable peace operations and on the duration of these operations. Each operation is different in size, operating environment, and duration. For example, the operation in Somalia required large numbers of ground forces in an austere environment for over a year, while the Rwanda operation required smaller numbers of ground and airlift forces for several months. Estimates for a potential Bosnia deployment call for even larger numbers of ground forces in an austere environment for about 2 years. Other operations, such as enforcement of the no-fly zones over Iraq and Bosnia, have required aviation assets for an extended period but few ground forces. Whether the risk is acceptable also depends on the extent to which the services can mitigate it, for example, by using civilian contractor logistics support or some of the other options we have identified. Ultimately, however, if policymakers believe that the likelihood of U.S. involvement in large-scale, extended-duration operations is low, the risk may be much more acceptable than if they believe that the likelihood is high. Concerned about the bottom-up review and the defense budget, Congress directed DOD to review the assumptions and conclusions of the President's budget, the bottom-up review, and the Future Years Defense Program.
DOD is to review peace operations and report in detail on the force structure required to fight and win two MRCs nearly simultaneously while responding to other ongoing or potential operations. Consequently, we are not making recommendations in this report regarding reassessing the impact of participation in peace operations. We recently reported on the bottom-up review's assumptions concerning broader force structure issues, including the redeployment of forces from other operations to MRCs, the availability of strategic mobility, and the deployability of reserve combat forces. On another matter, however, we believe that, because of the Army's significantly reduced size, the staffing of support forces at 10 to 20 percent below their authorized levels needs to be reassessed. Consequently, we recommend that the Secretary of Defense direct the Secretary of the Army, as part of the Total Army Analysis 2003, to reexamine whether high-priority support units that would deploy early in a crisis should still be staffed at less than 100 percent of their authorized strength. DOD states that it is addressing the matter raised in our recommendation as part of the Total Army Analysis 2003. If the Army fully assesses the staffing of high-priority support units as part of that analysis, the intent of our recommendation would be met.
Pursuant to a congressional request, GAO provided information on the impact of peace operations on the U.S. military forces' capability to respond to regional conflicts, focusing on: (1) the force structure limitations that affect the military's ability to respond to other national security requirements while engaged in peace operations; and (2) the options available for increasing force flexibility and response capability. GAO found that: (1) peace operations have heavily stressed some U.S. military capabilities, including Army support forces and specialized Air Force aircraft; (2) because there are relatively few support forces in the military's active force, some of these units and personnel have been deployed to consecutive operations, the tempo of operations has increased, and the time available to prepare for combat missions has been reduced; (3) extended participation in multiple or large-scale peace operations could impede the services' ability to respond in a timely manner to major regional conflicts (MRC); (4) disengaging support units and specialized aircraft from a peace operation and redeploying them to a MRC could be more difficult than estimated because some of these units would need training and supplies before deploying to another major operation; (5) the options available to DOD for meeting the demands of peace operations while maintaining the capability to respond to MRC include changing the mix of active and reserve forces and making greater use of reserves and contractors; and (6) the United States needs to determine the resources it needs and the degree of risk it is prepared to take if it wishes to continue participating in sizable peace operations for extended periods and still maintain the capability to respond rapidly to simultaneous MRC.
The Knutson-Vandenberg (K-V) Trust Fund, authorized by the Act of June 9, 1930, as amended (16 U.S.C. 576-576b), allows portions of the receipts from timber sales to be deposited into the fund for use in reforesting timber sale areas. In addition to being used for planting trees, these deposits may also be used for eliminating unwanted vegetation and for protecting and improving the future productivity of the renewable resources on forest land in sale areas, including sale area improvement operations, maintenance, construction, and wildlife habitat management. Reforestation is needed where timber harvests or natural disasters have depleted the existing timber stands. In fiscal year 1997, about $166 million was expended from the K-V Fund for reforestation and related projects. The majority of the K-V moneys—about $115 million in fiscal year 1997—was used to fund direct reforestation activities. The remaining $51 million was used for costs incurred to support and manage the reforestation program, such as rents, utilities, computer equipment, and the salaries of program support staff. Federal law permits the Forest Service to transfer amounts from the K-V Fund, as well as from other Forest Service appropriations, to supplement the Forest Service's firefighting funds when emergencies arise. The Forest Service is authorized to advance money from any of its appropriations and trust funds to pay for fighting forest fires, but it is not authorized to restore the amounts so transferred; congressional action is required to restore such funds. The Forest Service's oversight and management of the K-V Fund and the reforestation program are decentralized. Forest Service headquarters and the nine regional offices establish policy and provide technical direction to forest offices. The forest offices, in turn, provide general oversight to district offices and help the districts plan K-V projects. The district ranger is responsible for overseeing the planning and implementation of K-V projects. Between 1990 and 1996, the Forest Service transferred about $645 million from the K-V Fund for emergency firefighting activities; these transfers had not been fully reimbursed, and the funds were therefore unavailable for K-V projects. In the past, when such transfers were made, the Department of Agriculture requested and received supplemental appropriations to restore the transferred moneys, generally within 2 years of the original transfer. More recently, however, the Department of Agriculture did not promptly submit a request for a supplemental appropriation to the Congress. Not until March 15, 1996, did the Department submit a request for supplemental appropriations to the Office of Management and Budget for the $420 million transferred during fiscal years 1990, 1992, and 1995. After an additional $225 million was transferred from the K-V Fund in 1996, the Congress, in 1997, provided $202 million from the emergency firefighting appropriation as a partial reimbursement of the K-V Fund. At the beginning of fiscal year 1998, the K-V Fund had an unrestored balance of about $493 million. To provide the Congress with the information it needs to consider any future requests for appropriations to restore previously transferred funds, we recommended that the Secretary of Agriculture report to the Congress on the financial status of the K-V Fund.
The Department of Agriculture has informed the Congress about the general dimensions of the K-V funding issue on several occasions, and that information has resulted in some replenishment of the K-V Fund. For example, the Fiscal Year 1997 Omnibus Appropriation Bill provided additional appropriations for emergency firefighting, and $202 million was apportioned to the K-V Fund in January 1997. In addition, the Department has begun providing the Congress with information on the K-V Fund balance at the beginning of each fiscal year, expected K-V collections during the year, and expected K-V expenditures so that the impact of future firefighting transfers can be assessed.

Although the Forest Service acknowledged that failure to restore the amounts transferred from the K-V Fund could disrupt the K-V program, forest and district offices continued to operate and plan for future reforestation projects as if the transfers had not occurred. Furthermore, the Forest Service had not informed the Congress of the impact that the funding shortfall would have on the agency's reforestation activities, nor had it developed a plan or strategy for reallocating the remaining funds to the highest-priority projects. Although timber receipts of as much as $200 million have been added to the fund annually, the Forest Service will not be able to pay for all of its planned projects, estimated in fiscal year 1996 at about $942 million, unless the moneys transferred from the K-V Fund for firefighting purposes are restored.

We recommended that if the administration decided not to forward to the Congress the Department's request for restoration of the funds transferred for firefighting purposes, or if the Congress decided not to restore these funds during the fiscal year 1997 budget considerations, the Secretary of Agriculture should direct the Chief of the Forest Service, by the end of fiscal year 1997, to revise the list of planned K-V projects to take into account the actual balance in the K-V Fund. The Department has not implemented this recommendation; it believes that the Forest Service had sufficient funding to meet all K-V requirements for 1998 and that revising the list of K-V projects downward to match the reduced K-V funding would be both speculative and not credible. The Department added that it would not require such a list until it was certain that K-V funding for the year was inadequate. In that event, it would provide the Congress with a generic description of the types of K-V activities that would be dropped.

The K-V Act requires that K-V Fund expenditures in any one sale area not exceed the amount collected in that sale area. To facilitate the management of K-V projects and the accounting for K-V funds, however, the Forest Service allows each forest to pool its K-V collections from each timber sale into a forest-level fund, commonly called a K-V pool. At the end of each fiscal year, each forest is required to create a balance sheet showing the cash available for its K-V projects, the projected collections from ongoing sales, and the estimated costs for planned projects. The Forest Service does not have the financial management information and controls needed to ensure compliance with the K-V Act prohibition limiting K-V Fund expenditures on individual sale areas to the collections from those same sale areas: collections are recorded for individual sales, whereas expenditures are managed and recorded in total at the district level rather than by individual sales.
By allowing each forest to pool K-V collections without adequate financial controls and information, the Forest Service cannot ensure that trust fund expenditures do not exceed collections for a given sale area. We recommended that the Secretary of Agriculture direct the Chief of the Forest Service to perform, in consultation with the Chief Financial Officer, an analysis of alternatives (including the costs and benefits of each alternative) to obtain the financial data necessary to ensure that the K-V Fund's expenditures in one sale area are limited to the amounts collected from that area, as required by the K-V Act. The Secretary of Agriculture did not request that the Forest Service analyze alternatives to the sale-by-sale accounting system that would ensure compliance with the K-V Act. The Secretary indicated that he did not believe such an analysis was necessary and that the current Forest Service methods fulfilled the requirements of the K-V Act. We continue to believe that the Forest Service's current information systems and controls do not provide assurance that the expenditures in one sale area do not exceed the collections from that sale area, as required by law.

The Forest Service collects a certain amount of K-V funds on each timber sale to pay for the costs of supporting the program at all organizational levels. The regions and forests issue guidance that specifies the percentage of K-V funds that should be collected from individual sale areas to support the program at the forest, regional, and Washington offices. The agency's overall guidance, however, does not explain how individual regions or forests should calculate and limit the amounts for program support. If the allocations for support costs are not limited to the amounts collected, funds available for project expenditures in sale areas could be insufficient. Only one forest we visited during our 1996 review limited its use of K-V funds for program support to the amounts collected for that purpose. For three of the forests, the regions did not restrict their expenditures for program support to the amounts that had been collected, nor did the forests limit the amounts spent for program support at the forest level. For example, if a project costs $100, the forest might instruct the district to collect an additional 20 percent of the project's cost, or $20, to cover the cost of supporting the program. When the forest allocated funds for a project to the district, it withheld funds to cover the forest's support costs. However, rather than limiting these withholdings—to continue our example—to 20 percent of the project's cost, or $20, the forest would withhold 20 percent of the total collection ($120), or $24. This method of determining support costs would reduce the amount available for project work to $96, $4 less than the projected need (the short sketch below restates this arithmetic).

We recommended that the Secretary of Agriculture direct the Chief of the Forest Service to require all organizational levels to use a standardized methodology for assessing and withholding the support costs for the K-V program that would limit expenditures for program support to the amounts collected for such purposes. The Secretary of Agriculture directed the Chief of the Forest Service to establish a standardized methodology for assessing and withholding program support costs for the K-V program, and the Forest Service formed a task force to recommend what that standardized methodology should be.
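To make the support-cost arithmetic above concrete, the following minimal sketch restates it (illustrative only; the function names, the flat 20 percent rate, and the use of Python are our own devices for exposition, not Forest Service practice or policy):

    SUPPORT_RATE = 0.20  # assumed flat 20 percent support-cost collection rate

    def total_collected(project_cost):
        # Amount collected for a sale-area project: project cost plus support.
        return project_cost * (1 + SUPPORT_RATE)      # $100 project -> $120

    def project_funds_observed(project_cost):
        # Method GAO observed: the rate is applied to the total collection,
        # so the withholding ($24) exceeds the $20 collected for support.
        total = total_collected(project_cost)
        return total - SUPPORT_RATE * total           # 120 - 24 = 96

    def project_funds_limited(project_cost):
        # Withholding limited to the amount actually collected for support.
        total = total_collected(project_cost)
        return total - SUPPORT_RATE * project_cost    # 120 - 20 = 100

    print(project_funds_observed(100.0))  # 96.0  -> $4 short of the $100 need
    print(project_funds_limited(100.0))   # 100.0 -> fully funds the project

Under either method the forest recovers its support costs; the difference is whether the resulting shortfall is pushed onto the sale-area project.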
The task force completed its work in November 1997, and the Forest Service estimates that the corrective action will be fully implemented when the recommended changes become part of the agency's directives in September 1998. Mr. Chairman, on the basis of the Department of Agriculture's response, it appears that the Department has taken positive actions on our recommendations to better inform the Congress about the magnitude of transfers from the K-V Fund for firefighting purposes and to establish a standardized methodology for assessing and withholding program support costs for the K-V program. The Department of Agriculture has not implemented our recommendations concerning revising the list of K-V projects downward because of inadequate funding or performing an analysis of alternatives to a sale-by-sale accounting of K-V Fund expenditures. We continue to believe that action is needed in these areas. We will be pleased to respond to any questions that you or the Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed the shortcomings in the Forest Service's administration of the Knutson-Vandenberg Trust Fund (K-V Fund), focusing on the: (1) transfers from the K-V Fund that have not been fully restored; (2) effect of unrestored transfers on planned projects; (3) lack of financial information to ensure compliance with the K-V Act requirements; and (4) lack of a standardized methodology for calculating and limiting program support costs. GAO noted that: (1) between 1990 and 1996, $645 million was transferred from the K-V Fund to support emergency firefighting activities and was not reimbursed; (2) to assist Congress in its consideration of any future requests for appropriations to restore previously transferred funds, GAO recommended that the Secretary of Agriculture report to Congress on the financial status of the K-V Fund; (3) the Department has begun providing Congress with additional information on the financial status of the K-V Fund; (4) in fiscal year 1997, Congress acted upon that information by providing $202 million to partially repay moneys transferred from the K-V Fund; (5) the Secretary of Agriculture has not directed the Forest Service to revise the list of planned K-V projects to take into account the actual balance in the K-V Fund; (6) although the K-V Act requires that K-V Fund expenditures in one sale area be limited to amounts collected in the same area, the Forest Service does not collect expenditure data on a sale-by-sale basis; (7) GAO recommended that the Secretary of Agriculture direct the Forest Service to perform an analysis of alternatives to obtain the financial data necessary to ensure that the K-V Fund's expenditures in one sale area are limited to the amounts collected from that area, as required by the K-V Act; (8) the Secretary of Agriculture indicated that such an analysis was not necessary and that the current Forest Service methods fulfilled the requirements of the K-V Act; (9) at the time of GAO's 1996 report, the Forest Service did not have a system in place to ensure the consistent handling of program support charges for the K-V program agencywide; and (10) since that time, the Forest Service has completed an analysis of the methodological changes needed to standardize its practices for assessing and withholding program support costs for the K-V program, and the results of that work should be implemented when the practices become part of the Forest Service's directives in September 1998.
The Parole Commission and Reorganization Act of 1976 established USPC as an independent agency within DOJ. In particular, the act required USPC to develop rules and regulations establishing guidelines for (1) granting or denying an application or recommendation to parole any eligible prisoner, (2) imposing reasonable conditions on an order granting parole, and (3) modifying or revoking an order paroling any eligible prisoner. In addition, the act required USPC to enact other rules and regulations as necessary to carry out a national parole policy. Certain functions of USPC changed after the federal and D.C. governments abolished parole and enacted a new sentencing structure for certain offenses that included a new form of post-incarceration supervision, called supervised release, as discussed below.

The Sentencing Reform Act of 1984 abolished parole for federal offenders convicted of crimes committed on or after November 1, 1987, and, under a new sentencing structure for certain offenses, introduced supervised release for these offenders. Generally, federal offenders convicted of crimes committed before November 1, 1987, are eligible for parole, and USPC has jurisdiction to determine whether to grant or deny parole for these offenders. Federal offenders convicted of crimes committed on or after November 1, 1987, now receive determinate sentences—a definite term of imprisonment, followed in most cases by a period of supervised release, which may continue for a number of years. Additionally, under the Sentencing Reform Act, a federal court, in imposing a sentence to a term of imprisonment for a felony or a misdemeanor, may include as part of the sentence a requirement that the offender be placed on a term of supervised release after imprisonment, as well as modify, terminate, extend, or revoke a term of supervised release. Thus, USPC does not have jurisdiction for supervision and revocation decisions for federal offenders subject to terms of supervised release under the new determinate sentencing structure. With the abolition of parole for federal offenders, it was expected that the existing functions of USPC—granting parole, determining and modifying parole conditions, and revoking parole—would apply to a limited and diminishing class of federal offenders sentenced under the old sentencing law who were on or otherwise eligible for parole. Thus, the act provided for the eventual abolition of USPC.

The expectation that USPC would carry out only residual functions that eventually would disappear, or readily could be assigned elsewhere, changed with the enactment of the National Capital Revitalization and Self-Government Improvement Act of 1997 (Revitalization Act), Pub. L. No. 105-33, 111 Stat. 712 (1997). The Revitalization Act, along with related D.C. legislation, instituted reforms in the sentencing and supervision structure for D.C. offenders, which in many respects were similar to those that the Sentencing Reform Act of 1984 had established for federal offenders. The Revitalization Act required the District of Columbia to move to a determinate sentencing structure for certain offenses and abolished parole. Also, it provided for terms of supervised release to follow the determinate sentences to be imposed. Further, it provided USPC with ongoing jurisdiction for supervision and revocation decisions for D.C. Code offenders subject to terms of supervised release under the new determinate sentencing structure. In August 2000, the District of Columbia enacted a determinate sentencing system and abolished parole for D.C.
offenders convicted of crimes committed on or after August 5, 2000. These offenders now receive determinate sentences, followed in most cases by a period of supervised release. According to DOJ, during the period of supervised release, the offenders' behavior is to be closely monitored under conditions that USPC determines in order to protect public safety and maximize the likelihood of successful reentry into society. D.C. offenders convicted of crimes committed before August 5, 2000, are under the jurisdiction of USPC and are on or are eligible for parole.

USPC was last reauthorized in 2013 because it still has offenders under its jurisdiction, including federal and D.C. offenders on or eligible for parole and D.C. offenders on supervised release or serving a prison sentence that includes supervised release. This reauthorization expires on November 1, 2018. Offenders under USPC's jurisdiction currently include the following: federal offenders on or eligible for parole; other federal offenders, including transfer treaty offenders, military offenders, and certain Federal Witness Protection Program offenders; D.C. offenders on or eligible for parole; and D.C. offenders on supervised release or serving a prison sentence that includes supervised release. Regarding these offenders, USPC currently has responsibility for: holding hearings regarding release decisions for federal, transfer treaty, military, and D.C. offenders; making determinations regarding the initial conditions of supervised release for D.C. offenders, managing these offenders' risk in the community, modifying the conditions of supervision for changed circumstances, discharging offenders from supervision early, and issuing warrants or summonses for violations of the conditions; and revoking the release of federal, military, witness protection, and D.C. offenders on parole and D.C. offenders on supervised release.

As figure 1 shows, from fiscal years 2002 through 2014, the total number of offenders under USPC's jurisdiction declined 26 percent, from about 23,000 to about 17,100. Specifically, the number of federal and D.C. offenders on or eligible for parole declined; however, the number of D.C. offenders on supervised release or serving a prison sentence that includes supervised release generally increased. Table 1 in appendix I provides data on the annual variation in these numbers during this period, and the subsections below elaborate on the trends. As figure 2 illustrates, from fiscal years 2002 through 2014, the overall number of federal offenders either on or eligible for parole declined 67 percent, from about 7,200 to about 2,400, following the abolition of federal parole in 1987. In particular, over the period shown in the figure, the number of federal offenders on parole declined 68 percent, from about 4,000 to about 1,300. Similarly, the number of federal offenders eligible for parole declined 66 percent, from about 3,200 to about 1,100. Table 2 in appendix I provides the annual variation in these numbers from fiscal years 2002 through 2014 for this population. Figure 3 provides information on the number and composition of federal offenders under USPC's jurisdiction for the 5 most recent fiscal years. As the figure illustrates, the overall number of federal offenders under USPC's jurisdiction declined 27 percent from fiscal years 2010 through 2014, mainly because of the decline in the number of federal offenders on parole.
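The percentage changes reported here and in the next section follow directly from the rounded offender counts; a minimal check (illustrative only; the helper function is our own, not part of any USPC or GAO system):

    def pct_change(start, end):
        # Percent change from a starting count to an ending count.
        return (end - start) / start * 100

    # Rounded counts from the text, fiscal years 2002 through 2014:
    print(pct_change(23000, 17100))  # about -26: all offenders under USPC
    print(pct_change(7200, 2400))    # about -67: federal, on or eligible for parole
    print(pct_change(4000, 1300))    # about -68: federal offenders on parole
    print(pct_change(3200, 1100))    # about -66: federal offenders eligible for parole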
Table 3 in appendix I provides data on the annual variation in the federal offender numbers from fiscal years 2002 through 2014. As figure 4 illustrates, from fiscal years 2002 through 2014, the number of D.C. offenders on or eligible for parole declined 74 percent, from about 14,100 to about 3,700, following the abolition of parole for D.C. offenders in 2000. In particular, over the period shown in the figure, the number of D.C. offenders on parole declined 70 percent, from about 7,400 to about 2,200. Similarly, the number of D.C. offenders eligible for parole declined 78 percent, from about 6,700 to about 1,500. Table 4 in appendix I provides the annual variation in these numbers from fiscal years 2002 through 2014.

Figure 5 illustrates that, from fiscal years 2002 through 2014, following the introduction of supervised release in 2000, the total number of D.C. offenders on supervised release or serving a prison sentence that includes supervised release increased 606 percent, from about 1,700 to about 12,000 in fiscal year 2011, and then slightly declined but remained above 11,000 through fiscal year 2014. In particular, over the period shown in the figure, the number of D.C. offenders on supervised release increased 600 percent, from about 900 to about 6,300 in fiscal year 2011, and then slightly declined but remained above 5,700 through 2014. Similarly, the number of D.C. offenders serving a prison sentence that includes supervised release increased 613 percent, from about 800 to about 5,700 in fiscal year 2011, and then slightly declined but remained above 5,300 through 2014. Table 5 in appendix I provides the annual variation in these numbers from fiscal years 2002 through 2014.

According to officials from USPC and criminal justice partner organizations we interviewed, any organization accepting the transfer of USPC's jurisdiction over D.C. offenders would need to have three key organizational characteristics in place for such a transfer to be feasible. Because no existing entity currently possesses these characteristics, a transfer of USPC's jurisdiction to another entity is not feasible without altering the characteristics of an existing entity or establishing a new organization. Doing so would pose challenges related to estimating costs and assessing impacts on decision making. According to officials we interviewed from USPC and CSOSA, in order for another entity to feasibly assume USPC's jurisdiction for D.C. offenders, the entity would need to have the following key organizational characteristics in place: (1) relevant statutory authority; (2) specialized processes, procedures, and personnel; and (3) formal agreements with other organizations concerning decisions for parole and supervised release cases. We found that none of the other 17 criminal justice entities currently involved with D.C. offenders possesses any of the three:

Relevant statutory authority. The Revitalization Act specifies different organizations' responsibilities over D.C. offenders, including USPC's jurisdiction for parole and supervised release decisions regarding these offenders. None of the other 17 entities we assessed has similar authority. Further, USPC derives from existing statute its power to subpoena witnesses for parole and supervised release revocation hearings. Thus, any organization assuming USPC's functions would likewise need relevant statutory authority to do so.

Specialized processes, procedures, and personnel.
USPC has mechanisms in place, as well as the appropriate expertise, for handling and hearing parole and supervised release cases. No other organization that could potentially assume USPC's jurisdiction already has these same mechanisms in place. For example, USPC's standard operating procedures describe the hearing processes and the specific steps hearing officials are required to take before making recommendations to the Parole Commissioners for parole and supervised release cases. Additionally, according to USPC officials, because USPC is the only organization in the federal government that makes parole and supervised release decisions based on federal and D.C. statutes, its personnel have developed over time the expertise necessary to carry out its responsibilities.

Formal agreements with other organizations concerning parole and supervised release decisions. USPC has formal agreements with other criminal justice partners concerning its decisions for parole and supervised release cases. Thus, an entity absorbing USPC's jurisdiction would need to establish these formal agreements anew and stipulate roles. For example, according to USPC officials, USPC conducts many decision hearings for D.C. offenders who have been released outside of the D.C. metro area. In order to conduct those hearings, USPC leverages its formal agreement with U.S. Probation and Pretrial Services to ensure that it has access to offenders and to information on these offenders. Further, according to CSOSA officials, their agency's understanding with USPC is formalized in an interagency agreement. They noted that any transfer of jurisdiction would require a new, formalized agreement with the entity accepting USPC's jurisdiction to ensure ongoing and successful D.C. offender management. Further, according to USPC officials, formal agreements are reinforced with statutory authorities; thus, any entity assuming USPC's functions would need authority to enter into formal agreements with partner agencies for the management of D.C. offenders.

Given that no entity currently involved in the criminal justice system has the structure in place to absorb USPC's jurisdiction for D.C. offenders, the transfer of jurisdiction is not feasible without altering the characteristics of an existing entity or establishing a new organization. Altering or establishing a new entity poses challenges in estimating costs and assessing impacts on decision making. Altering the characteristics of an existing entity or establishing a new organization with the structure to assume USPC's jurisdiction could involve an initial outlay of expenditures in order to begin operations. For example, start-up costs could include, among others, costs related to hiring and training personnel; renting or building work space; and establishing processes, procedures, and an infrastructure of technology. Such initial costs could possibly be offset by longer-term savings attributable to reductions in USPC operations. However, because it is difficult to estimate both the specific start-up costs involved and any projected efficiencies resulting from a modification to USPC's jurisdiction, the net cost effect is difficult to estimate as well. Estimating the dollar amount of start-up costs is challenging on several fronts. We spoke with representatives from the D.C.
Office of the Deputy Mayor for Public Safety and Justice, which has oversight of D.C.'s criminal justice agencies and thus could have responsibility for supervising a new entity if the entity were housed in the D.C. government. According to these officials, they are not well positioned to generate a cost estimate for creating a new entity, for three reasons. First, D.C. has not recently established an entirely new organization, so these officials had no example upon which to base an estimate. Second, they noted that D.C. had recently consolidated several agencies and that this process resulted in increased costs. Finally, they explained that estimates of this type are often required years in advance in order to secure the necessary statutory and funding changes from the city council and Congress.

Estimating the efficiencies projected to result from a modification to USPC's jurisdiction is also difficult. This is mainly because USPC would still incur operating costs related to its authorities over federal offenders even after its jurisdiction over D.C. offenders was transferred. In addition, USPC's operations with respect to D.C. offenders would need to continue for some amount of time before a new or altered entity would be ready to assume its responsibilities; thus, there would be some overlap of expenses before funding shifted. It is also important to note that D.C. government organizations that could oversee a new or altered entity already rely on federal funding. For example, the Superior Court of the District of Columbia received about $115 million in funding from the federal government in fiscal year 2014. Additionally, the Public Defender Service for the District of Columbia's Parole Division, which, among other things, provides representation to D.C. offenders facing revocation before USPC, received about $40 million from the federal government in fiscal year 2014 to do so. Thus, withdrawing funding from USPC to provide additional federal funds to another organization that already receives federal funding would likely just shift the federal burden. On the other hand, if the new or altered entity could perform USPC's functions related to D.C. offenders more efficiently and at a lower cost than USPC, then federal savings might be realized. Assessing USPC's current operational expenses and analyzing where efficiencies could be achieved would require a thorough evaluation. Such an evaluation, and the implementation of any changes resulting from it, could require upfront costs. Thus, it is difficult to estimate whether any longer-term savings could be achieved after the initial investments to alter an existing entity or establish a new organization.

Altering the characteristics of an existing entity or establishing a new organization with the structure to assume USPC's jurisdiction could initially result in delays in decision making, but the longer-term impact is also challenging to assess. For example, the altered or newly created organization would need time to start up and establish formal agreements with its criminal justice partners, such as CSOSA. According to USPC and CSOSA officials, this could result in the other entity experiencing delays in making parole or supervised release decisions. These officials further stated that such delays could result in increased incarceration and supervision costs, risks of litigation, and threats to public safety.
Specifically, according to USPC and CSOSA officials, because offenders cannot be released until the organization processes decisions, delays in decision making by the other entity could result in, for example, offenders staying incarcerated longer and incurring higher housing-related costs. These officials also stated that if offenders are under supervision and a decision to revoke their status is delayed, this could result in more supervision-related expenses. Further, according to USPC and CSOSA officials, when decisions are made outside the statutory time frames because of delays, offenders could be positioned to file lawsuits, which could result in additional costs related to, for example, defending the organization's actions. Finally, according to USPC officials, if courts were to rule in favor of those offenders, the courts could order early release or reduced sentences for them, which could result in public safety threats.

We requested comments on a draft of this report from DOJ, CSOSA, and the D.C. government. They did not provide written comments. USPC and CSOSA provided technical comments, which we incorporated into the draft as appropriate. If you or your staff have any questions about our work, please contact me at (202) 512-9627 or MaurerD@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Table 6 describes the federal and local agencies involved with District of Columbia offenders. We identified these entities by reviewing our prior work on the District's criminal justice system and other information on the key responsibilities of these federal and District criminal justice organizations.

David C. Maurer, (202) 512-9627 or MaurerD@gao.gov. In addition to the contact named above, Joy Booth (Assistant Director); David Alexander; Pedro Almoguera; Willie Commons, III; Emily Gunn; Eric Hauswirth; Susan Hsu; Katherine Lee; and Juan Tapia-Videla made key contributions to this report.
USPC was established in 1976, in part to carry out a national parole policy that would govern the release of offenders to community supervision prior to completing their full custody sentences. USPC's budget is just over $13 million for fiscal year 2015. Over time, changes in laws have abolished parole and introduced supervised release—a new form of postincarceration supervision. As a result, USPC has been reauthorized and has authority to grant and revoke parole for eligible federal and D.C. offenders and to revoke supervised release for D.C. offenders violating the terms of their release. USPC's current authorization is set to expire in 2018. This report addresses (1) changes in the number of offenders under USPC's jurisdiction from fiscal years 2002 through 2014 and (2) the organizational characteristics needed for an entity to feasibly assume jurisdiction of D.C. offenders from USPC, and the feasibility and implications of such a transfer. GAO analyzed USPC data on federal and D.C. offenders from fiscal years 2002-2014—the most recent years for which reliable data were available—as well as DOJ reports on USPC and USPC policies, and determined that the data were sufficiently reliable for GAO's purposes. GAO also discussed with USPC and some of its criminal justice partners the feasibility of transferring USPC's jurisdiction for D.C. offenders and any related challenges.

From fiscal years 2002 through 2014, the total number of offenders under the Department of Justice's (DOJ) U.S. Parole Commission's (USPC) jurisdiction declined 26 percent, from about 23,000 to about 17,100. Specifically, following the abolition of parole, the number of offenders on or eligible for parole declined 67 percent among federal offenders and 74 percent among D.C. offenders. However, following the introduction of supervised release, the number of D.C. offenders on supervised release or serving a prison sentence that includes supervised release increased 606 percent from fiscal year 2002 to fiscal year 2011 and then slightly declined through fiscal year 2014.

Transferring USPC's jurisdiction for D.C. offenders would require that an entity have three key organizational characteristics, and altering an existing entity or establishing a new one to meet them poses challenges. Based on GAO's discussions with officials from USPC and other organizations, including those from the D.C. government, these three key organizational characteristics are: statutory authority for asserting jurisdiction over D.C. offenders; processes, procedures, and personnel in place for handling parole and supervised release cases; and formal agreements with other criminal justice organizations for making parole and supervised release decisions. GAO identified 17 criminal justice entities with the potential to assume USPC's jurisdiction for D.C. offenders; however, none currently possesses the three key organizational characteristics. Thus, transferring jurisdiction is not feasible without altering an existing entity or establishing a new one, and doing so would pose challenges related to estimating costs and assessing impacts on decision making. GAO is not making any recommendations.
DHS invests in a wide array of complex acquisitions to achieve its national security mission. DHS components and offices sponsor investments to address mission capability gaps and are the end users of the developed acquisitions. DHS has stated that the Undersecretary for Management, as the Chief Acquisition Officer, is responsible for acquisition policy. The purpose of DHS's investment review and budget processes is to provide oversight of these major investments. Specifically, DHS established the investment review process in 2003 to help protect its major investments by providing departmental oversight throughout their life cycles and to help ensure that funds allocated for investments through the budget process are spent wisely, efficiently, and effectively. In 2005, we reported that this process adopted many acquisition best practices that, if applied consistently, could help increase the chances for successful outcomes. However, we noted that incorporating additional program reviews and knowledge deliverables into the process could better position DHS to make well-informed decisions. In 2007, we further reported that DHS had not fully defined and documented policies and procedures for investment management or fully implemented key practices needed to control its information technology (IT) investments. To strengthen DHS's investment management capability, we recommended that the department fully define and document project- and portfolio-level policies and procedures and implement key control processes.

In addition to the investment review process, the DHS budget process serves as the framework for decision making for ongoing and future DHS programs. The framework is cyclic, consisting of planning, programming, budgeting, and execution phases that examine existing program funding and link the funding to program performance to ensure that funds are expended appropriately and that they produce the expected results and benefits.

The investment review process manages investment risk through an organized, comprehensive, and iterative approach to identifying; assessing; mitigating; and continuously tracking, controlling, and documenting risk tailored to each project. The investment review process has four main objectives: (1) identify investments that perform poorly, are behind schedule, are over budget, or lack capability, so officials can identify and implement corrective actions; (2) integrate capital planning and investment control with resource allocation and investment management; (3) ensure that investment spending directly supports DHS's mission and identify duplicative efforts for consolidation; and (4) ensure that DHS conducts required management, oversight, control, reporting, and review for all major investments. The process requires event-driven decision making by high-ranking executives at a number of key points in an investment's life cycle. The investment review process provides guidance to components for all DHS investments, but it requires formal department-level review and approval only for major investments—those categorized as level 1 or 2 (see table 1). The investment review process has two types of reviews: programmatic and portfolio. Programmatic reviews are held at specific milestones and require documentation and discussion commensurate with the investment's life cycle phase.
These reviews contribute to the investment review goal of identifying investments that perform poorly, are behind schedule, are over budget, or lack capability, so officials can identify and implement corrective actions. Portfolio reviews are designed to identify efforts for consolidation and mission alignment by monitoring and assessing broad categories of investments that are linked by similar missions to ensure effective performance, minimization of overlapping functions, and proper funding. The IRB and JRC are responsible for reviewing level 1 and level 2 investments, respectively, at key milestone decision points, but no less than annually, and for providing strategic guidance (see table 2). In addition to requiring department-level review, DHS policy directs component heads to conduct appropriate management and oversight of investments and to establish processes to manage approved investments at the component level.

The investment review process has three broad life cycle stages, covering five investment phases and four decision points, or milestones (see fig. 1). In the preacquisition stage, gaps are to be identified and capabilities to address them defined. In the first phase of the acquisition stage—concept and technology development—requirements are to be established and alternatives explored. In the next phase—capability development and demonstration—prototypes are to be developed. In the final acquisition phase, the assets are produced and deployed. Given the high dollar thresholds and inherent risk of level 1 and level 2 investments, IRB or JRC approval at milestone decision points is important to ensure that major investment performance parameters and documentation are satisfactorily demonstrated before the investment transitions to the next acquisition phase. IRB and JRC milestone reviews are not required once an investment reaches the sustainment phase.

As designed, knowledge developed during each investment phase is to be captured in key documents and is to build throughout the investment life cycle. Performing the disciplined analysis required at each phase is critical to achieving successful outcomes. The main goals of the first investment phase, program initiation, are to determine gaps in capabilities and then to describe the capabilities needed to fill those gaps; this information is captured in the mission needs statement. If the mission needs statement is approved, the investment moves to the concept and technology development phase, which focuses on setting both requirements and important baselines for managing the investment throughout its life cycle. A key step in this phase is translating needs into specific operational requirements, which are captured in the operational requirements document. Operational requirements provide a bridge between the functional requirements of the mission needs statement and the detailed technical requirements that form the basis of the performance specifications, which will ultimately govern development of the system. Once the program has developed its operational requirements document, it uses these requirements to inform the development of its acquisition program baseline, a critical document that addresses the program's critical cost, schedule, and performance parameters and is expressed in measurable terms. See figure 2 for a description of the documents.
The department's budget policy has two main objectives: (1) to articulate DHS goals and priorities and (2) to develop and implement a program structure and resource planning to accomplish DHS goals. DHS uses the process to determine investment priorities and allocate resources each year. The budget process emphasizes the importance of ensuring that investments expend funds appropriately and that investment performance produces the expected benefits or results. IRB decisions and guidance regarding new investments are to be reflected, to the extent possible, in any iteration of the budget as appropriate. The Office of the Chief Financial Officer (CFO) manages the budget process.

DHS has not effectively implemented or adhered to its investment review process because of a lack of involvement by senior officials as well as limited resources and monitoring; consequently, DHS has not identified and addressed cost, schedule, and performance problems in many major investments. Poor implementation largely rests on DHS's inability to ensure that the IRB and JRC effectively carried out their oversight responsibilities. Of 48 major investments requiring department-level review, 45 were not reviewed in accordance with the department's investment review policy, and 18 were not reviewed at all. In the absence of IRB and JRC meetings, investment decisions were reached outside of the required review process. Moreover, when IRB meetings were held, DHS did not consistently enforce the decisions that were reached because the department did not track whether components and offices took the actions required by the IRB. In addition, 27 major investments had not developed or received DHS approval for the basic acquisition documents required to guide and measure the performance of both program activities and the investment review process. Of those, over a third reported cost, schedule, or performance breaches between fiscal year 2007 and the second quarter of fiscal year 2008. According to DHS representatives, acquisition management practices are still new to many components, and we found that 24 investments lacked the certified program managers needed to develop basic acquisition documents. We also found that two of the nine components we reviewed did not have the required component-level review processes to manage their major investments adequately. DHS has recognized these deficiencies and began efforts in 2007 to clarify and better adhere to the investment review process.

Of DHS's 48 major investments requiring department-level review between fiscal year 2004 and the second quarter of fiscal year 2008, only three had all milestone and annual reviews. Of the 39 level 1 investments requiring IRB review and approval to proceed to the next acquisition phase, 18 had never been reviewed by the IRB as of March 2008—4 of which had already reached production and deployment. The remaining 21 level 1 investments received at least one milestone or annual review through the investment review process. None of the 9 level 2 investments had JRC review and approval. DHS policy provides that its major investments be reviewed no less than yearly. However, in fiscal year 2007, the most recent year for which data were available, only 7 of the 48 required annual reviews were conducted. As a result, DHS lacked the information needed to address cost, schedule, and performance deficiencies—a problem we identified with over one-third of DHS's major investments between fiscal year 2007 and the second quarter of fiscal year 2008.
In our prior work on the Department of Defense (DOD), we found that when such reviews are skipped or not fully implemented, programs build momentum and move toward product development with little if any early department-level assessment of their costs and feasibility. Committing to programs before they have this knowledge contributes to poor cost, schedule, and performance outcomes. The DHS level 1 investments that were never reviewed through the IRB process include some of the department's largest investments with important national security objectives. For example, the Federal Emergency Management Agency's (FEMA) Consolidated Alert and Warning System, which has estimated life-cycle costs of $1.6 billion, includes programs to update the Emergency Alert System and other closely related projects. In 2007, we reported that FEMA faces technical, training, and funding challenges in developing an integrated alert and warning system. Customs and Border Protection's (CBP) Secure Freight Initiative, which has estimated life-cycle costs of $1.7 billion, is designed to test the feasibility of scanning 100 percent of U.S.-bound cargo containers with nonintrusive equipment and radiation detection equipment at foreign seaports. Earlier this year, we reported that the Secure Freight Initiative faces a number of challenges, including measuring performance outcomes, the logistical feasibility of some aspects of the investment, and technological issues. While these two investments are still in the concept and technology development phase, other major investments that have not been reviewed are even further along in the investment life cycle—when problems become more costly to fix. For example, CBP's Western Hemisphere Travel Initiative, with estimated life-cycle costs of $886 million, is in capability development and demonstration. The investment aims to improve technologies to identify fraudulent documentation at U.S. ports of entry. We recently reported that because key elements of planning for the investment's management and execution remain uncertain, DHS faces challenges in deploying technology and in staffing and training officers to use it.

Reviews of the 9 level 2 investments—those with acquisition costs between $50 million and $100 million, or $100 million to $200 million for information technology—were similarly lacking. While the JRC met periodically between fiscal years 2004 and 2006, senior officials stated that it did not make approval decisions about any level 2 investments. As a result, investments such as the following—all now in the operations and support phase—were not reviewed and approved by the JRC:

FEMA's Total Asset Visibility, which has $91 million in estimated life-cycle costs, aims to improve emergency response logistics in the areas of transportation, warehousing, and distribution.

The Transportation Security Administration's (TSA) Hazardous Threat Assessment Program, which has $181 million in estimated life-cycle costs, was developed to perform security threat assessments on applicants for licenses to transport hazardous materials.

The National Protection and Programs Directorate's National Security and Emergency Preparedness investment, which has $1.8 billion in estimated life-cycle costs, aims to provide specially designed telecommunications services to the national security and emergency preparedness communities in the event of a disaster if conventional communication services are ineffective.
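As a rough illustration of the dollar thresholds just described, the following sketch classifies an investment by acquisition cost. It is illustrative only: the text states only the level 2 ranges, so the level 1 bound and the fall-through to levels 3 and 4 are our assumptions, and DHS policy (table 1) governs the actual scheme. Note, too, that the life-cycle cost figures cited above are not acquisition costs, so they should not be fed into this function.

    def investment_level(cost_millions, is_it):
        # Level 2 ranges stated in the text: $50M-$100M in acquisition costs,
        # or $100M-$200M for information technology investments.
        low, high = (100, 200) if is_it else (50, 100)
        if cost_millions > high:
            return 1  # assumption: costs above the level 2 range are level 1
        if cost_millions >= low:
            return 2  # the ranges stated in the text
        return 3      # assumption: smaller investments fall to levels 3/4,
                      # which component heads approve and oversee themselves

    # Hypothetical acquisition costs, not figures from the report:
    print(investment_level(75, is_it=False))   # 2
    print(investment_level(150, is_it=True))   # 2
    print(investment_level(500, is_it=False))  # 1 (under the assumed bound)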
During 2006, the JRC stopped meeting altogether after the chair was assigned to other duties within the department. DHS representatives recognized that since the JRC stopped meeting in 2006, there has been no direction for requirements or oversight of level 2 investments at the department level, and that strengthening the JRC is a top priority. In the meantime, oversight of level 2 investments has devolved to the components. Without the appropriate IRB and JRC milestone reviews, DHS loses the opportunity to identify and address cost, schedule, and performance problems and, thereby, minimize program risk. Fourteen of the investments that lacked appropriate review through IRB and JRC oversight experienced cost growth, schedule delays, and underperformance—some of it substantial. At least 8 investments reported cost growth between fiscal year 2007 and the second quarter of fiscal year 2008 (see table 3). Other programs experienced schedule delays and underperformance. For example, CBP's Automated Commercial Environment program reported a 20 percent performance shortfall in the first quarter of fiscal year 2008. Moreover, we reported in July 2008 that the Coast Guard's Rescue 21 program changed its acquisition baseline (its cost, schedule, and performance goals) four times, resulting in total cost growth of 182 percent and a 5-year schedule slip.

DHS has acknowledged that the IRB and JRC have not conducted oversight in accordance with DHS policy—largely because the process has depended on the direct involvement and availability of high-level leadership and has lacked sufficient staff resources to organize the review meetings. According to DHS representatives, the Deputy Secretary was unable to commit the time required to conduct reviews of all investments, so only some major investments were reviewed. Our prior work shows that this problem existed from the start. For example, in 2004, we reported that DHS was having difficulty bringing all of its information technology programs before the IRB in a timely manner. In 2005, we reported that key stakeholders, such as the Chief Procurement Officer, did not receive materials in time to conduct a thorough review and provide meaningful feedback prior to investment review meetings, and we recommended that DHS ensure that stakeholders, including CPO officials, have adequate time to review investment submissions and provide formal input to decision-making review boards. Moreover, in 2007, we reported that DHS investment boards did not conduct regular investment reviews and that control activities were not performed consistently across projects. DHS Chief Procurement Office and Chief Financial Office representatives added that the process was not adequately staffed to conduct annual reviews of investments as required by the investment review policy. We have previously recommended that DHS provide adequate resources, including people, funding, and tools, for oversight of major investments. A 2007 DHS assessment of 37 major investments found that many investments were awaiting senior management review. For example, FEMA's major investment, the flood map modernization program, requested a key investment review decision meeting in 2004 that was subsequently scheduled and then cancelled in 2006. As a result, the program proceeded from development to operations and support without IRB review or approval. Because of these limitations, alternative approaches to obtaining decisions were adopted.
Numerous officials reported that rather than going through the formal investment review process, DHS component officials in some cases began to seek approval directly from the Deputy Secretary. For example, in November 2006, the DHS Inspector General reported on CBP's Secure Border Initiative program, noting that investment oversight processes were sidelined in the urgent pursuit of SBInet's aggressive schedule, that the IRB and JRC processes were bypassed, and that key decisions about the scope of the program and the acquisition strategy were made without rigorous review and analysis or transparency. DHS officials indicated that some decisions were very informal, based on conversations with the Deputy Secretary and made without input from other IRB members. In such cases, the investment review process was bypassed, including consideration of supporting reviews and recommendations. DHS CPO and CFO representatives said they did not always know whether a decision had been made through this informal process.

DHS investment review policy requires programs to develop specific documentation that captures the key knowledge needed to make informed investment decisions. This approach is similar to DOD's, which requires adequate knowledge at critical milestones to reduce the risk associated with each phase of the investment's life cycle and enable program managers to deliver timely, affordable, quality products. GAO's work on commercial best practices for major acquisitions has demonstrated that this approach, if effectively implemented, can significantly improve program outcomes. Our prior work has found that inadequate attention to developing requirements results in requirements instability, which can ultimately cause cost escalation, schedule delays, and fewer end items. Many major DHS investments do not have the basic acquisition information required by investment review policy to guide and measure the performance of program activities and to support the investment review process. In particular, mission needs statements, operational requirements documents, and acquisition program baselines establish, respectively, capability gaps, the requirements needed to address those gaps, and cost, schedule, and performance parameters. As of March 2008, of the 57 level 1 and 2 investments, 34 were in a phase that required all three documents, but 27 either did not have one or more of these documents or provided only an unapproved draft (see appendix III for the investments lacking these approved documents). Of the 27 investments, we found that over a third reported cost, schedule, or performance breaches between fiscal year 2007 and the second quarter of fiscal year 2008. For example, the Infrastructure Transformation program, which did not have an approved operational requirements document or acquisition program baseline, reported being up to 19 percent behind schedule in 2007. In another instance, the Immigration and Customs Enforcement (ICE) Detention and Removal Modernization program, which also lacked an approved operational requirements document and acquisition program baseline, reported schedule slippage of about 20 percent. Without the required development and review of key acquisition data, DHS cannot be sure that programs have mitigated risks to better ensure good outcomes. CPO representatives explained that department acquisition management practices are new to many DHS components.
For most investments, CPO representatives said that program managers were not familiar with basic acquisition documents and that investment oversight staff had to work with program managers to help them develop these documents prior to investment reviews. In addition, we found that in fiscal year 2007, 24 major investments did not have program managers certified by DHS as having the knowledge and skills required to oversee complex acquisition programs. Moreover, other factors, such as pressure to get programs up and running, additional external requirements, and technological challenges, also affect the ability to manage acquisitions in a way that supports good acquisition outcomes. At the same time, some component officials said that they received insufficient and inconsistent guidance regarding what information should be included in key acquisition documents. This issue is long-standing. For example, we reported in 2005 that because of the small number of department oversight staff, only limited support was provided to programs to assist them in completing their submissions for oversight reviews. In addition, component officials told us that key acquisition documents are sometimes approved at the component level but are not reviewed and approved at the department level. For example, TSA officials indicated that documents needed for the Secure Flight and Passenger Screening Programs were approved by TSA and submitted to DHS for approval, but no action was taken to review and approve them.

The investment reviews that have been conducted have not always provided the discipline needed to help ensure that programs achieve cost, schedule, and performance goals—even when a review identified important deficiencies in an acquisition decision memorandum. DHS has not routinely followed up on whether the specific actions required by acquisition decision memorandums to mitigate potential risks have been implemented. The IRB issued a 2004 acquisition decision memorandum approving the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program—which aims to facilitate travel and trade—to move into the capability development and demonstration phase, although the IRB found the investment's cost, schedule, and performance risk to be high. The memorandum stated that more clarity was needed on the program's end-state capability, benefits related to life-cycle costs, and how it planned to transition to the operations and support phase. In 2006, we reported that DHS had yet to develop a comprehensive plan describing what the end-state capability would be and how, when, and at what cost it would be delivered. In a 2006 decision memorandum, the IRB again instructed US-VISIT to address the end-state capability by requiring a comprehensive, affordable exit plan for airports, seaports, and land ports. We subsequently reported that, as of October 2007, US-VISIT had yet to establish critical investment management processes, such as effective project planning, requirements management, and financial management, which are required to ensure that program capabilities and expected mission outcomes are delivered on time and within budget. In addition, DHS had not developed the capability for the other half of US-VISIT, even though it had allocated about one-quarter of a billion dollars to this effort.
In a May 2006 decision memorandum, the IRB directed the Cargo Advanced Automated Radiography System investment to develop, within 6 months, an acquisition program baseline, a concept of operations, and an operational requirements document. It also called for the investment to be reviewed annually. As of the second quarter of fiscal year 2008, a baseline and the concept of operations had been drafted, according to program officials. However, an operational requirements document had not been developed, even though a $1.3 billion contract had been awarded for the investment. In addition, the Cargo Advanced Automated Radiography System investment had not yet received a follow-on review by the IRB. In another example, in a December 2006 decision memorandum, the IRB directed ICE's major investment Automation and Modernization to update its acquisition program baseline, its cost-benefit analysis, and its life-cycle cost analysis. Automation and Modernization has since updated its acquisition program baseline, but its cost analyses were last updated in 2005.

Current and former CPO and CFO representatives noted that staffing has not been sufficient to review investments in a timely manner and conduct follow-up to ensure decisions are implemented. They indicated that support was needed to undertake a number of functions, including designing the investment review process, collecting and reviewing investment documentation, preparing analyses to support investment decisions, organizing review meetings, and conducting follow-up for major investments. According to DHS representatives, from 2004 to 2007 there were four full-time equivalent DHS employees, plus support from four contractors, to fulfill those responsibilities. Many acquisition decision memos provided specific deadlines for components to complete action items, but, according to CPO and CFO representatives, IRB action items were not tracked. Without follow-up, the IRB did not hold components and major investment program offices accountable for addressing oversight concerns.

DHS's investment review process requires that component heads establish processes and provide the requisite resources to manage approved investments adequately. Component heads are also responsible for approving all level 3 and level 4 investments and ensuring they comply with DHS investment review submission requirements. In the absence of sufficient review at the department level, well-designed component-level processes are particularly critical to ensuring that investments receive some level of oversight. For example, CBP and TSA officials reported that they relied on their component investment review processes to ensure some level of oversight when the department did not review their investments. However, of the nine components we reviewed, two did not have a process in place, and others had processes that were either in development or not focused on the entire investment life cycle. For example, the Domestic Nuclear Detection Office and the National Protection and Programs Directorate did not have a formal investment review process, meaning that, in the absence of an IRB or JRC review, their eight major investments received no formal review. While FEMA has a process to manage contract-related issues, its review process does not currently address the entire investment life cycle.
According to CPO representatives, the department is working with components to ensure that each has a process in place to manage investments and designates an acquisition officer who is accountable for major investments at the component level.

DHS has acknowledged that the investment review process has not been fully implemented; in fact, the process has been under revision since 2005. DHS has begun to make improvements to the planning, execution, and performance of major investments as initial steps to clarify and better adhere to the investment review process. In 2007, during the course of our review, the Undersecretary for Management undertook an assessment of 37 major investments, conducted under the CPO's direction, to gain an understanding and awareness of DHS's major investments. The assessment identified a range of systemic weaknesses both in the implementation of the investment review process and in the process itself. The DHS assessment found that many level 1 investments awaited leadership decisions; that acquisition decision memos lacked rigor; and that there was a lack of follow-up and enforcement of oversight decisions, inadequate technical support at the investment level, and unclear accountability for acquisitions at the component level. Many of the deficiencies identified are consistent with our findings. For example, the DHS assessment of Citizenship and Immigration Services (CIS) found that investments were either missing key investment management documents or using draft or unsigned versions of them, limiting DHS's ability to measure the investments' performance. In one case, DHS found that the Verification Information Systems investment was poorly defined. In another case, DHS reported that CIS's investment Transformation was using draft and unsigned acquisition documents, including its mission needs statement, acquisition plan, and acquisition program baseline. In 2007, we reported that CIS had not finalized its acquisition strategy for Transformation, leaving cost estimates uncertain; that its plans did not sufficiently discuss enterprise architecture alignment and expected project performance; and that these gaps created risks that could undermine Transformation's success as it began to implement its plans. In addition, DHS found that CIS's investment Customer Service Web Portal did not have key investment management documents and that the investment's performance could not be adequately assessed. Similarly, DHS found that CIS's investment Integrated Document Production did not have performance measures or documentation showing that performance metrics had been implemented to measure program cost, schedule, and performance execution.

To address the findings of its 2007 review, DHS is taking steps to reiterate the DHS investment review policy and establish a more disciplined and comprehensive investment review process. Beginning in February 2008, the Undersecretary for Management issued interim policies to improve management of major investments pending release of a new investment review management directive. Specifically, the Undersecretary for Management issued a memorandum in February 2008 initiating efforts to improve the quality of acquisition program baselines for level 1 investments, and another in July 2008 on improving life-cycle cost estimating for major investments. To help address the backlog of investments awaiting review, the CPO has begun to review and issue acquisition decision memorandums for each level 1 program.
As of August 2008, acquisition decision memorandums had been completed for three programs. The memorandums indicate the documentation that must be completed, the issues that must be addressed, and the related completion dates before investment approval is given. The memorandums also identify any limits or restrictions on the program until those actions are completed. Further, the Undersecretary for Management signed an interim acquisition management directive in November 2008 to improve acquisition management and oversight pending results from a formal DHS executive review.

DHS's annual budget process for funding major investments has not been appropriately informed by the investment review process, largely because the IRB seldom conducts oversight reviews and, when it has, the two processes have not been aligned to better ensure funding decisions fulfill mission needs. While DHS's investment review framework integrates the two processes, an approach similarly prescribed by GAO and OMB capital planning principles, many major investments received funding without a determination that mission needs and requirements were justified. In addition, two-thirds of DHS major investments did not have required life-cycle cost estimates, which are essential to making informed budget and capital planning decisions. At the same time, DHS has not conducted regular reviews of its investment portfolios, that is, broad categories of investments, to ensure effective performance and minimize unintended duplication of effort for proposed and ongoing investments. In July 2008, more than one-quarter of DHS's major investments were designated by OMB as poorly planned and by DHS as poorly performing. The DHS Undersecretary for Management has said that strengthening the links between investment review and budget decisions is a top priority.

OMB and GAO capital planning principles underscore the importance of a disciplined decision-making and requirements process as the basis for ensuring that investments succeed with minimal risk and the lowest life-cycle cost. This process should provide agency management with accurate information on the acquisition and life-cycle costs, schedules, and performance of current and proposed capital assets. The OMB Capital Programming Guide also stresses the need for agencies to develop processes for making investment decisions that deliver the right amount of funds to the right projects. In addition, OMB and GAO guidance provide that an investment review policy should seek to use long-range planning and a disciplined, integrated budget process for portfolio management to achieve performance goals at the lowest life-cycle cost and least risk to the taxpayer and the government. Investment portfolios are integrated, agencywide collections of investments that are assessed and managed collectively based on common criteria. Managing investments as portfolios is a conscious, continuous, and proactive approach to allocating limited resources among an organization's competing initiatives in light of the relative benefits expected from these investments. Our prior work at DOD has shown that fragmented decision-making processes do not allow for a portfolio management approach to making investment decisions that benefit the organization as a whole. The absence of an integrated approach can contribute to duplication in programs and equipment that do not operate effectively together.
GAO best practices work also emphasizes that (1) a comprehensive assessment of agency needs should be conducted, (2) current capabilities and assets should be identified to determine if and where a gap may lie between current and needed capabilities, and (3) alternatives for how best to fill the identified gap should be evaluated. The approved mission needs statement must support the need for a project before the project can proceed to the acquisition phase. OMB guidance states that, in creating capital plans, agencies should identify the performance gap, if any, between mission needs and the capabilities of the existing portfolio of agency assets. Moreover, best practices indicate that investment resources should be matched to valid requirements before investments are approved. The DHS investment review process calls for IRB decisions and program guidance regarding new investments to be reflected, to the extent possible, in the budget.

The DHS budget process consists of overlapping planning, programming, budgeting, and execution phases that examine existing program funding and link funding to program performance to ensure funds are expended appropriately and produce the expected results and benefits (see fig. 3). Annually, components submit resource allocation proposals for major investments to the CFO for review in March, and, in turn, resource allocation decisions are provided to components in July. According to CFO representatives, information from investment oversight reviews would be useful in informing annual resource allocation decisions. CFO representatives explained that the CFO sought to align resource allocation decisions with IRB approvals in 2006, but this was not possible because of the erratic investment review meeting schedule. As a result, a number of CFO and CPO representatives confirmed that funding decisions for major investments have not been contingent upon the outcomes of the investment review process.

One of the primary functions of the IRB is to review and approve level 1 investments for formal entry into the annual budget process. However, we found that 18 of DHS's 57 major investments did not have an approved mission needs statement, the document that formally acknowledges that the need is justified and supported. Specifically, the statement summarizes the investment requirement, the mission or missions that the investment is intended to support, the authority under which the investment was begun, and the funding source for the investment. As such, approval of the mission needs statement is required at the earliest stages of an investment. Lacking information on which major investments have validated mission needs, the CFO has allocated funds for major investments for which a capability gap has not been established. We reported in 2007 that DHS risked selecting investments that would not meet mission needs in the most cost-effective manner. The 18 investments that lacked an approved mission needs statement accounted for more than half a billion dollars in estimated fiscal year 2008 appropriations (see table 4). In addition, two-thirds of major investment budget decisions were reached without a life-cycle cost estimate, an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program.
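To make the structure of such an estimate concrete, one illustrative formulation (a sketch only; this report does not prescribe a particular formula, and actual DHS estimates define their own cost elements, discounting conventions, and time horizons) sums the cost elements over the program's planned life:

\[ \mathit{LCC} = \sum_{t=0}^{T} \left( C_t^{\mathrm{develop}} + C_t^{\mathrm{produce}} + C_t^{\mathrm{deploy}} + C_t^{\mathrm{sustain}} \right) \]

where each \(C_t\) term is the cost incurred in year \(t\) for the corresponding phase and \(T\) is the end of the program's planned life. Viewed this way, omitting or understating the sustainment term, often the largest share of a program's total cost, is one way an investment can appear affordable at approval yet later exhibit the cost growth described in this report.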
Life-cycle costing enhances decision making, especially in early planning and concept formulation of an acquisition, and can support budgetary decisions, key decision points, milestone reviews, and investment decisions. GAO and OMB guidance emphasize that reliable cost estimates are important for program approval and for the continued receipt of annual funding. DHS policy similarly provides that life-cycle cost estimates are essential to an effective budget process and form the basis for annual budget decisions. However, 39 of the 57 level 1 and level 2 major DHS investments we reviewed did not have a life-cycle cost estimate. Moreover, DHS's 2007 assessment of 37 major investments also found investments without life-cycle cost estimates and noted poor cost estimating as a systemic issue. Without such estimates, DHS major investments are at risk of experiencing cost overruns, missed deadlines, and performance shortfalls, and cost increases often mean that the government cannot fund as many programs as intended. To begin to address this issue, the DHS Undersecretary for Management issued a memo in July 2008 initiating an effort to review and improve the credibility of life-cycle cost estimates for all level 1 investments prior to formal milestone approval.

The JRC is responsible for managing the department's level 1 and level 2 major investment portfolios and making portfolio-related recommendations to the IRB. As discussed above, taking a portfolio perspective allows an agency to determine how its collective investments can optimally address its strategic goals and objectives. As part of this responsibility, the JRC is expected to identify crosscutting opportunities and overlapping or common requirements and to determine how best to ensure that DHS uses its finite resources wisely in those areas. Specifically, the JRC reviews investments to identify duplicative mission capabilities and to assess redundancies. While a certain amount of redundancy can be beneficial, our prior work has found that unintended duplication indicates the potential for inefficiency and waste. The Enterprise Architecture Board supports the JRC by overseeing the department's enterprise architecture and performing technical reviews of level 1 and level 2 IT investments. In 2007, we reported that DHS did not have an explicit methodology and criteria for determining program alignment to the architecture. We further reported that DHS policies and procedures for portfolio management had yet to be defined and that, as a result, control of the department's investment portfolios was ad hoc. When it met regularly, the JRC played a key role in identifying several examples of overlapping investments, including passenger screening programs. Specifically, in March 2006, the JRC identified programs with potential overlaps, including TSA's Secure Flight, TSA's Registered Traveler, and CBP's Consolidated Registered Traveler programs; the programs lacked coordination and were struggling with interoperability and information sharing. Because the JRC stopped meeting soon thereafter, DHS may have missed opportunities to follow up on these cases or to identify further cases of potential overlap.
In 2007, we reported that while TSA and CBP had begun coordinating efforts, they had yet to align their passenger prescreening programs to identify potential overlaps and minimize duplication. We recommended that DHS take additional steps and make the key policy and technical decisions necessary to more fully coordinate these programs. TSA and CBP have since worked with DHS to develop a strategy to align regulatory policies and coordinate efforts to facilitate consistency across their programs. In another case, we reported that CIS's Transformation investment had been conducted in an ad hoc and decentralized manner and, in certain instances, was duplicative of other IT investments. DHS's 2007 assessment of 37 major investments also identified potential overlap and duplication of effort between investments. Overall, the review found that limited communication and coordination across components led to overlapping DHS programs. For example, DHS found that the CIS Verification Information System had potential duplication of requirements implementation with the National Protection and Programs Directorate's U.S. Computer Emergency Readiness Team investment. In another instance, DHS found that the CIS Integrated Document Production investment had an unclear relationship to other DHS credentialing investments.

OMB requires all agencies, including DHS, to submit program justification documents for major investments to inform both quantitative decisions about budgetary resources, consistent with the administration's program priorities, and qualitative assessments about whether the agency's programming processes are consistent with OMB policy and guidance. To help ensure that investments of public resources are justified and that public resources are wisely invested, OMB began using a Management Watch List in the President's fiscal year 2004 budget request as a means to oversee the justification for and planning of agencies' information technology investments. This list was derived from a detailed review of each investment's Capital Asset Plan and Business Case. In addition, OMB has established criteria for agencies to use in designating high-risk projects that require special attention from oversight authorities and the highest levels of agency management. These projects are not necessarily at risk of failure, but may be on the list for one or more of the following four reasons:

The agency has not consistently demonstrated the ability to manage complex projects.

The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency's total portfolio.

The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization.

Delay or failure of the project would introduce for the first time unacceptable or inadequate performance or failure of an essential mission function of the agency, a component of the agency, or another organization.

According to DHS officials, without input from investment oversight reviews, a limited budget review of program justification documents prior to OMB submittal can be the only oversight provided for some DHS major investments. CFO representatives told us that, in the absence of investment review decisions, they rely on the best available information provided by program managers to determine if funding requests are reasonable.
As a result, major investment programs can proceed regardless of whether the investment has received the appropriate IRB review or has the required acquisition documents. We reported that, as of July 2008, 15 DHS major investments were on both the OMB Management Watch List and the list of high-risk projects with shortfalls, meaning that they were both poorly planned and poorly performing. According to DHS officials, the funding, programming, and budget execution process is not integrated with the requirements and acquisition oversight process, and the DHS Undersecretary for Management has said that strengthening these processes is a top priority.

The challenges DHS faces in implementing its investment review process are long-standing and have generally resulted in investment decisions that are inconsistent with established policy and oversight. Concurrent with this lack of oversight are acquisition programs worth billions of dollars with cost, schedule, and performance deficiencies. Weaknesses in some component management practices compound the problem, leaving investments with little or no scrutiny or review. While the department's process has been under revision since 2005, DHS has begun new efforts to clarify and better adhere to the investment review process. Without validating mission needs, requirements, and program baselines, including costs, as well as identifying duplicative efforts and monitoring progress, DHS cannot appropriately manage investments and inform the budget process. Until DHS aligns oversight of major investments with annual budget decisions, the department is at risk of failing to invest in programs that maximize resources to address capability gaps and ultimately help meet critical mission needs.

We recommend that the Secretary of Homeland Security direct the Undersecretary for Management to take the following five actions to better ensure the investment review process is fully implemented and adhered to:

Establish a mechanism to identify and track on a regular basis new and ongoing major investments and ensure compliance with actions called for by investment oversight boards.

Reinstate the JRC or establish another departmental joint requirements oversight board to review and approve acquisition requirements and assess potential duplication of effort.

Ensure investment decisions are transparent and documented as required.

Ensure that budget decisions are informed by the results of investment reviews, including IRB-approved acquisition information and life-cycle cost estimates.

Identify and align sufficient management resources to implement oversight reviews in a timely manner throughout the investment life cycle.

To improve investment management, we recommend that the Secretary of Homeland Security direct component heads to take the following two actions:

Ensure that components have established processes to manage major investments consistent with departmental policies.

Establish a mechanism to ensure major investments comply with established component and departmental investment review policy standards.

We provided a draft of this report to DHS for review and comment. In written comments, the department generally concurred with our findings and recommendations, citing actions taken and efforts under way to improve the investment review process. The department's comments are reprinted in appendix II. DHS components also provided technical comments, which we incorporated as appropriate where supporting documentation was provided.
In addition, several DHS components and offices reported additional progress since the time of our review to ensure their major investments comply with departmental policies. DHS is taking important steps to strengthen investment management and oversight. After the process had been under revision since 2005, DHS issued a new interim management directive on November 7, 2008, that outlines a revised acquisition and investment review process. DHS also cited two new offices within the Chief Procurement Office that were established to provide better acquisition management and oversight; recently completed program reviews; and plans to revise training, standards, and certification processes for program managers. While many of these efforts are noted in our report, investment management and oversight has been an ongoing challenge since the department was established, and continued progress and successful implementation of these recent efforts will require sustained leadership and management attention. DHS stated that the new interim acquisition management directive will address many of our recommendations; however, our work has found that DHS has not fully implemented similar steps in the past.

For example, in response to our first recommendation, to establish a mechanism to identify and track on a regular basis new and ongoing major investments and ensure compliance with actions called for by investment review board decisions, DHS's new interim directive requires major programs to participate in an acquisition reporting process. DHS is in the process of implementing a Next Generation Periodic Reporting System, but it is too soon to tell whether this system will be successful; the department's first-generation periodic reporting system was never fully implemented, making it difficult for the department to track and enforce investment decisions.

In response to our second recommendation, to reinstate the JRC or establish another departmental joint requirements oversight board to review and approve acquisition requirements and assess potential duplication of effort, DHS stated it has already developed a new Strategic Requirements Review process to assess capability needs and gaps, completed pilots, and briefed senior leadership. According to DHS's new interim acquisition management directive, the results of this process are to be validated by the JRC, which is still in the process of being established and for which no timeline was provided. Further, as we found in this report, when the JRC was previously established in 2004, it was never fully implemented because of a lack of senior management officials' involvement.

In response to our third recommendation, that DHS ensure investment decisions are transparent and documented as required, DHS stated that its new interim acquisition management directive already implements this by requiring acquisition documentation for each acquisition decision event and capturing decisions and actions in acquisition decision memorandums. DHS also reported that it has conducted eight Acquisition Review Board meetings with documented acquisition decision memorandums. While this progress is notable, our work has found that since 2004, DHS's investment review board has not been able to effectively carry out its oversight responsibilities and keep pace with investments requiring review, owing to a lack of senior officials' involvement as well as limited monitoring and resources.
It is too soon to tell whether DHS's latest efforts will be sustained to ensure investments are consistently reviewed as needed. Regarding our fourth recommendation, that the department ensure budget decisions are informed by the results of investment reviews, the new interim management directive creates a link between the budget and requirements processes and describes interfaces with other investment processes. While this process is more clearly established in the new directive, its implementation will be evidenced in the documents produced during upcoming budget cycles. We found in this report that the previous investment review process also highlighted links to the budget and other investment processes, yet the results of oversight reviews did not consistently inform budget decisions.

In response to our fifth recommendation, to identify and align sufficient management resources to implement oversight reviews in a timely manner throughout the investment life cycle, DHS stated that it has partially implemented the recommendation by establishing a senior executive-led Acquisition Program Management Division within the Office of the CPO and plans to increase staffing from its current level of 12 experienced acquisition and program management specialists to 58 by the end of fiscal year 2010. Creating a new division to manage oversight reviews is a positive step; however, we have found that DHS has been challenged to provide sufficient resources to support its acquisition oversight function, and the CPO's office has had difficulty filling vacancies in the past.

Regarding our two recommendations to improve investment management at the component level, DHS noted that the new interim management directive requires components to align their internal policies and procedures by the end of the third quarter of fiscal year 2009 (June 2009). In addition, DHS plans to issue another management directive that will instruct component heads to create component acquisition executives in their organizations to be responsible for implementing management and oversight of component acquisition processes. If fully implemented, these steps should help to ensure that components have established processes to manage major investments. DHS further noted that the establishment of the Acquisition Program Management Division, the new interim acquisition management directive, the reestablishment of the acquisition review process, and other steps work together to ensure major investments comply with established component and departmental investment review policy standards. To implement this recommendation, the new component acquisition executives will need to be in place and to successfully implement and ensure compliance with the new processes.

DHS will continue to face the ongoing challenges to implementing an effective investment review process that are identified in this report and highlighted in the department's Integrated Strategy for High Risk Management. For example, consistent with our findings, the strategy cites challenges in ensuring the availability of leadership to conduct investment reviews, the timely collection and assessment of program data, and sufficient staff to support the investment review process. Sustained leadership focus will be even more critical to implement changes and maintain progress on acquisition management challenges as the department undergoes its first executive branch transition in 2009.
As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of Homeland Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were Amelia Shachoy, Assistant Director; William Russell; Laura Holliday; Nicole Harkin; Patrick Peterson; Karen Sloan; Marie Ahearn; and Kenneth Patton.

Our objectives were to (1) evaluate DHS's implementation of the investment review process and (2) assess DHS's integration of the investment review and budget processes to ensure major investments fulfill mission needs. To assess how the investment review process has been implemented, we reviewed the DHS Investment Review Process management directive and corresponding handbook to determine which major investments required DHS review. In doing so, we focused on determining such key factors as how frequently major investments required oversight reviews and which documents, such as mission needs statements and acquisition program baselines, were required to be approved by DHS executive review boards. We included in our analyses the 57 level 1 and level 2 investments that DHS identified for fiscal year 2008. We determined the level of oversight provided to 48 of these major investments, those that required department-level review from fiscal year 2004 through the second quarter of fiscal year 2008. We also interviewed representatives of the Chief Procurement Office (CPO), Chief Financial Office (CFO), and Chief Information Office, as well as nine DHS components and offices that manage major investments. We then collected investment review and program documents for each major investment and compared the information to investment review policy requirements. We also reviewed acquisition decision memorandums from fiscal year 2004 through the second quarter of fiscal year 2008. Based on the decision memos and investment information, we determined how many investments had been reviewed in accordance with DHS policy during that period. We also reviewed prior GAO reports on DHS programs as well as commercial best practices for acquisition. We reviewed DHS documents such as interim policy memos and guidance and interviewed CPO staff regarding planned revisions to the investment review process. We also compared our findings with a 2007 DHS internal assessment of 37 major investments. In addition, we reviewed available DHS periodic reports on major investments as well as component operational status reports to identify instances of cost growth, schedule slips, and performance shortfalls for major investments and to determine the status of program manager certification in fiscal year 2007 through the second quarter of fiscal year 2008. This information is self-reported by DHS major program offices, not all programs provided complete information, and we did not independently verify the information in these reports.
To assess the integration of the investment review and budget processes, we reviewed DHS management directives for the investment review process and the planning, programming, budgeting, and execution process, as well as corresponding guidance. We also interviewed representatives from the Chief Procurement Office and Chief Financial Office to discuss how the processes have been integrated since 2004. We used investment data and acquisition documents from each major investment program to determine which had required life-cycle cost estimates and other documents, such as validated mission needs statements. We also reviewed fiscal year 2009 DHS budget justification submissions to OMB. We compared DHS budget practices with GAO and Office of Management and Budget (OMB) guidance on capital programming principles. In addition, we reviewed relevant GAO reports.

We conducted this performance audit from September 2007 to November 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Mission needs statements, operational requirements documents, and acquisition program baselines establish capability gaps, the requirements needed to address those gaps, and cost, schedule, and performance parameters, respectively. Of the 57 DHS level 1 and 2 investments, 34 were in an acquisition phase that required all three documents; 27 either did not develop one or more of these documents or provided only unapproved drafts (see table 5). Some major investment programs submitted acquisition program baselines that had been approved at the component level but did not receive department-level review and approval.

Appendix IV: Department of Homeland Security Investments Reviewed by GAO

Provides fusion and visualization of information to create timely and accurate situational awareness reports for the Secretary of Homeland Security, the White House, and other users to detect, deter, and prevent terrorist activities.

Homeland Security Information System: Facilitates information sharing and collaboration across DHS and its partners; enables real-time sharing of threat information for tactical first-responder support; and supports decision making in a real-time secure environment.

Designed to support processing of applications and petitions, capture fees and provide funds control, provide case status and support, and record the results of the adjudication of each application and petition.

Established to provide naturalization processing, interfaces with associated databases, improved accuracy, and more timely and accurate information to the public.

Provides resources for all web development and maintenance activities, including web content management, development of web-based transactions with Citizenship and Immigration Services customers and staff, web site maintenance, and web site hosting.

Provides integrated card production system printers' hardware and software, operational contract support, and the facilities required to print secure cards granting immigration privileges or benefits to applicants.

A system to allow all new immigration benefits applications and petitions to be filed electronically through a Citizenship and Immigration Services Internet web-based portal.
Citizenship and Immigration Services will have a more comprehensive view of the customer and any potentially fraudulent transactions; improved audit functionality and record management; better resource management; and increased sharing of information within DHS and with other agency partners such as Justice and State.

Supports the Systematic Alien Verification for Entitlements Program, by providing automated status-verification information to federal, state, and local benefit-granting and entitlement agencies, and the E-Verify program, by allowing participating employers to verify that their new employees are authorized to work in the United States.

Aims to replace and modernize most of the Coast Guard's fleet of offshore cutters, boats, aircraft, and command and control systems over 25 years.

Supports incident response, contingency planning, violation reporting and processing, vessel casualty investigation and analysis, vessel documentation, user fee collection, analysis of mission performance, and monitoring of program effectiveness.

Will implement a nationwide system for tracking and exchanging information with identification-system-equipped vessels operating in or approaching U.S. waters to improve homeland security and enhance Coast Guard and DHS operational mission performance.

Command, control, and communication system that improves mission execution in coastal zones. Essential to meeting Search and Rescue program goals; results in improved response to distress calls and better coordination and interoperability with other government agencies and first responders.

Intended to replace the aging 41-foot utility boats and other large non-standard boats with assets more capable of meeting all of the Coast Guard's multi-mission operational requirements.

A collection of systems or applications used to provide vessel logistics information management capacity to the Coast Guard.

Customs and Border Protection (CBP)

Automated Commercial Environment: Web-based import and export system that consolidates seven systems into one portal. It will provide advanced technology and information to decide, before a shipment reaches U.S. borders, what cargo should be targeted and what cargo should be expedited.

Intranet-based enforcement and decision support tool that is the cornerstone for all CBP targeting efforts. CBP uses the system to improve the collection, use, analysis, and dissemination of information to target, identify, and prevent potential terrorists and terrorist weapons from entering the United States and to identify other violations and violators of U.S. law.

Will build additional facilities to meet the needs of CBP's expansion of its Border Patrol agent staffing. The recent addition of more agents and technology into enforcement activities has exceeded existing facility capacity.

Framework used by trusted traveler programs for registering enrollees and performing identification and validation using automated systems.

Technologies support the interdiction of weapons of mass destruction and effect, contraband, and illegal aliens being smuggled across the United States border, while having a minimal impact on the flow of legitimate commerce.

Aims to integrate technology and tactical infrastructure into a comprehensive border security suite. This system will improve agents' ability to respond to illegal activity and help DHS manage, control, and secure the border.

Phase I will deploy next-generation technology and integrated systems to scan maritime containers for radiation or other special nuclear material.
Will help develop an integrated and coordinated air and marine force to detect, interdict, and prevent acts of terrorism arising from the unlawful movement of people, illegal drugs, and other contraband toward or across the borders of the United States. The goal is to modernize and standardize the existing CBP air and marine fleets; achieving it will require a specific number of primary and secondary air and marine locations and additional personnel to meet growing needs.

Consolidated business case between CBP and ICE that will modernize subject record ("watch list") processing, inspection support at ports of entry, and case management.

Western Hemisphere Travel Initiative: Will fulfill the regulatory requirement to develop and implement a system to verify that U.S. and non-U.S. citizens present an authorized travel document denoting identity and citizenship when entering the United States.

Provides a state-of-the-art, flexible, secure (through security certification and accreditation), classified, collateral, integrated, and centrally managed enterprise wide-area network.

Includes the consolidated DHS IT infrastructure environments that support the cross-organizational missions of protecting the homeland from a myriad of threats. These IT infrastructure investments are critical to providing a foundation through which information can be disseminated and shared across all DHS components, including external customers and intelligence partners, in a secure, cost-effective, and efficient manner.

Aims to achieve compliant financial management services and optimize financial management operations across the diverse systems cobbled together in 2003 when DHS was created from 22 agencies and over 200,000 people.

Aims to improve and consolidate DHS's vast array of payroll and personnel systems. It will provide DHS with a common, flexible suite of human resource business systems.

Will develop, procure, and deploy current- and next-generation passive cargo portal units at the nation's borders.

Will deliver an advanced imaging system that will automatically detect high-density material, detecting shielding that could be used to hide special nuclear material and highly enriched uranium or weapons-grade plutonium. The system aims to improve throughput rates, providing more effective scanning of a higher portion of cargo at the nation's ports of entry.

An integrated system to collect, analyze, and distribute status, alarm, alert, and spectral data from all radiation portal monitors and equipment deployed at the federal, state, local, tribal, and international levels.

Federal Emergency Management Agency (FEMA)

Consolidated Alert & Warning System: Provides the president, governors, mayors, and tribal leadership with the ability to speak to the American people in the event of a national emergency by providing an integrated, survivable, all-hazards public alert and warning system that leverages all available technologies and transmission paths. It will also provide "situation awareness" to the public and leadership at multiple levels of government in an emergency.

Provides information exchange delivery mechanisms through a portal for disaster information, an information exchange backbone, and data interoperability standards.

Established a technology-based, cost-effective process for updating, validating, and distributing flood risk data and digitized flood maps throughout the nation.

Provides inspection staff and logistics at a moment's notice to any Presidentially declared disaster.
The state of readiness is 24 hours a day, 7 days a week, 365 days a year.

Provides FEMA, emergency support function partners, and state decision makers with visibility of disaster relief assets and shipments to help ensure that the right assets are delivered in the right quantities to the right locations at the right time.

Immigration and Customs Enforcement (ICE)

Aims to satisfy three fundamental requirements: (1) house a growing population of illegal aliens, (2) provide appropriate conditions of confinement, and (3) maintain its facility infrastructure. These requirements must be met through a series of design and build actions that begin with establishing facility infrastructure, continue with establishing detention capacity, and culminate in building secure housing facilities.

IT modernization and automation initiative that serves as the principal ICE program to enhance ICE's technology foundation, maximize workforce productivity, secure the IT environment, and improve information sharing across ICE and DHS.

Detention and Removal Modernization: Will provide operations management and field personnel the technical tools necessary to apprehend, detain, and remove illegal aliens in a cost-effective manner.

Web-based system that manages data on schools, program sponsors, foreign students, exchange visitors, and their dependents during their approved participation in the U.S. education system so that only legitimate visitors enter the United States.

Survivable network connecting DHS with the sectors that restore the infrastructure: electricity, IT, and communications; states' homeland security advisors; and sector-specific agencies and resources for each critical infrastructure sector.

Collects, catalogs, and maintains standardized and quantifiable risk-related infrastructure information to enable the execution of national risk management and to prioritize the data for use by DHS partners.

Aims to provide specially designed telecommunications services to the national security and emergency preparedness user community during natural or man-made disasters when conventional communications services are ineffective. These telecommunication services are used to coordinate response and recovery efforts and, if needed, to assist with facilitating the reconstitution of the government.

Combines the capabilities of four existing investments to form a fully integrated IT system that will help fulfill the organization's mission to collect, analyze, and respond to cyber security threats and vulnerabilities pursuant to its mission and authorities.

Program to collect, maintain, and share information, including biometric identifiers, on foreign nationals to determine whether an individual (1) should be prohibited from entering the United States; (2) can receive, extend, change, or adjust immigration status; (3) has overstayed or otherwise violated the terms of admission; (4) should be apprehended or detained for law enforcement action; or (5) needs special protection or attention (e.g., refugees). The vision of the US-VISIT program is to deploy end-to-end management of data on foreign nationals covering their interactions with U.S. immigration and border management officials before they enter, when they enter, while they are in the United States, and when they exit.

Information technology investment with a mission of providing early detection and characterization of a biological attack on the United States.
National Bio and Agro-Defense Facility: Infrastructure investment to support the Science and Technology Chemical and Biological Division program, which provides the technologies and systems needed to anticipate, deter, detect, mitigate, and recover from possible biological attacks on this nation's population, agriculture, or infrastructure. The program operates laboratories and biological detection systems and conducts research.

Infrastructure investment to support the Science and Technology Chemical and Biological Division program, a key component in implementing the President's National Strategy for Homeland Security by addressing the need for substantial research into relevant biological and medical sciences to better detect and mitigate the consequences of biological attacks and to conduct risk assessments. The program operates laboratories and biological detection systems and conducts research.

Transportation Security Administration (TSA)

Implements a national checked-baggage screening system to protect against criminal and terrorist threats, while minimizing burdens on the transportation industry and the traveling public.

An airborne communication system of systems (air-to-ground, ground-to-air, air-to-air, and intra-cabin) that will give air marshals and other law enforcement officers access to wireless communications and the ability to share information while in flight.

System to manage the schedules of federal air marshals given the flights available (approximately 25,000 per day) and the complexities of last-minute changes due to flight cancellations.

Hazmat Threat Assessment Program: Leverages existing intelligence data to perform threat assessments on commercial truck drivers who transport hazardous materials, to determine their threat status for transportation security.

Provides the resources required to deploy and maintain passenger screening and carry-on baggage screening equipment and processes at approximately 451 airports nationwide in order to minimize the risk of injury or death of people or damage to property due to hostile acts of terrorism.

Will strengthen the security of the nation's transportation systems by creating, implementing, and operating a threat-based watch list matching capability for approximately 250 million domestic air carrier passengers per year.

Will improve security by establishing a system-wide common secure biometric credential, used by all transportation modes, for personnel requiring unescorted physical and/or logical access to secure areas of the transportation system.

Provides a common environment for hosting applications, an integrated data infrastructure, content, and a collection of shared services.
In fiscal year 2007, the Department of Homeland Security (DHS) obligated about $12 billion for acquisitions to support homeland security missions. DHS's major investments include Coast Guard ships and aircraft; border surveillance and screening equipment; nuclear detection equipment; and systems to track finances and human resources. In part to provide insight into the cost, schedule, and performance of these acquisitions, DHS established an investment review process in 2003. However, concerns have been raised about how well the process has been implemented, particularly for large investments. GAO was asked to (1) evaluate DHS's implementation of the investment review process and (2) assess DHS's integration of the investment review and budget processes to ensure major investments fulfill mission needs. GAO reviewed relevant documents, including those for 57 DHS major investments (investments with a value of at least $50 million), 48 of which required department-level review through the second quarter of fiscal year 2008, and interviewed DHS headquarters and component officials.

While DHS's investment review process calls for executive decision making at key points in an investment's life cycle, including program authorization, the process has not provided the oversight needed to identify and address cost, schedule, and performance problems in its major investments. Poor implementation of the process is evidenced by the number of investments that did not adhere to the department's investment review policy: of DHS's 48 major investments requiring milestone and annual reviews, 45 were not assessed in accordance with this policy. At least 14 of these investments have reported cost growth, schedule slips, or performance shortfalls. Poor implementation is largely the result of DHS's failure to ensure that its Investment Review Board (IRB) and Joint Requirements Council (JRC), the department's major acquisition decision-making bodies, effectively carried out their oversight responsibilities and had the resources to do so. Moreover, even when the oversight boards met, DHS could not enforce IRB and JRC decisions because it did not track whether components took the actions called for in those decisions. In addition, many major investments lacked basic acquisition documents necessary to inform the investment review process, such as program baselines, and two of the nine components reviewed, which manage a total of eight major investments, do not have required component-level processes in place. DHS has begun several efforts to address these shortcomings, including issuing an interim directive to improve the investment review process.

The investment review framework also integrates the budget process; however, budget decisions have been made in the absence of required oversight reviews, and, as a result, DHS cannot ensure that annual funding decisions for its major investments make the best use of resources and address mission needs. GAO found that almost a third of DHS's major investments received funding without having validated mission needs and requirements, which confirm that a need is justified, and that two-thirds did not have required life-cycle cost estimates. At the same time, DHS has not conducted regular reviews of its investment portfolios, broad categories of investments that are linked by similar missions, to ensure effective performance and minimize unintended duplication of effort.
Without validated requirements, life-cycle cost estimates, and regular portfolio reviews, DHS cannot ensure that its investment decisions are appropriate and will ultimately address capability gaps. In July 2008, 15 of the 57 DHS major investments reviewed by GAO were designated by the Office of Management and Budget as poorly planned and by DHS as poorly performing.
A domestic bioterrorist attack is considered to be a low-probability event, in part because of the various difficulties involved in successfully delivering biological agents to achieve large-scale casualties. However, a number of cases involving biological agents, including at least one completed bioterrorist act and numerous threats and hoaxes, have occurred domestically. In 1984, a group intentionally contaminated salad bars in restaurants in Oregon with salmonella bacteria. Although no one died, 751 people were diagnosed with foodborne illness. Some experts predict that more domestic bioterrorist attacks are likely to occur. The burden of responding to such an attack would fall initially on personnel in state and local emergency response agencies. These "first responders" include firefighters, emergency medical service personnel, law enforcement officers, public health officials, health care workers (including doctors, nurses, and other medical professionals), and public works personnel. If the emergency were to require federal disaster assistance, federal departments and agencies would respond according to responsibilities outlined in the Federal Response Plan. Several groups, including the Advisory Panel to Assess Domestic Response Capabilities for Terrorism Involving Weapons of Mass Destruction (known as the Gilmore Panel), have assessed the capabilities at the federal, state, and local levels to respond to a domestic terrorist incident involving a weapon of mass destruction (WMD), that is, a chemical, biological, radiological, or nuclear agent or weapon.

While many aspects of an effective response to bioterrorism are the same as those for any disaster, there are some unique features. For example, if a biological agent is released covertly, it may not be recognized for a week or more because symptoms may not appear for several days after the initial exposure and may be misdiagnosed at first. In addition, some biological agents, such as smallpox, are communicable and can spread to others who were not initially exposed. These differences require a type of response that is unique to bioterrorism, including infectious disease surveillance, epidemiologic investigation, laboratory identification of biological agents, and distribution of antibiotics to large segments of the population to prevent the spread of an infectious disease. However, some aspects of an effective response to bioterrorism are also important in responding to any type of large-scale disaster, such as providing emergency medical services, continuing health care services delivery, and managing mass fatalities.

Federal spending on domestic preparedness for terrorist attacks involving WMDs has risen 310 percent since fiscal year 1998, to approximately $1.7 billion in fiscal year 2001, and may increase significantly after the events of September 11, 2001. However, only a portion of these funds was used for activities related to research on and preparedness for the public health and medical consequences of a bioterrorist attack. We cannot measure the total investment in such activities because departments and agencies provided funding information in various forms (as appropriations, obligations, or expenditures). Because the funding information provided is not equivalent, we summarized funding by department or agency, but not across the federal government (see apps. I and II). Reported funding generally shows increases from fiscal year 1998 to fiscal year 2001.
Several agencies received little or no funding in fiscal year 1998. For example, within the Department of Health and Human Services (HHS), the Centers for Disease Control and Prevention’s (CDC) Bioterrorism Preparedness and Response Program was established and first received funding in fiscal year 1999 (see app. I and app. II). Its funding has increased from approximately $121 million at that time to approximately $194 million in fiscal year 2001. Research is currently being done to enable the rapid identification of biological agents in a variety of settings; develop new or improved vaccines, antibiotics, and antivirals to improve treatment and vaccination for infectious diseases caused by biological agents; and develop and test emergency response equipment such as respiratory and other personal protective equipment. Appendix I provides information on the total reported funding for all the departments and agencies carrying out research, along with examples of this research. The Department of Agriculture (USDA), Department of Defense (DOD), Department of Energy, HHS, Department of Justice (DOJ), Department of the Treasury, and the Environmental Protection Agency (EPA) have all sponsored or conducted projects to improve the detection and characterization of biological agents in a variety of different settings, from water to clinical samples (such as blood). For example, EPA is sponsoring research to improve its ability to detect biological agents in the water supply. Some of these projects, such as those conducted or sponsored by DOD and DOJ, are not primarily for the public health and medical consequences of a bioterrorist attack against the civilian population, but could eventually benefit research for those purposes. Departments and agencies are also conducting or sponsoring studies to improve treatment and vaccination for diseases caused by biological agents. For example, HHS’ projects include basic research sponsored by the National Institutes of Health to develop drugs and diagnostics and applied research sponsored by the Agency for Healthcare Research and Quality to improve health care delivery systems by studying the use of information systems and decision support systems to enhance preparedness for the delivery of medical care in an emergency. In addition, several agencies, including the Department of Commerce’s National Institute of Standards and Technology and DOJ’s National Institute of Justice, are conducting research that focuses on developing performance standards and methods for testing the performance of emergency response equipment, such as respirators and personal protective equipment. Federal departments’ and agencies’ preparedness efforts have included increasing federal, state, and local response capabilities, developing response teams of medical professionals, increasing the availability of medical treatments, participating in and sponsoring terrorism response exercises, planning to aid victims, and providing support during special events such as presidential inaugurations, major political party conventions, and the Super Bowl. Appendix II contains information on total reported funding for all the departments and agencies with bioterrorism preparedness activities, along with examples of these activities. Several federal departments and agencies, such as the Federal Emergency Management Agency (FEMA) and CDC, have programs to increase the ability of state and local authorities to successfully respond to an emergency, including a bioterrorist attack.
These departments and agencies contribute to state and local jurisdictions by helping them pay for equipment and develop emergency response plans, providing technical assistance, increasing communications capabilities, and conducting training courses. Federal departments and agencies have also been increasing their own capacity to identify and deal with a bioterrorist incident. For example, CDC, USDA, and the Food and Drug Administration (FDA) are improving surveillance methods for detecting disease outbreaks in humans and animals. They have also established laboratory response networks to maintain state-of-the-art capabilities for biological agent identification and the characterization of human clinical samples. Some federal departments and agencies have developed teams to directly respond to terrorist events and other emergencies. For example, HHS’ Office of Emergency Preparedness (OEP) created Disaster Medical Assistance Teams to provide medical treatment and assistance in the event of an emergency. Four of these teams, known as National Medical Response Teams, are specially trained and equipped to provide medical care to victims of WMD events, such as bioterrorist attacks. Several agencies are involved in increasing the availability of medical supplies that could be used in an emergency, including a bioterrorist attack. CDC’s National Pharmaceutical Stockpile contains pharmaceuticals, antidotes, and medical supplies that can be delivered anywhere in the United States within 12 hours of the decision to deploy. The stockpile was deployed for the first time on September 11, 2001, in response to the terrorist attacks on New York City. Federally initiated bioterrorism response exercises have been conducted across the country. For example, in May 2000, many departments and agencies took part in the Top Officials 2000 exercise (TOPOFF 2000) in Denver, Colorado, which featured the simulated release of a biological agent. Participants included local fire departments, police, hospitals, the Colorado Department of Public Health and the Environment, the Colorado Office of Emergency Management, the Colorado National Guard, the American Red Cross, the Salvation Army, HHS, DOD, FEMA, the Federal Bureau of Investigation (FBI), and EPA. Several agencies also provide assistance to victims of terrorism. FEMA can provide supplemental funds to state and local mental health agencies for crisis counseling to eligible survivors of presidentially declared emergencies. In the aftermath of the recent terrorist attacks, HHS released $1 million in funding to New York State to support mental health services and strategic planning for comprehensive and long-term support to address the mental health needs of the community. DOJ’s Office of Justice Programs (OJP) also manages a program that provides funds for victims of terrorist attacks that can be used to provide a variety of services, including mental health treatment and financial assistance to attend related criminal proceedings. Federal departments and agencies also provide support at special events to improve response in case of an emergency. For example, CDC has deployed a system to provide increased surveillance and epidemiological capacity before, during, and after special events. Besides improving emergency response at the events, participation by departments and agencies gives them valuable experience working together to develop and practice plans to combat terrorism. 
Federal departments and agencies are using a variety of interagency plans, work groups, and agreements to coordinate their activities to combat terrorism. However, we found evidence that coordination remains fragmented. For example, several different agencies are responsible for various coordination functions, which limits accountability and hinders unity of effort; several key agencies have not been included in bioterrorism-related policy and response planning; and the programs that agencies have developed to provide assistance to state and local governments are similar and potentially duplicative. The President recently took steps to improve oversight and coordination, including the creation of the Office of Homeland Security. Over 40 federal departments and agencies have some role in combating terrorism, and coordinating their activities is a significant challenge. We identified over 20 departments and agencies as having a role in preparing for or responding to the public health and medical consequences of a bioterrorist attack. Appendix III, which is based on the framework given in the Terrorism Incident Annex of the Federal Response Plan, shows a sample of the coordination efforts by federal departments and agencies with responsibilities for the public health and medical consequences of a bioterrorist attack, as they existed prior to the recent creation of the Office of Homeland Security. This figure illustrates the complex relationships among the many federal departments and agencies involved. Departments and agencies use several approaches to coordinate their activities on terrorism, including interagency response plans, work groups, and formal agreements. Interagency plans for responding to a terrorist incident help outline agency responsibilities and identify resources that could be used during a response. For example, the Federal Response Plan provides a broad framework for coordinating the delivery of federal disaster assistance to state and local governments when an emergency overwhelms their ability to respond effectively. The Federal Response Plan also designates primary and supporting federal agencies for a variety of emergency support operations. For example, HHS is the primary agency for coordinating federal assistance in response to public health and medical care needs in an emergency. HHS could receive support from other agencies and organizations, such as DOD, USDA, and FEMA, to assist state and local jurisdictions. Interagency work groups are being used to minimize duplication of funding and effort in federal activities to combat terrorism. For example, the Technical Support Working Group is chartered to coordinate interagency research and development requirements across the federal government in order to prevent duplication of effort between agencies. The Technical Support Working Group, among other projects, helped to identify research needs and fund a project to detect biological agents in food that can be used by both DOD and USDA. Formal agreements between departments and agencies are being used to share resources and knowledge. For example, CDC contracts with the Department of Veterans Affairs (VA) to purchase drugs and medical supplies for the National Pharmaceutical Stockpile because of VA’s purchasing power and ability to negotiate large discounts. Overall coordination of federal programs to combat terrorism is fragmented. For example, several agencies have coordination functions, including DOJ, the FBI, FEMA, and the Office of Management and Budget. 
Officials from a number of the agencies that combat terrorism told us that the coordination roles of these various agencies are not always clear and sometimes overlap, leading to a fragmented approach. We have found that the overall coordination of federal research and development efforts to combat terrorism is still limited by several factors, including the compartmentalization or security classification of some research efforts. The Gilmore Panel also concluded that the current coordination structure does not provide for the requisite authority or accountability to impose the discipline necessary among the federal agencies involved. The multiplicity of federal assistance programs requires focus and attention to minimize redundancy of effort. Table 1 shows some of the federal programs providing assistance to state and local governments for emergency planning that would be relevant to responding to a bioterrorist attack. While the programs vary somewhat in their target audiences, the potential redundancy of these federal efforts highlights the need for scrutiny. In our report on combating terrorism, issued on September 20, 2001, we recommended that the President, working closely with the Congress, consolidate some of the activities of DOJ’s OJP under FEMA. We have also recommended that the federal government conduct multidisciplinary and analytically sound threat and risk assessments to define and prioritize requirements and properly focus programs and investments in combating terrorism. Such assessments would be useful in addressing the fragmentation that is evident in the different threat lists of biological agents developed by federal departments and agencies. Understanding which biological agents are considered most likely to be used in an act of domestic terrorism is necessary to focus the investment in new technologies, equipment, training, and planning. Several different agencies have developed or are developing biological agent threat lists, which differ based on the agencies’ focus. For example, CDC collaborated with law enforcement, intelligence, and defense agencies to develop a critical agent list that focuses on the biological agents that would have the greatest impact on public health. The FBI, the National Institute of Justice, and the Technical Support Working Group are completing a report that lists biological agents that may be more likely to be used by a terrorist group working in the United States that is not sponsored by a foreign government. In addition, an official at USDA’s Animal and Plant Health Inspection Service told us that it uses two lists of agents of concern for a potential bioterrorist attack. These lists of agents, only some of which are capable of making both animals and humans sick, were developed through an international process. According to agency officials, separate threat lists are appropriate because of the different focuses of these agencies. In our view, the existence of competing lists makes the assignment of priorities difficult for state and local officials. Fragmentation is also apparent in the composition of groups of federal agencies involved in bioterrorism-related planning and policy.
Officials at the Department of Transportation (DOT) told us that even though the nation’s transportation centers account for a significant percentage of the nation’s potential terrorist targets, the department was not part of the founding group of agencies that worked on bioterrorism issues and has not been included in bioterrorism response plans. DOT officials also told us that the department is supposed to deliver supplies for FEMA under the Federal Response Plan, but it was not brought into the planning early enough to understand the extent of its responsibilities in the transportation process. The department learned what its responsibilities would be during the TOPOFF 2000 exercise, which simulated a release of a biological agent. In May 2001, the President asked the Vice President to oversee the development of a coordinated national effort dealing with WMDs. At the same time, the President asked the Director of FEMA to establish an Office of National Preparedness to implement the results of the Vice President’s effort that relate to programs within federal agencies that address consequence management resulting from the use of WMDs. The purpose of this effort is to better focus policies and ensure that programs and activities are fully coordinated in support of building the needed preparedness and response capabilities. In addition, on September 20, 2001, the President announced the creation of the Office of Homeland Security to lead, oversee, and coordinate a comprehensive national strategy to protect the country from terrorism and respond to any attacks that may occur. These actions represent potentially significant steps toward improved coordination of federal activities. Our recent report highlighted a number of important characteristics and responsibilities necessary for a single focal point, such as the proposed Office of Homeland Security, to improve coordination and accountability. Nonprofit research organizations, congressionally chartered advisory panels, government documents, and articles in peer-reviewed literature have identified concerns about the preparedness of states and local areas to respond to a bioterrorist attack. These concerns include insufficient state and local planning for response to terrorist events, a lack of hospital participation in training on terrorism and emergency response planning, questions regarding the timely availability of medical teams and resources in an emergency, and inadequacies in the public health infrastructure. In our view, there are weaknesses in three key areas of the public health infrastructure: training of health care providers, communication among responsible parties, and capacity of laboratories and hospitals, including the ability to treat mass casualties. Questions exist regarding how effectively federal programs have prepared state and local governments to respond to terrorism. All 50 states and approximately 255 local jurisdictions have received or are scheduled to receive at least some federal assistance, including training and equipment grants, to help them prepare for a terrorist WMD incident. In 1997, FEMA identified planning and equipment for response to nuclear, biological, and chemical incidents as areas in need of significant improvement at the state level. However, an October 2000 research report concluded that even those cities receiving federal aid are still not adequately prepared to respond to a bioterrorist attack. Inadequate training and planning for bioterrorism response by hospitals is a major problem.
The Gilmore Panel concluded that the level of expertise in recognizing and dealing with a terrorist attack involving a biological or chemical agent is problematic in many hospitals. A recent research report concluded that hospitals need to improve their preparedness for mass casualty incidents. Local officials told us that it has been difficult to get hospitals and medical personnel to participate in local training, planning, and exercises to improve their preparedness. Local officials are also concerned about whether the federal government could quickly deliver enough medical teams and resources to help after a biological attack. Agency officials say that federal response teams, such as Disaster Medical Assistance Teams, could be on site within 12 to 24 hours. However, local officials who have deployed with such teams say that the federal assistance probably would not arrive for 24 to 72 hours. Local officials also told us that they were concerned about the time and resources required to prepare and distribute drugs from the National Pharmaceutical Stockpile during an emergency. Partially in response to these concerns, CDC has developed training for state and local officials in using the stockpile and will deploy a small staff with the supplies to assist the local jurisdiction with distribution. Components of the nation’s public health system are also not well prepared to detect or respond to a bioterrorist attack. In particular, weaknesses exist in the key areas of training, communication, and hospital and laboratory capacity. It has been reported that physicians and nurses in emergency rooms and private offices, who will most likely be the first health care workers to see patients following a bioterrorist attack, lack the needed training to ensure their ability to make observations of unusual symptoms and patterns. Most physicians and nurses have never seen cases of certain diseases, such as smallpox or plague, and some biological agents initially produce symptoms that can be easily confused with influenza or other, less virulent illnesses, leading to a delay in diagnosis or identification. Medical laboratory personnel require training because they also lack experience in identifying biological agents such as anthrax. Because it could take days to weeks to identify the pathogen used in a biological attack, good channels of communication among the parties involved in the response are essential to ensure that the response proceeds as rapidly as possible. Physicians will need to report their observations to the infectious disease surveillance system. Once the disease outbreak has been recognized, local health departments will need to collaborate closely with personnel across a variety of agencies to bring in the needed expertise and resources. They will need to obtain the information necessary to conduct epidemiological investigations to establish the likely site and time of exposure, the size and location of the exposed population, and the prospects for secondary transmission. However, past experiences with infectious disease response have revealed a lack of sufficient and secure channels for sharing information. Our report last year on the initial West Nile virus outbreak in New York City found that as the public health investigation grew, lines of communication were often unclear, and efforts to keep everyone informed were awkward, such as conference calls that lasted for hours and involved dozens of people. Adequate laboratory and hospital capacity is also a concern. 
Reductions in public health laboratory staffing and training have affected the ability of state and local authorities to identify biological agents. Even the initial West Nile virus outbreak in 1999, which was relatively small and occurred in an area with one of the nation’s largest local public health agencies, taxed the federal, state, and local laboratory resources. Both the New York State and the CDC laboratories were inundated with requests for tests, and the CDC laboratory handled the bulk of the testing because of the limited capacity at the New York laboratories. Officials indicated that the CDC laboratory would have been unable to respond to another outbreak, had one occurred at the same time. In fiscal year 2000, CDC awarded approximately $11 million to 48 states and four major urban health departments to improve and upgrade their surveillance and epidemiological capabilities. With regard to hospitals, several federal and local officials reported that there is little excess capacity in the health care system in most communities for accepting and treating mass casualty patients. Research reports have concluded that the patient load of a regular influenza season in the late 1990s overtaxed primary care facilities and that emergency rooms in major metropolitan areas are routinely filled and unable to accept patients in need of urgent care. We found that federal departments and agencies are participating in a variety of research and preparedness activities that are important steps in improving our readiness. Although federal departments and agencies have engaged in a number of efforts to coordinate these activities on a formal and informal basis, we found that coordination between departments and agencies is fragmented. In addition, we remain concerned about weaknesses in public health preparedness at the state and local levels, a lack of hospital participation in training on terrorism and emergency response planning, the timely availability of medical teams and resources in an emergency, and, in particular, inadequacies in the public health infrastructure. The latter include weaknesses in the training of health care providers, communication among responsible parties, and capacity of laboratories and hospitals, including the ability to treat mass casualties. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-7118. Barbara Chapman, Robert Copeland, Marcia Crosse, Greg Ferrante, Deborah Miller, and Roseanne Price also made key contributions to this statement. We identified the following federal departments and agencies as having responsibilities related to the public health and medical consequences of a bioterrorist attack:

USDA – U.S. Department of Agriculture
  APHIS – Animal and Plant Health Inspection Service
  ARS – Agricultural Research Service
  FSIS – Food Safety Inspection Service
  OCPM – Office of Crisis Planning and Management
DOC – Department of Commerce
  NIST – National Institute of Standards and Technology
DOD – Department of Defense
  DARPA – Defense Advanced Research Projects Agency
  JTFCS – Joint Task Force for Civil Support
  National Guard
  U.S. Army
DOE – Department of Energy
HHS – Department of Health and Human Services
  AHRQ – Agency for Healthcare Research and Quality
  CDC – Centers for Disease Control and Prevention
  FDA – Food and Drug Administration
  NIH – National Institutes of Health
  OEP – Office of Emergency Preparedness
DOJ – Department of Justice
  FBI – Federal Bureau of Investigation
  OJP – Office of Justice Programs
DOT – Department of Transportation
  USCG – U.S. Coast Guard
Treasury – Department of the Treasury
  USSS – U.S. Secret Service
VA – Department of Veterans Affairs
EPA – Environmental Protection Agency
FEMA – Federal Emergency Management Agency

Figure 1, which is based on the framework given in the Terrorism Incident Annex of the Federal Response Plan, shows a sample of the coordination activities by these federal departments and agencies, as they existed prior to the recent creation of the Office of Homeland Security. This figure illustrates the complex relationships among the many federal departments and agencies involved. The following coordination activities are represented on the figure:

OMB Oversight of Terrorism Funding. The Office of Management and Budget established a reporting system on the budgeting and expenditure of funds to combat terrorism, with goals to reduce overlap and improve coordination as part of the annual budget cycle.

Federal Response Plan – Health and Medical Services Annex. This annex to the Federal Response Plan states that HHS is the primary agency for coordinating federal assistance to supplement state and local resources in response to public health and medical care needs in an emergency, including a bioterrorist attack.

Informal Working Group – Equipment Request Review. This group meets as necessary to review equipment requests of state and local jurisdictions to ensure that duplicative funding is not being given for the same activities.

Agreement on Tracking Diseases in Animals That Can Be Transmitted to Humans. This group is negotiating an agreement to share information and expertise on tracking diseases that can be transmitted from animals to people and could be used in a bioterrorist attack.

National Medical Response Team Caches. These caches form a stockpile of drugs for OEP’s National Medical Response Teams.

Domestic Preparedness Program. This program was formed in response to the National Defense Authorization Act of Fiscal Year 1997 (P.L. 104-201) and required DOD to enhance the capability of federal, state, and local emergency responders regarding terrorist incidents involving WMDs and high-yield explosives. As of October 1, 2000, DOD and DOJ share responsibilities under this program.

Office of National Preparedness – Consequence Management of WMD Attack. In May 2001, the President asked the Director of FEMA to establish this office to coordinate activities of the listed agencies that address consequence management resulting from the use of WMDs.

Food Safety Surveillance Systems. These systems are FoodNet and PulseNet, two surveillance systems for identifying and characterizing contaminated food.

National Disaster Medical System. This system, a partnership between federal agencies, state and local governments, and the private sector, is intended to ensure that resources are available to provide medical services following a disaster that overwhelms the local health care resources.

Collaborative Funding of Smallpox Research. These agencies conduct research on vaccines for smallpox.
National Pharmaceutical Stockpile Program. This program maintains repositories of life-saving pharmaceuticals, antidotes, and medical supplies that can be delivered to the site of a biological (or other) attack.

National Response Teams. The teams constitute a national planning, policy, and coordinating body to provide guidance before and assistance during an incident.

Interagency Group for Equipment Standards. This group develops and maintains a standardized equipment list of essential items for responding to a terrorist WMD attack. (The complete name for this group is the Interagency Board for Equipment Standardization and Interoperability.)

Force Packages Response Team. This is a grouping of military units that are designated to respond to an incident.

Cooperative Work on Rapid Detection of Biological Agents in Animals, Plants, and Food. This cooperative group is developing a system to improve on-site rapid detection of biological agents in animals, plants, and food.

Bioterrorism: Public Health and Medical Preparedness (GAO-02-141T, Oct. 9, 2001).
Bioterrorism: Coordination and Preparedness (GAO-02-129T, Oct. 5, 2001).
Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, Sept. 28, 2001).
Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001).
Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Terrorism Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-666T, May 1, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001).
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001).
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001).
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, Sept. 11, 2000).
Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000).
Chemical and Biological Defense: Observations on Nonmedical Chemical and Biological R&D Programs (GAO/T-NSIAD-00-130, Mar. 22, 2000).
Federal research and preparedness activities related to bioterrorism center on detecting biological agents; developing new or improved vaccines, antibiotics, and antivirals; and developing performance standards for emergency response equipment. Preparedness activities include: (1) increasing federal, state, and local response capabilities; (2) developing response teams; (3) increasing the availability of medical treatments; (4) participating in and sponsoring exercises; (5) aiding victims; and (6) providing support at special events, such as presidential inaugurations and the Olympic Games. To coordinate their activities, federal agencies are developing interagency response plans, participating in various interagency work groups, and entering into formal agreements with each other to share resources and capabilities. However, GAO found that coordination of federal terrorism research, preparedness, and response programs is fragmented, raising concerns about the ability of states and localities to respond to a bioterrorist attack. These concerns include poor state and local planning and the lack of hospital participation in training on terrorism and emergency response planning. This report summarized a September 2001 report (GAO-01-915).
Contractor protective forces—including 2,339 unionized officers and their 376 nonunionized supervisors—are not uniformly managed, organized, staffed, trained, or compensated across the six DOE sites we reviewed. For example, we found the following:

Three different types of protective force contracts are in use: (1) direct contracts between protective force contractors and DOE or NNSA; (2) a component of management and operating (M&O) contracts between M&O contractors and DOE or NNSA; and (3) subcontracts between an M&O contractor and a protective force contractor. These contract types influence how protective force operations are overseen by federal officials and how protective force operations are coordinated with other site operations.

The size of sites’ protective forces ranges from 233 to 533 uniformed, unionized officers, and the composition of these forces and their associated duties and responsibilities vary based on their categorization. Protective forces are divided into four categories:

Security Officer (SO): Responsible for unarmed security duties such as checking for valid security badges. SOs represent about 5 percent of total unionized protective forces.

Security Police Officer-I (SPO-I): Primarily responsible for protecting fixed posts during combat. SPO-Is represent about 34 percent of total unionized protective forces.

SPO-II: Primarily responsible for mobile combat to prevent terrorists from reaching their target but can also be assigned to fixed posts. SPO-IIs represent about 39 percent of total unionized protective forces.

SPO-III: Primarily responsible for mobile combat and special response skills, such as those needed to recapture special nuclear material (SNM) on site and recover SNM off site if terrorists succeed in acquiring it. SPO-IIIs are usually organized into special response teams and represent about 19 percent of total unionized protective forces.

Each protective force has uniformed, nonunionized supervisors, but the duties, responsibilities, and ranks of these supervisors are generally site specific and not detailed in DOE’s protective force policies.

DOE policy mandates certain protective force training but allows sites some flexibility in implementation. For example, newly hired protective forces must complete DOE’s Basic Security Police Officer Training class, but these courses, offered by each of the sites we reviewed, range in length from 9 to 16 weeks. In addition, we found that one site had largely completed the implementation of most aspects of the Tactical Response Force (TRF) initiative, but others did not expect to do so until the end of fiscal year 2011.

Pay, based on the site and the category of protective forces, ranges from nearly $19 per hour to over $26 per hour. Overtime pay, accrued in different ways at the sites, and other premium pay, such as additional pay for night shifts and holidays, may significantly increase protective force pay.

While all employers contributed to active protective force members’ medical, dental, and life insurance benefits, they differed in the amount of their contributions and in the retirement benefits they offered. In general, new hires were offered defined contribution plans, such as a 401(k) plan, that provide eventual retirement benefits that depend on the amount of contributions by the employer or employee, as appropriate, as well as the earnings and losses of the invested funds.
At the time of our review, two sites offered new hires defined benefit plans that promised retirees a certain monthly payment at retirement. Two other sites had defined benefit plans that covered protective force members hired before a particular date but were not open to new hires. We found two primary reasons for these differences. First, protective forces at all six of the sites we reviewed operate under separate contracts and collective bargaining agreements. Second, DOE has a long-standing contracting approach of defining desired results and outcomes—such as effective security—instead of detailed, prescriptive guidance on how to achieve those outcomes. While creating some of the differences noted, this approach, as we have previously reported, allows security to be closely tailored to site- and mission-specific needs. Since its inception in 2005, TRF has raised concerns in DOE security organizations, among protective force contractors, and in protective force unions about the ability of protective forces—especially older individuals serving in protective forces—to continue meeting DOE’s weapons, physical fitness, and medical qualifications. As we reported in 2005, some site security officials recognized they would have to carefully craft career transition plans for protective force officers who may not be able to meet TRF standards. Adding to these concerns are DOE’s broader efforts to manage its long-term postretirement and pension liabilities for its contractors, which could have a negative impact on retirement eligibility and benefits for protective forces. In 2006, DOE issued its Contractor Pension and Medical Benefits Policy (Notice 351.1), which was designed to limit DOE’s long-term pension and postretirement liabilities. A coalition of protective force unions stated that this policy moved them in the opposite direction from their desire for early and enhanced retirement benefits. Concerns over TRF implementation and DOE’s efforts to limit long-term pension and postretirement liabilities contributed to a 44-day protective force strike at the Pantex Plant in 2007. Initially, Pantex contractor security officials designated all of the plant’s protective force positions as having to meet a more demanding DOE combatant standard, a move that could have disqualified a potentially sizable number of protective forces from duty. Under the collective bargaining agreement that was eventually negotiated in 2007, some protective forces were allowed to meet a less demanding combatant standard. DOE has also rescinded its 2006 Contractor Pension and Medical Benefits Policy. However, according to protective force union officials, failure to resolve issues surrounding TRF implementation and retirement benefits could lead to strikes at three sites with large numbers of protective forces—Pantex, the Savannah River Site, and Y-12—when their collective bargaining agreements expire in 2012. To manage its protective forces more effectively and uniformly, over the past decades DOE has considered two principal options—improving elements of the existing contractor system or creating a federal protective force. We identified five major criteria that DOE officials, protective force contractors, and union officials have used to assess the advantages and disadvantages of these options. Overall, in comparing these criteria against the two principal options, we found that neither contractor nor federal forces seems overwhelmingly superior, but each has offsetting advantages and disadvantages. 
Either option could result in effective and more uniform security if well-managed. However, we identified transitional problems with converting the current protective force to a federalized force. When assessing whether to improve the existing contractor system or federalize protective forces, DOE, protective force contractors, and union officials have used the following five criteria:

A personnel system that supports force resizing and ensures high-quality protective force members.

Greater standardization of protective forces across sites to more consistently support high performance and ready transfer of personnel between sites.

Better DOE management and oversight to ensure effective security.

Prevention or better management of protective force strikes.

Containment of the forces’ costs within expected budgets.

Evaluating the two principal options—maintaining the current security force structure or federalizing the security force—against these criteria, we found that if the forces are well-managed, either contractor or federal forces could result in effective and more uniform security for several reasons. First, both options have offsetting advantages and disadvantages, with neither option emerging as clearly superior. When compared with a possible federalized protective force, a perceived advantage of a contractor force is greater flexibility for hiring or terminating an employee to resize the forces; a disadvantage is that a contractor force can strike. In contrast, federalization could better allow protective forces to advance or laterally transfer to other DOE sites to meet protective force members’ needs or DOE’s need to resize particular forces, something that is difficult to do under the current contractor system. Second, a key disadvantage of the current contractor system, the potential for strikes, does not preclude effective operations if the security force is well-managed. For instance, a 2009 memo signed by the NNSA administrator stated that NNSA had demonstrated that it can effectively manage strikes through the use of replacement protective forces. Third, distinctions between the two options can be overstated by comparing worst- and best-case scenarios, when similar conditions might be realized under either option. For example, a union coalition advocates federalization to get early and enhanced retirement benefits, which are available for law enforcement officers and some other federal positions, to ensure a young and vigorous workforce. However, such benefits might also be provided to contractor protective forces. Reliably estimating the costs to compare protective force options proved difficult and precluded our detailed reporting on it. Since contractor and federal forces could each have many possible permutations, choosing any particular option to assess would be arbitrary. For example, a 2008 NNSA-sponsored study identified wide-ranging federalization options, such as federalizing all or some SPO positions at some or all facilities or reorganizing them under an existing or a new agency. In addition, DOE would have to decide on the hypothetical options’ key cost factors before it could reasonably compare costs.
For example, when asked about some key cost factors for federalization, an NNSA Service Center official said that a detailed workforce analysis would be needed to decide whether DOE would either continue to use the same number of SPOs with high amounts of scheduled overtime or hire a larger number of SPOs who would work fewer overtime hours. Also, the official said that until management directs a particular work schedule for federalized protective forces, there is no definitive answer to the applicable overtime rules, such as whether overtime begins after 8 hours in a day. The amount of overtime and the factors affecting it are crucial to a sound cost estimate because overtime pay can now account for up to about 50 percent of pay for worked hours (a rough illustrative calculation appears at the end of this protective forces discussion). If protective forces were to be federalized under existing law, the current forces probably would not be eligible for early and enhanced retirement benefits and might face a loss of pay or even their jobs. For example:

According to officials at the Office of Personnel Management (OPM) and NNSA’s Service Center, if contractor SPOs were federalized under existing law, they would likely be placed into the federal security guard (GS-0085) job series. Although a coalition of unions has sought federalization to allow members to have early and enhanced retirement benefits, which allow employees in certain federal jobs to retire at age 50 with 20 years of service, federal security guards are not eligible for these benefits.

Our analysis indicated that transitioning protective force members may receive lower pay rates as federal security guards. Contractor force members receive top pay rates that could not generally be matched under the likely General Schedule pay grades.

If protective forces were federalized, OPM officials told us that current members would not be guaranteed a federal job and would have to compete for the new federal positions; thus, they risk not being hired. Nonveteran protective force members are particularly at risk because competition for federal security guard positions is restricted to those with veterans’ preference, if they are available.

According to OPM officials, legislation would be required to provide federal protective forces with early and enhanced retirement benefits because their positions do not fit the current definition of law enforcement officers that would trigger such benefits. However, if such legislation were enacted, these benefits’ usual provisions could create hiring and retirement difficulties for older force members. Older members might not be rehired because agencies are typically authorized to set a maximum age, often age 37, for entry into federal positions with early retirement. In addition, even if there were a waiver from the maximum age of hire, older protective force members could not retire at age 50 because they would have had to work 20 years to meet the federal service requirement for “early” retirement benefits. These forces could retire earlier if they were granted credit for their prior years of service under DOE and NNSA contracts. However, OPM officials told us OPM would strongly oppose federal retirement benefits being granted for previous years of contractor service (retroactive benefits). According to these officials, these retroactive benefits would be without precedent and would violate the basic concept that service credit for retirement benefits is only available for eligible employment at the time it was performed.
Moreover, retroactive benefits would create an unfunded liability for federal retirement funds. In a joint January 2009 memorandum, senior officials from NNSA and DOE rejected the federalization of protective forces as an option and supported the continued use of contracted protective forces—but with improvements. They concluded that, among other things, the transition to a federal force would be costly and would be likely to provide little, if any, increase in security effectiveness. However, these officials recognized that the current contractor system could be improved by addressing some of the issues that federalization might have resolved. In particular, they announced the pursuit of an initiative to better standardize protective forces’ training and equipment. According to these officials, more standardization serves to increase effectiveness, provide cost savings, and facilitate better responses to potential work stoppages. In addition, in March 2009, DOE commissioned a study group to recommend ways to overcome the personnel system problems that might prevent protective force members from working to a normal retirement age, such as 60 to 65, and building reasonable retirement benefits. NNSA also established a Security Commodity Team to standardize procurement processes and to identify and test security equipment that can be used across sites. According to NNSA officials, NNSA established a common mechanism in December 2009 for sites to procure ammunition. Further, to move toward more standardized operations and a more centrally managed protective force program, NNSA started a broad security review to identify possible improvements. As a result, according to NNSA officials in January 2010, NNSA has developed a draft standard for protective force operations, which is intended to clarify policy expectations and set out a consistent security approach that is both effective and efficient. For the personnel system initiative to enhance career longevity and retirement options, in June 2009, the DOE-chartered study group made 29 recommendations that were generally designed to enable members to reach a normal retirement age within the protective force, take another job within DOE, or transition to a non-DOE career. The study group identified 14 of its 29 career and retirement recommendations as involving low- or no-cost actions that could conceivably be implemented quickly. For example, some recommendations call for reviews to find ways to maximize the number of armed and unarmed positions that SPOs can fill when they can no longer meet their current combatant requirements. Other recommendations focus on providing training and planning assistance for retirement and job transitions. The study group also recognized that a majority (15 out of 29) of its personnel system recommendations, such as enhancing retirement plans to make them more equivalent and portable across sites, may be difficult to implement, largely because of budget constraints. Progress on the 29 recommendations had been limited at the time of our review. When senior department officials were briefed on the personnel system recommendations in late June 2009, they took them under consideration for further action but immediately approved one recommendation—to extend the life of the study group by forming a standing committee.
They directed the standing committee to develop implementation strategies for actions that can be done in the near term and, for recommendations requiring further analysis, additional funding, or other significant actions, to serve as an advisory panel for senior department officials. According to a DOE official in early December 2009, NNSA and DOE were in varying stages of reviews to advance the other 28 recommendations. Later that month, NNSA addressed an aspect of one recommendation about standardization, in part by formally standardizing protective force uniforms. In the Conference Report for the fiscal year 2010 National Defense Authorization Act, the conferees directed the Secretary of Energy and the Administrator of the National Nuclear Security Administration to develop a comprehensive DOE-wide plan to identify and implement the recommendations of the study group. In closing, while making changes to reflect the post-9/11 security environment, DOE and its protective force contractors through their collective bargaining agreements have not successfully aligned protective force personnel systems—which affect career longevity, job transitions, and retirement—with the increased physical and other demands of a more paramilitary operation. Without better alignment, in our opinion, there is greater potential for a strike at a site, as well as potential risk to site security, when protective forces’ collective bargaining agreements expire. In the event of a strike at one site, the differences in protective forces’ training and equipment make it difficult to readily provide reinforcements from other sites. Even if strikes are avoided, the effectiveness of protective forces may be reduced if tensions exist between labor and management. These concerns have elevated the importance of finding the most effective approach to maintaining protective force readiness, including an approach that better aligns personnel systems and protective force requirements. At the same time, DOE must consider its options for managing protective forces in a period of budgetary constraints. With these considerations in mind, DOE and NNSA have recognized that the decentralized management of protective forces creates some inefficiencies and that some systemic career and longevity issues are not being resolved through actions at individual sites. NNSA’s standardization initiatives and recommendations made by a DOE study group offer a step forward. However, the possibility in 2012 of strikes at three of its highest risk sites makes it imperative, as recommended by our report and directed by the fiscal year 2010 National Defense Authorization Act, that DOE soon resolve the issues surrounding protective forces’ personnel system. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee have.
The September 11, 2001, terrorist attacks raised concerns about the security of Department of Energy (DOE) sites with weapons-grade nuclear material, known as Category I special nuclear material (SNM). To better protect these sites against attacks, DOE has sought to transform its protective forces protecting SNM into a Tactical Response Force (TRF) with training and capabilities similar to those of the U.S. military. This testimony is based on prior work and has been updated with additional information provided by protective forces' union officials. In a prior GAO report, Nuclear Security: DOE Needs to Address Protective Forces' Personnel System Issues (GAO-10-275), GAO (1) analyzed information on the management, organization, staffing, training, and compensation of protective forces at DOE sites with Category I SNM; (2) examined the implementation of TRF; (3) assessed DOE's two options to more uniformly manage protective forces; and (4) reported on DOE's progress in addressing protective force issues. DOE generally agreed with the recommendations in GAO's prior report that called for the agency to fully assess and implement, where feasible, measures identified by DOE's 2009 protective forces study group to enhance protective forces' career longevity and retirement options. Over 2,300 contractor protective force members provide armed security for DOE and the National Nuclear Security Administration (NNSA) at six sites that have long-term missions to store and process Category I SNM. DOE protective forces at each of these sites are covered under separate contracts and collective bargaining agreements between contractors and protective force unions. As a result, the management, organization, staffing, training, and compensation (in terms of pay and benefits) of protective forces vary. Protective force contractors, unions, and DOE security officials are concerned that the implementation of TRF's more rigorous requirements and the current protective forces' personnel systems threaten the ability of protective forces, especially older members, to continue their careers until retirement age. These concerns, heightened by broader DOE efforts to manage postretirement and pension liabilities for its contractors that might have a negative impact on retirement eligibility and benefits for protective forces, contributed to a 44-day protective force strike at an important NNSA site in 2007. According to protective force union officials, the issues surrounding TRF implementation and retirement benefits are still unresolved and could lead to strikes at three sites with large numbers of protective forces when their collective bargaining agreements expire in 2012. Efforts to more uniformly manage protective forces have focused on either reforming the current contracting approach or creating a federal protective force (federalization). Either approach might provide for managing protective forces more uniformly and could result in effective security if well-managed. However, if protective forces were to be federalized under existing law, the current forces probably would not be eligible for enhanced retirement benefits and might face a loss of pay or even their jobs. Although DOE rejected federalization as an option in 2009, it recognized that the current contracting approach could be improved by greater standardization and by addressing personnel system issues. As a result, NNSA began a standardization initiative to centralize procurement of equipment, uniforms, and weapons to achieve cost savings.
Under a separate initiative, a DOE study group developed a number of recommendations to enhance protective forces' career longevity and retirement options, but DOE has made limited progress to date in implementing these recommendations.
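As a rough illustration of the overtime sensitivity noted in the protective forces discussion above, the sketch below compares two hypothetical staffing approaches for a single site: a smaller force working heavy scheduled overtime versus a larger force working little overtime. Every parameter is an assumption chosen for illustration rather than a DOE or NNSA figure: the force sizes, the $23 hourly rate (picked from within the roughly $19 to $26 range cited earlier), the 1.5x overtime multiplier, and the per-officer benefits costs.

```python
# Hypothetical comparison of two SPO staffing approaches at one site,
# illustrating why cost estimates hinge on assumed overtime and benefit factors.
# All parameters are illustrative assumptions, not DOE or NNSA data.

def annual_cost(officers, base_rate, regular_hours, overtime_hours,
                benefits_per_officer, overtime_multiplier=1.5):
    """Rough annual labor cost: wages plus a fixed per-officer benefits cost."""
    wages = officers * base_rate * (
        regular_hours + overtime_multiplier * overtime_hours)
    return wages + officers * benefits_per_officer

BASE_RATE = 23.0  # assumed hourly rate, within the ~$19-$26 range cited earlier

for benefits in (25_000, 60_000):  # low and high per-officer benefit assumptions
    # Option A: fewer officers working heavy scheduled overtime.
    a = annual_cost(officers=300, base_rate=BASE_RATE,
                    regular_hours=2080, overtime_hours=1200,
                    benefits_per_officer=benefits)
    # Option B: more officers working modest overtime.
    b = annual_cost(officers=400, base_rate=BASE_RATE,
                    regular_hours=2080, overtime_hours=150,
                    benefits_per_officer=benefits)
    print(f"benefits ${benefits:,}/officer: A=${a/1e6:.1f}M  B=${b/1e6:.1f}M")
```

Under the low benefits assumption the overtime-heavy option costs more; under the high one the ranking flips. This is exactly why, as the NNSA official noted, the cost of federalization options cannot be reasonably compared until such factors are settled.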
A cyber incident can occur under many circumstances and for many reasons. It can be inadvertent, such as the loss of an electronic device, or deliberate, such as the theft of a device or a cyber-based attack by a malicious individual or group, agency insiders, a foreign nation, a terrorist, or another adversary. Incidents have been reported at a wide range of public- and private-sector institutions, including federal, state, and local government agencies; educational institutions; hospitals and medical facilities; financial institutions; information resellers; retailers; and other types of businesses. Protecting federal systems and the information on them is essential because the loss or unauthorized disclosure or alteration of the information can lead to serious consequences and can result in substantial harm to individuals and the federal government. Specifically, ineffective protection of IT systems and information can result in threats to national security, economic well-being, and public health and safety; loss or theft of resources, including money and intellectual property; inappropriate access to and disclosure, modification, or destruction of sensitive information; use of computer resources for unauthorized purposes or to launch an attack on other computer systems; damage to networks and equipment; loss of public confidence; and high costs for remediation. While some cyber incidents can be resolved quickly and at minimal cost, others may go unresolved and incur exorbitant costs. Reported attacks and unintentional incidents involving federal systems, such as those involving data loss or theft, computer intrusions, and privacy breaches, underscore the importance of having strong security practices in place. In fiscal year 2013, US-CERT received notifications of 46,160 cyber incidents at all agencies and 43,931 incidents at the 24 major agencies. Cyber incidents reported by federal agencies increased significantly in fiscal year 2013 over the prior 3 years (see fig. 1), rising almost 33 percent over the last 2 fiscal years. The following examples reported in 2013 illustrate that information and assets remain at risk. July 2013: Hackers stole a variety of personally identifiable information on more than 104,000 individuals from a Department of Energy system. Types of data stolen included Social Security numbers, birth dates and locations, bank account numbers, and security questions and answers. According to the department’s Inspector General, the combined costs of assisting affected individuals and lost productivity—due to federal employees being granted administrative leave to correct issues stemming from the breach—could be more than $3.7 million. June 2013: Edward Snowden, an employee of a contractor of the National Security Agency, disclosed classified documents through the media. In January 2014, the Director of National Intelligence testified, in his annual worldwide threat assessment, that insider threats will continue to pose a persistent challenge, as trusted insiders with the intent to do harm can exploit their access to compromise vast amounts of sensitive and classified information as part of a personal ideology or at the direction of a foreign government. June 2013: The Office of the Inspector General at the Department of Commerce reported that the department’s Economic Development Administration inaccurately identified a common malware infection as a sophisticated cyber attack by another country.
To remedy the situation, according to the Office of Inspector General, the Economic Development Administration spent more than $2.7 million—more than half its fiscal year 2012 IT budget—on unnecessary incident response activities and destroyed more than $170,000 worth of IT components that officials incorrectly thought to be irrecoverably infected. The Office of Inspector General reported that a failure to adhere to the department's incident handling procedures, a lack of experienced and qualified incident handlers, and a failure to coordinate incident handling activities all contributed to the mishandling of the incident.

January 2013: A Romanian national was indicted in U.S. District Court for the Southern District of New York for allegedly running a "bulletproof hosting" service that enabled cyber criminals to distribute malicious software (malware) and conduct other sophisticated cybercrimes. Malware distributed by this hosting service had infected more than 1 million computers worldwide, including computers belonging to the National Aeronautics and Space Administration (NASA), causing tens of millions of dollars in losses to the affected individuals, businesses, and government entities. NASA's Office of Inspector General and the Federal Bureau of Investigation are investigating this incident.

FISMA sets up a layered framework for managing cyber risks and assigns specific responsibilities to (1) OMB, including to develop and oversee the implementation of policies, principles, standards, and guidelines for information security; to report, at least annually, on agency compliance with the act; and to approve or disapprove agency information security programs; (2) agency heads, including to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency; (3) agency heads and chief information officers, including to develop, document, and implement an agencywide information security program; (4) inspectors general, to conduct annual independent evaluations of agency efforts to effectively implement information security; and (5) NIST, to provide standards and guidance to agencies on information security.

Organized, planned cyber incident response activities are essential in defending an information system and the information that resides on it from an accidental or malicious cyber incident. In addition, FISMA requires the establishment of a federal information security incident center to, among other things, provide timely technical assistance to agencies regarding cyber incidents. Each federal agency must also report annually to OMB, selected congressional committees, and the Comptroller General on the adequacy of its information security policies, procedures, practices, and compliance with requirements. In 2010, OMB transferred the operational aspects of its FISMA-mandated responsibilities for overseeing and assisting the cybersecurity efforts of federal agencies to DHS.
Specifically, according to OMB, DHS activities are to include, but are not limited to: overseeing agencies' cybersecurity operations and incident response and providing appropriate assistance; overseeing the governmentwide and agency-specific implementation of and reporting on cybersecurity policies and guidance; overseeing and assisting governmentwide and agency-specific efforts to provide adequate, risk-based, and cost-effective cybersecurity; overseeing agencies' compliance with FISMA and developing analyses for OMB to assist in the development of the FISMA annual report; and annually reviewing agencies' cybersecurity programs. Under presidential directive, DHS is also responsible for assisting public- and private-sector critical infrastructure owners and operators in preparing for, preventing, protecting against, mitigating, responding to, and recovering from a cyber incident.

NIST has responsibility for developing standards and guidelines for securing the information systems used or operated by a federal agency or a contractor on behalf of an agency. NIST has issued three special publications (SP) that provide guidance to agencies for detecting and handling cyber incidents. NIST SP 800-61 specifies procedures for implementing FISMA incident handling requirements and includes guidelines on establishing an effective incident response program and on detecting, analyzing, prioritizing, and handling an incident. The specific steps outlined for a formal, focused, and coordinated response to a cyber incident include a plan that should be tailored to meet the unique requirements of the agency and lay out the necessary resources and management support.

The incident response process that NIST outlines has four phases: preparation; detection and analysis; containment, eradication, and recovery; and post-incident activity. In preparing to respond to incidents, agencies should (1) develop and document policies, plans, and procedures for appropriate incident handling guidance; (2) create and train an incident response team; (3) acquire the necessary tools and resources, such as those needed for analyzing incidents; and (4) periodically test their response capability to ensure it is working as intended. Upon detection of an incident, analysis is needed to determine the incident's scope, such as affected systems, and its potential impact to agency operations. These factors assist agencies in prioritizing response activities. In keeping with the severity of the incident, the agency can mitigate the impact of the incident by containing it and ultimately recovering from it. During this phase, activity often cycles back to detection and analysis—for example, to see if additional hosts have been infected by malware while eradicating a malware incident. After the incident has been managed, the agency may issue a report that details the cause and costs and the steps it should take to prevent a future incident. Policies, plans, procedures, and testing and training practices may require updates as lessons are learned throughout the various phases of response.

In addition, NIST SP 800-53 identifies specific incident response control activities that parallel those in NIST SP 800-61 and that agencies should address in order to effectively respond to a cyber incident.
These controls include, among others, (1) monitoring incident-handling activities (e.g., tracking and documenting incidents), (2) developing incident response policies and plans, (3) developing incident response procedures, (4) testing an agency's incident response capability, and (5) training incident responders. NIST also provides guidelines on preventing and handling malware incidents so that agencies can respond to such an incident in an effective and efficient manner. Malware refers to a program that is inserted into a system, usually covertly, with the intent of compromising the confidentiality, integrity, or availability of the victim's data, applications, or operating system or of otherwise annoying or disrupting the victim's system.

Established in 2003, US-CERT is the federal information security incident center mandated by FISMA. US-CERT consults with agencies on cyber incidents, provides technical information about threats and incidents, compiles the information, and publishes it on its website, https://www.us-cert.gov/. In addition, US-CERT defines seven categories of incidents for federal agencies to use in reporting an incident. Agencies are required to report incidents to US-CERT within specified time frames, such as within an hour, weekly, or monthly, depending on the category of the incident. The categories and their time frames for reporting are listed in table 1.

Based on our statistical sample of cyber incidents reported in fiscal year 2012, we estimate that the 24 agencies did not effectively or consistently demonstrate actions taken in response to a detected incident in about 65 percent of reported incidents. Agencies frequently documented their incident response actions for containing and eradicating incidents, but did not consistently demonstrate how they had handled incident response activities for the analysis, recovery, and post-incident phases. Further, although the 6 selected agencies we reviewed had developed policies, plans, and procedures to guide their incident response activities, such efforts were not comprehensive or consistent with federal requirements.

NIST specifies that agencies should document incident response activities, including analysis, containment, eradication, and recovery, as well as post-incident activities. While agencies documented some required actions, they did not effectively demonstrate others. NIST SP 800-61 specifies that an initial analysis be performed to determine the type, nature, and scope of an incident, such as which networks, systems, or applications have been affected; who or what originated the incident; and what is taking place regarding the incident (e.g., what tools or attack methods are being used, what vulnerabilities are being exploited). According to NIST SP 800-61, agencies are to consider impact when prioritizing incident response activities, such as the functional impact of the incident—the current and likely future negative impact to business functions. Resource limitations at agencies are one factor emphasizing the need for them to prioritize their incident response activities. Further, by prioritizing the handling of incidents, agencies could identify situations of greater severity that demand immediate attention. The initial analysis of an incident should identify enough information for the team to prioritize subsequent activities, such as containment of the incident and a deeper analysis of its effects.
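To illustrate how the analysis and prioritization steps described above might be recorded, the following is a minimal sketch in Python. The category names, reporting time frames, and priority thresholds are hypothetical assumptions for this example (the actual US-CERT categories and time frames appear in table 1, which is not reproduced here); only the general pattern, capturing scope and functional impact during initial analysis and using them to drive handling priority and reporting, comes from the guidance above.

```python
from dataclasses import dataclass, field
from enum import Enum

class FunctionalImpact(Enum):
    """Illustrative impact levels; NIST SP 800-61 suggests rating the
    current and likely future negative impact to business functions."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

# Hypothetical reporting deadlines keyed by incident category; see table 1
# of the report for the actual US-CERT categories and time frames.
REPORTING_DEADLINES = {
    "unauthorized_access": "within 1 hour",
    "malicious_code": "daily",
    "improper_usage": "weekly",
    "scans_probes": "monthly",
}

@dataclass
class Incident:
    category: str
    affected_systems: list = field(default_factory=list)  # scope of the incident
    impact: FunctionalImpact = FunctionalImpact.LOW

    def priority(self) -> str:
        """Derive a handling priority from functional impact and scope.
        The thresholds here are assumptions for illustration only."""
        if self.impact is FunctionalImpact.HIGH or len(self.affected_systems) > 10:
            return "critical"
        if self.impact is FunctionalImpact.MODERATE:
            return "high"
        return "routine"

    def reporting_deadline(self) -> str:
        return REPORTING_DEADLINES.get(self.category, "consult US-CERT guidance")

# Example: a malware infection that has spread to two hosts.
incident = Incident("malicious_code", ["host-01", "host-02"], FunctionalImpact.MODERATE)
print(incident.priority())            # -> "high"
print(incident.reporting_deadline())  # -> "daily"
```

The point of the sketch is that scope and functional impact, captured during initial analysis, are sufficient to determine both the handling priority and the reporting deadline for an incident.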
Agencies determined and documented the scope of an incident—a key part of the analysis—for about 91 percent of incidents governmentwide. The examples below illustrate both effective and ineffective scoping practices:

In a malware incident, the affected agency determined that after infecting a computer with malware, an attacker compromised the computer's local administrator account and used those credentials to successfully access another agency computer, which incident handlers then contained and remediated.

In another incident, an agency received a report from US-CERT indicating that login credentials at two of the agency's components may have been compromised. When contacting the affected components, agency incident handlers mistyped the potentially compromised credentials for one component, did not respond to an e-mail from the component requesting clarification, and failed to follow up with the second component when it did not respond to the initial alert. Despite these errors, the incident handlers closed the incident without taking further action.

In addition, most agencies did not consistently consider the potential impact of incidents. Although the variance in our statistical sample was too great for us to project a percentage, 2 of the 6 selected agencies demonstrated that they had considered impact; the other 4 did not. In addition, 11 of the 24 agencies responding to our survey reported that they did not categorize the functional impact (e.g., low, moderate, and high) of incidents to their agency. Agencies risk ineffective and more costly incident response if they do not account for an incident's impact.

NIST SP 800-61 states that an agency can minimize the impact of an incident by containing it, and emphasizes the importance of containing an incident before it overwhelms resources or increases damage. Containment strategies vary according to the type of incident. For example, an incident involving a lost mobile device could involve sending the device commands that delete its data and permanently disable it, and then cancelling its access to mobile phone networks. A malware incident could be contained by physically or logically quarantining infected computers, preventing the malware from spreading over the network or communicating with the attacker who initially placed it.

Our sample indicates that agencies demonstrated that they had contained the majority of their cyber incidents. Specifically, our analysis shows that agencies had recorded actions to halt the spread of, or otherwise limit, the damage caused by an incident in about 75 percent of incidents governmentwide. However, agencies did not demonstrate such actions for about 25 percent of incidents governmentwide. For example:

In an incident involving a lost iPhone, the device's mobile service was disabled before a "kill" command could be sent to the device, meaning incident handlers were unable to remotely delete e-mails and other data in its memory, potentially leaving the data exposed to anyone who found the device.

In a malware incident, sensors on an agency's network recorded an agency computer contacting an external domain known to host malicious files and downloading a suspicious file. Incident handlers closed the ticket without recording any actions taken to contain or otherwise remediate the potential malware infection.
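As a rough illustration of how type-specific containment strategies can be codified, the sketch below maps incident types to ordered playbook steps. The incident types and steps are hypothetical examples, not drawn from any agency's actual procedures; a real playbook would integrate with device management, firewall, and ticketing systems rather than printing steps. Note that ordering matters: in the lost-iPhone example above, disabling mobile service before sending the wipe command made effective containment impossible.

```python
# A minimal sketch, assuming hypothetical incident types and action names.
CONTAINMENT_PLAYBOOKS = {
    "lost_device": [
        "send remote wipe ('kill') command before disabling mobile service",
        "confirm the wipe succeeded, then cancel network access",
        "record each action in the incident ticket",
    ],
    "malware": [
        "quarantine infected hosts (physically or via network isolation)",
        "block known command-and-control domains at the perimeter",
        "record each action in the incident ticket",
    ],
}

def contain(incident_type: str) -> list:
    """Return the ordered containment steps for a given incident type."""
    steps = CONTAINMENT_PLAYBOOKS.get(incident_type)
    if steps is None:
        raise ValueError(f"no containment playbook for {incident_type!r}")
    return steps

for step in contain("lost_device"):
    print(step)
```

Encoding the steps, including the "record each action" step, directly addresses the documentation gaps described above, since a handler following the playbook cannot complete it without updating the ticket.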
Although agencies demonstrated that they had contained most of the incidents, those that were not effectively contained could increase the risk of the incident spreading and causing greater damage to their operating environments.

According to NIST SP 800-61, after an incident has been contained, eradication may be necessary to eliminate components of the incident, such as deleting malware and disabling breached user accounts, and to identify and mitigate all vulnerabilities that were exploited. During eradication, it is important to identify all affected hosts within the agency so that they can be remediated. For some incidents, eradication is either not necessary or is performed during recovery. For example, after a lost mobile device has been remotely disabled, had its data deleted, and had its network connectivity severed, incident handlers cannot take further actions regarding that device. In the case of a minor malware incident, the malware could be removed when the infected host has been removed from service or has had its hard drive wiped and its operating system and applications reinstalled.

Our sample indicates that agencies demonstrated that they completed their eradication steps for the majority of cyber incidents. Specifically, our analysis shows that for about 77 percent of incidents governmentwide, the agencies had identified and eliminated the remaining elements of the incident. However, agencies did not demonstrate that they had effectively eradicated about 23 percent of incidents. For example:

In a malware incident, incident handlers noted that they had requested the creation of network blocks to isolate the infected computer and the collection of its hard drive for analysis, but the ticket had not been updated to indicate whether the incident handlers had performed the requested actions or any subsequent actions.

After an administrative password was exposed to one facility's user population, incident handlers removed the password from the location where it had been posted, but did not indicate that they had changed the password to prevent users who had already seen it from using it.

Although agencies demonstrated that they had eradicated most of the incidents, those that were not effectively eradicated could increase the risk that components of an incident might remain in the operating environment and cause damage.

According to NIST SP 800-61, in recovering from an incident, system administrators restore systems to normal operation, confirm that the systems are functioning normally, and (if applicable) remediate vulnerabilities to prevent a similar incident. Recovery may involve actions such as restoring systems from clean backups, rebuilding systems from scratch, and replacing compromised files with clean versions. NIST states that, during recovery, the agency should remediate vulnerabilities to prevent a similar incident from reoccurring; this could include, but is not limited to, installing patches, changing passwords, tightening network perimeter security, educating users, adding or enhancing security controls, and changing system configurations.

Agencies generally demonstrated the steps they took in restoring systems to normal operations. Specifically, our analysis shows that agencies returned their systems to an operationally ready state for about 81 percent of incidents governmentwide. However, they had not consistently documented whether they had taken remedial actions to prevent an incident from reoccurring.
Specifically, agencies did not demonstrate that they had acted to prevent an incident from reoccurring in about 49 percent of incidents governmentwide. For example:

In a malware incident, incident handlers determined that a laptop belonging to an agency employee on travel was infected with malware that was targeting other agency employees. While incident handlers contained the incident by quarantining the machine and blocking the remote sites it was communicating with, they noted that further actions could not be taken until the user had returned from travel. Incident handlers did not document what action, if any, they took when the employee returned.

In an incident involving the leak of personally identifiable information, the information of seven agency employees was posted on a third-party website. The data included names, addresses, phone numbers, partial credit card information, mothers' names, e-mail addresses, and passwords. However, the agency did not document actions it took to determine how the leak had occurred or how to prevent similar leaks from reoccurring. Incident handlers sent e-mails to the responsible component 31 times over a period exceeding 4 months, requesting status updates and confirmation that the component had taken remedial actions, before the incident was eventually closed in the department's tracking system.

If incident recovery steps are not completed, agencies cannot be assured that they have taken all steps necessary to reduce the risk of similar incidents reoccurring and to ensure that their systems will operate optimally.

In its incident response guide, NIST states that certain post-incident data can be used to improve the handling of future incidents. Lessons learned and reports from post-incident meetings can be used to update policies and procedures, such as when post-incident analysis reveals a missing step or an inaccuracy in a procedure. Data such as the total hours of involvement and the cost may be used to justify additional funding of the incident response team. After handling an incident, an agency should also issue a report that details the cost of the incident, among other information.

Agencies generally updated policies or procedures but did not consistently capture the costs of responding to an incident. Officials at 19 of the 24 agencies surveyed reported that their agency had amended policies or procedures as the result of a cyber incident. However, collection of cost data by agencies varied. Specifically, such information was recorded by only 1 of the 6 selected agencies we reviewed. In addition, 12 of the 24 agencies surveyed reported that they had captured the costs of responding to an incident. Without this information, agencies may be unaware of the costs of responding to an incident and lack the information necessary for improving their response in a cost-effective manner.

NIST states that, to facilitate effective and efficient incident response, agencies should develop corresponding policies, plans, procedures, and practices. However, the selected agencies' policies, plans, and procedures did not always include key information. NIST SP 800-61 states that policies are necessary for the effective implementation of a response to a cyber incident. Policies should identify the roles, responsibilities, and levels of authority for those implementing incident response activities.
In addition, policies should address the prioritization of incidents, an activity that NIST deems a critical decision point in the process of handling an incident; handling should be prioritized based on factors such as the incident's impact on the organization. Agencies' policies should also address performance measures, which can help evaluate the effectiveness of the incident response. As shown in table 2, the six selected agencies' policies did not always address each of three key elements defined by NIST.

Roles, responsibilities, and levels of authority. Policies for two of the six selected agencies addressed roles, responsibilities, and levels of authority for incident response. Specifically, DOT's cybersecurity policy tasked its Computer Security Incident Response Center with responsibility for implementing and monitoring incident handling for the agency and assigned roles for leading components' incident response planning to individual coordinators. Similarly, NASA's information security handbook specified the authorities of the incident response manager, who may, for example, decide to eradicate an incident without shutting down the system. Policies for DOE, DOJ, HUD, and VA partially defined the roles, responsibilities, and levels of authority for responding to cyber incidents. For example, while DOJ's policy defines roles and responsibilities, it does not state who has authority to confiscate equipment or describe when an incident should be escalated. In addition, VA's policies defined roles and responsibilities but did not include authorities for the incident response team. HUD's policy addressed roles, responsibilities, and levels of authority, but the policy was still in draft at the time of our review. If levels of authority are not clearly defined, agencies risk ineffective incident response, since personnel may be unsure of their responsibilities in responding to an incident.

Prioritize severity ratings of incidents. Policies for two of the six selected agencies fully addressed the prioritization of incidents. For example, NASA's handbook specified that, as part of prioritizing the handling of an incident, the following should be considered: the incident's categorization, information sensitivity, the system's categorization, and the impact to the system or mission. Conversely, policies for DOE, DOT, and HUD did not address the prioritizing of incidents, and DOJ's policy partially addressed it, covering the prioritizing of incidents affecting classified systems but not unclassified systems. Agencies risk an ineffective response if they do not consider an incident's impact, since incidents having the most effect on an agency or its mission may not be addressed in a timely manner.

Establish performance measures. One of the six selected agencies addressed the establishment of performance measures. DOJ listed several objectives for measuring incident response, such as limiting an incident's duration, minimizing impact to the department's operations, and requiring annual tests of the department's incident response capability. Policies for DOE, DOT, HUD, NASA, and VA did not address any measures of performance. Without such measures, agencies may lack the information needed to evaluate the effectiveness of their incident response.

NIST SP 800-61 states that incident response plans should be developed to provide guidance for implementing incident response practices based on the agency's policies.
Further, NIST states the plan should be approved by senior management to indicate their support for the plan. The plan should also include and define metrics for measuring and evaluating the effectiveness of incident response; according to NIST, one such metric would be "the total amount of labor spent working on the incident." Without such metrics, agencies will have difficulty measuring and determining whether their incident response is effective.

FISMA requires agencies to develop procedures for responding to an incident. NIST SP 800-61 also states that, in addition to being based on incident response policies, such procedures should provide detailed steps for responding to an incident and cover all phases of the incident response process. According to NIST, following the standardized responses listed in procedures should minimize errors resulting from stressful incident handling situations. NIST lists several types of incident response procedures that agencies should develop. These include procedures for containing an incident, which detail how incident handlers should contain specific types of incidents in a manner that meets the agency's definition of acceptable risk, and procedures for prioritizing incident handling, which allow incident handlers to more quickly determine how best to apply their resources based on risk. As shown in table 4, the selected agencies did not always develop procedures for responding to incidents, as NIST suggests.

Procedures for containing incidents. Five of the six selected agencies developed procedures for containing incidents. For example, DOJ developed procedures for handling e-mails with malicious content and procedures for blocking potentially malicious IP addresses. Similarly, DOT's incident response group's standard operating procedures identify steps for handling key-logging software, which can record keystrokes and capture sensitive information such as usernames and passwords. However, DOE's procedures only partially addressed the containment of incidents: while the department itself had not developed such procedures, two DOE components had. Without procedures for containing incidents, incident response personnel may not have the instructions necessary to prevent incidents from negatively affecting other parts of their operating environment.

Procedures for prioritizing incidents. Two of the six selected agencies developed and documented procedures for prioritizing the handling of incidents. NASA listed eight factors for determining the priority of handling an incident. Each factor is assigned a rating, and the ratings are added together to produce a total that is then mapped to a priority ranging from low to critical. In addition, VA developed procedures for prioritizing incidents in which a matrix maps the type of incident to a predefined handling priority, such as critical, high, medium, or low. Procedures for HUD and DOE partially addressed this activity, since their procedures did not specify whether risk or impact would determine incident handling priorities. The remaining two of the six agencies (DOJ and DOT) had not developed and documented procedures for prioritizing incidents. As a result, these agencies may not be addressing incidents in the most risk-effective manner.
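The additive scheme that NASA's procedures describe can be sketched in a few lines of Python. The factor names, rating scale, and thresholds below are hypothetical stand-ins, since the report does not enumerate them; only the overall pattern (rate each factor, sum the ratings, and map the total to a priority from low to critical) comes from the description above.

```python
# Hypothetical factors and thresholds illustrating a NASA-style additive
# prioritization scheme: rate each factor, sum the ratings, and map the
# total to a handling priority.
FACTORS = [
    "functional_impact", "information_sensitivity", "system_categorization",
    "mission_impact", "recoverability", "scope", "threat_actor", "exposure",
]

def priority(ratings: dict) -> str:
    """Map summed factor ratings (each assumed to run 0-3) to a priority."""
    missing = [f for f in FACTORS if f not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    total = sum(ratings[f] for f in FACTORS)
    if total >= 18:
        return "critical"
    if total >= 12:
        return "high"
    if total >= 6:
        return "medium"
    return "low"

# Example: a moderately severe incident.
sample = {f: 1 for f in FACTORS}
sample["functional_impact"] = 3
sample["information_sensitivity"] = 2
print(priority(sample))  # -> "medium" (total = 11)
```

A matrix-based scheme like VA's would replace the summation with a direct lookup keyed by incident type; the additive scheme trades that simplicity for finer-grained distinctions between incidents of the same type.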
NIST SP 800-53 states that agencies are to test their incident response capability, at an agency-defined frequency, for their information systems to determine the effectiveness of procedures for responding to cyber incidents. Agencies should also train personnel in their incident response roles and responsibilities. According to NIST, the lack of a well-trained and capable staff could result in inefficient incident detection and analysis and costly mistakes. As shown in table 5, agencies did not test their incident response capabilities or consistently train staff responsible for responding to incidents.

Tested incident response capability. Four of the six agencies had not tested their incident response capability, and two (DOE and DOJ) partially tested their capabilities. For example, DOE did not demonstrate that the department had conducted an entitywide test of its incident response capability and provided information concerning only a review of a key component's incident response activities. In addition, components at DOJ are responsible for testing their own incident response capability, and 10 of the department's 13 components had completed such testing. If an agency's incident response capability has not been tested, the agency will have limited assurance that its controls have been effectively implemented.

Trained incident response personnel. Three of the six agencies trained their incident response personnel. For example, both DOJ and HUD maintained lists of the personnel responsible for responding to their departments' incidents, including the dates staff received training and the type of training received. DOT also trained its incident response personnel. However, VA did not demonstrate that its incident response personnel had received training, and DOE and NASA partially addressed this activity. For example, NASA provided a detailed listing of incident response personnel and the types of training they had taken, but did not define what qualified as acceptable training. If staff do not receive training on their incident response roles, they may not have the knowledge or skills needed to effectively respond to cyber incidents affecting their agency.

Inconsistencies in agencies' performance of incident response activities and development of policies, plans, and procedures indicate that further oversight, such as that provided by OMB's and DHS's CyberStat review process, may be warranted. CyberStat reviews are in-depth sessions with National Security Staff, OMB, DHS, and an agency to discuss that agency's cybersecurity posture and opportunities for collaboration. According to OMB, these reviews were face-to-face, evidence-based meetings to ensure agencies were accountable for their cybersecurity posture and to assist them in developing focused strategies for improving their information security posture in areas where they faced challenges. According to DHS, the goal for fiscal year 2013 was for all 24 major agencies to be reviewed. However, this goal was not met: DHS officials stated that reviews were conducted with 7 federal agencies and that interviews were conducted with the chief information officers of the other 17 agencies.
In addition, the current CyberStat reviews have not generally covered agencies' cyber incident response practices, such as considering impact to aid in prioritizing incident response activities, recording key steps in responding to an incident, and documenting the costs of responding to an incident. DHS officials told us that, regarding incident response, the reviews discussed the status of agencies' closing of incidents and trends surrounding incident reporting; however, the reviews did not evaluate agencies' incident response practices. Without addressing response practices in these reviews, OMB and DHS may be missing opportunities to help agencies improve their information security posture and more effectively respond to cyber incidents.

While DHS provides various services to agencies to assist them in addressing cyber incidents, opportunities exist to improve the usefulness of these services, according to the 24 agencies we surveyed. DHS components, including US-CERT, offer services that assist agencies in preparing to handle incidents, maintaining awareness of the current threat environment, and dealing with ongoing incidents. Based on responses to our survey, officials at the 24 major agencies were generally satisfied with DHS's service offerings, although they identified improvements they believe would make certain services more useful, such as improving reporting requirements. For its part, US-CERT does not evaluate the effectiveness of its incident services.

US-CERT serves as the central federal information security incident center mandated by FISMA. By law, the center is required to provide timely technical assistance to operators of agency information systems regarding security incidents; compile and analyze information about incidents that threaten information security; inform operators of agency information systems about current and potential information security threats and vulnerabilities; and consult with NIST and agencies operating national security systems regarding security incidents. More broadly, OMB has transferred responsibility to DHS for the operational aspects of federal cybersecurity, including overseeing and assisting federal agencies' cybersecurity operations and incident response. Table 6 lists DHS cyber incident assistance services.

The results of our survey indicate that agency officials were generally satisfied with the services provided to them by DHS, although they offered various opinions about those services and noted dissatisfaction with incident reporting requirements. As illustrated in figure 2, the majority of the agency officials who used DHS services were generally satisfied, finding the services to be very or moderately useful. In addition, officials from 16 of the 24 agencies reported that they were generally satisfied with DHS's outreach efforts to inform them of cyber incident services and assistance, while 4 of the 24 reported that they were generally dissatisfied. However, surveyed officials at 11 of the 24 agencies noted dissatisfaction with incident reporting requirements. Agency officials made the following comments:

Time frames are difficult to meet.

The incident categories are no longer practical. Attributes that contribute to classification are not unique between the categories, which allows for too much discretion and interpretation. The categories are long overdue for updates.

A category that separates data loss from unauthorized access would be beneficial.
A category specific to phishing and advanced persistent threats would be helpful.

Add a category for non-incidents. Additionally, each category should have sub-categories to further identify the incident and how it happened.

These comments are consistent with the results of a review we conducted in 2013, in which we recommended revising the reporting requirements to DHS for personally identifiable information-related data breaches, including time frames that would better reflect the needs of individual agencies and the government as a whole. DHS officials provided information about actions the agency plans to take to help address our recommendations and stated that it has interacted with OMB regarding requirements specific to these recommendations and is preparing new incident reporting guidance for agencies.

We and others have noted the value of having clear performance measures that demonstrate results. Such measures support an agency's efforts to plan, reinforce accountability, and advance the agency's mission. However, US-CERT has not established measures to evaluate the effectiveness of the cyber incident assistance it provides to agencies. US-CERT gathers usage statistics and feedback on its public website and portal and uses those data to identify opportunities for improving those services, but it performs these reviews only on an ad hoc basis. For its other activities, a US-CERT official stated that the agency gathers monthly statistics, such as the number of on-site or remote technical assistance engagements it performs each month or the number of pieces of malware analyzed by staff. The official noted, however, that these numbers are driven by factors outside of US-CERT's control and, as such, indicate activity levels rather than performance, and that the agency is still trying to identify meaningful performance measures. Without results-oriented performance measures, US-CERT will face challenges in ensuring it is effectively assisting federal agencies with preparing for and responding to cyber incidents.

With federal agencies facing increasing and more threatening cyber incidents, it is essential for them to be able to effectively manage their response activities. However, agencies did not consistently demonstrate that they responded to cyber incidents in an effective manner. Although agencies often demonstrated that they carried out various aspects of incident response activities, documenting all of the steps taken to analyze, contain, eradicate, and recover from incidents is important to ensure that incidents are being appropriately addressed. Comprehensive policies, plans, and procedures that include measures of performance and guidance on impact assessment provide key elements necessary for agencies to effectively respond to cyber incidents. Testing the incident response program and ensuring employees are appropriately trained increase the assurance that controls are in place to prevent, detect, or respond to incidents. Further, capturing related costs could help agencies more efficiently manage their incident response activities. OMB and DHS have established CyberStat reviews to improve information security at federal agencies, but the reviews have not focused on agencies' incident response practices. Although DHS and US-CERT offer numerous services to agencies to assist with cyber incidents, US-CERT does not have a process in place to evaluate the effectiveness of the assistance it provides to agencies.
Without results-oriented performance measures, US-CERT will face challenges in ensuring that it is effectively assisting federal agencies with preparing for and responding to cyber incidents.

To improve the effectiveness of governmentwide cyber incident response activities, we recommend that the Director of OMB and the Secretary of Homeland Security address agencies' incident response practices governmentwide, in particular through CyberStat meetings, such as by emphasizing the recording of key steps in responding to an incident.

To improve the effectiveness of cyber incident response activities, we are also making 25 recommendations to the six selected agencies to improve their cyber incident response programs.

We recommend that the Secretary of Energy: revise policies for incident response to include requirements for defining the incident response team's level of authority, prioritizing the severity ratings of incidents based on impact, and establishing measures of performance; revise the department's incident response plan to include metrics for measuring the incident response capability and its effectiveness; develop incident response procedures that provide instructions for containing incidents, and revise procedures for incident response to prioritize the handling of incidents by impact; fully test the department's incident response capability; and establish clear requirements to ensure the department's incident response personnel are trained.

We recommend that the Attorney General of the United States: revise policies for incident response by including requirements for defining the incident response team's level of authority and prioritizing the severity ratings of incidents for unclassified systems based on impact; revise the department's incident response plan to include quantifiable metrics for measuring the incident response capability and its effectiveness; develop incident response procedures that provide instructions for prioritizing the handling of incidents by impact; and ensure that all components test their incident response capability.

We recommend that the Secretary of Transportation: revise policies for incident response by including requirements for prioritizing the severity ratings of incidents based on impact and establishing measures of performance; revise the department's incident response plan to include senior management's approval and metrics for measuring the incident response capability and its effectiveness; develop incident response procedures that provide instructions for prioritizing the handling of incidents by impact; and test the department's incident response capability.

We recommend that the Secretary of Housing and Urban Development: finalize policies for incident response and include in those policies requirements for prioritizing the severity ratings of incidents and establishing measures of performance; develop a departmentwide incident response plan that includes, among other elements, senior management's approval and metrics for measuring the incident response capability and its effectiveness; revise procedures for incident response to prioritize the handling of incidents by impact; and test the department's incident response capability.
We recommend that the Administrator of the National Aeronautics and Space Administration: revise policies for incident response by including requirements for establishing measures of performance; revise the agency's incident response plan to include metrics for measuring the incident response capability and its effectiveness; test the agency's incident response capability; and establish clear requirements for training the agency's incident response personnel.

We recommend that the Secretary of Veterans Affairs: revise policies for incident response by including requirements for defining the incident response team's level of authority and establishing measures of performance; revise the department's incident response plan to include metrics for measuring the incident response capability and its effectiveness; test the department's incident response capability; and train the department's incident response personnel per the agency's requirements.

To improve the cyber incident response assistance provided to federal agencies, we recommend that the Secretary of Homeland Security establish measures to evaluate the effectiveness of the cyber incident assistance DHS provides to agencies.

We sent draft copies of this report to the six agencies selected for our sample, as well as to DHS and OMB. We received written responses from DOE, DHS, HUD, NASA, and VA; these comments are reprinted in appendices II through VI. The audit liaisons for DOJ and DOT responded via e-mail. OMB did not provide comments on our draft report.

Six of the eight agencies generally concurred with our recommendations. Five agencies (DOE, DHS, DOJ, HUD, and VA) concurred with all of our recommendations. NASA agreed with three of four draft recommendations and partially agreed with the fourth. DOT responded that the department had no comments. Where these agencies also provided technical comments, we have addressed them in the final report as appropriate. DOE, DHS, NASA, and VA also provided information regarding specific actions they have taken or plan to take that address portions of our recommendations. Further, DHS, NASA, and VA provided estimated timelines for completing actions that would address our recommendations.

NASA agreed with our three recommendations to revise its incident response policy, revise its incident response plan, and test the agency's incident response capability. In addition, it partially concurred with our recommendation that the agency establish clear requirements for training its incident response personnel. The Chief Information Officer stated that agency personnel were being trained in their response roles and responsibilities. He added that his office would define what qualified as acceptable training for incident response personnel and would then update policy to reflect the need for focused incident response training. We believe these actions, if effectively implemented, will satisfy our recommendation.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Departments of Energy, Homeland Security, Housing and Urban Development, Justice, Transportation, and Veterans Affairs, as well as the National Aeronautics and Space Administration and the Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you have any questions regarding this report, please contact Gregory C. Wilshusen at (202) 512-6244. I can also be reached by e-mail at wilshuseng@gao.gov. Key contributors to this report are listed in appendix VII.

Our objectives were to evaluate the extent to which (1) federal agencies are effectively responding to cyber incidents and (2) the Department of Homeland Security (DHS) provides cyber incident assistance to agencies.

To address our first objective, we reviewed the Federal Information Security Management Act (FISMA); National Institute of Standards and Technology (NIST) Special Publication 800-53, Revision 3, and Special Publication 800-61, Revision 2; Office of Management and Budget (OMB) Memorandum M-06-19; and United States Computer Emergency Readiness Team (US-CERT) guidance to determine the key steps agencies should address when responding to a cyber incident. We then used a two-stage cluster sample to identify a generalizable sample of incidents to review for compliance with key steps. First, we selected 6 agencies from the population of 24 major agencies covered by the Chief Financial Officers Act, with each agency's selection probability proportionate to the number of cyber incidents it had reported to US-CERT in fiscal year 2012 divided by 32,442 (the total number of cyber incidents reported to US-CERT that year), sampling without replacement. The 6 agencies selected were the Departments of Energy (DOE), Justice (DOJ), Housing and Urban Development (HUD), Transportation (DOT), and Veterans Affairs (VA), and the National Aeronautics and Space Administration (NASA). After selecting the 6 agencies in the first stage of sampling, we obtained each agency's list of individual cyber incidents for fiscal year 2012 and randomly selected 40 cyber incidents within each agency, for a total sample size of 240 cyber incidents. This statistical sample allowed us to project the results, with 95 percent confidence, to the 24 major agencies. Table 7 lists the number of incidents in our sample in each of the six US-CERT-defined incident categories.

Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn.

To determine the reliability and accuracy of the data we used to develop our sample, we interviewed knowledgeable agency officials, reviewed related documentation on internal controls for US-CERT's database of incident tickets, and reviewed the data for duplicates and outliers. For the incident data in our sample, we interviewed officials at the six agencies in our sample, reviewed each agency's incident management system to gain an understanding of the data, reviewed related documentation on internal controls for each agency's incident management system, and traced a random sample of records back to source agency documents, testing the fields for accuracy. Our sample results capture estimates for the extent of duplicate records, false positives, and inaccurately recorded data fields. Based on this assessment, we determined that the data were sufficiently reliable for our work.
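The two-stage design can be illustrated with a short Python sketch. The per-agency incident counts below are randomly generated placeholders (only the governmentwide total of 32,442 and the 6-agency, 40-incident structure come from the description above), and the sequential selection scheme shown is one simple way to implement probability-proportionate-to-size sampling without replacement; GAO's actual procedure may have differed in detail.

```python
import random

random.seed(1)

# Hypothetical fiscal year 2012 incident counts for 24 agencies.
incident_counts = {f"agency_{i:02d}": random.randint(200, 4000) for i in range(24)}

def pps_without_replacement(counts: dict, n: int) -> list:
    """Draw n units with probability proportionate to size, one at a time,
    removing each selected unit from the pool (a simple sequential scheme)."""
    pool = dict(counts)
    selected = []
    for _ in range(n):
        agencies = list(pool)
        weights = [pool[a] for a in agencies]
        pick = random.choices(agencies, weights=weights, k=1)[0]
        selected.append(pick)
        del pool[pick]
    return selected

stage1 = pps_without_replacement(incident_counts, 6)  # stage 1: 6 agencies
stage2 = {agency: 40 for agency in stage1}            # stage 2: 40 incidents each
print(stage1, sum(stage2.values()))                   # 240 incidents in total
```

Interval estimates such as the roughly 65 percent figure reported above would then be computed with variance formulas appropriate to this clustered design, rather than the simple random sampling formulas, which is why the stated confidence intervals (e.g., plus or minus 7 percentage points) are wider than a naive calculation on 240 observations would suggest.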
To address the effectiveness with which agencies responded to a cyber incident, we reviewed documents (extracted from agencies' incident tracking systems) covering the incidents in our sample to determine the extent to which the agencies had performed analysis, containment, eradication, recovery, reporting, and post-incident procedures in accordance with federal requirements and guidance and their own policies and procedures. In addition, we reviewed and analyzed the six selected agencies' cyber incident response policies, plans, procedures, and practices and compared them to key elements in NIST guidance, and we interviewed agency officials to discuss their incident response practices.

We also conducted a web-based survey of officials responsible for cyber incident response at the 24 major federal agencies. After we drafted the questionnaire, we asked for comments from independent GAO survey professionals, and we conducted two in-person pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could be obtained, and (5) the survey was comprehensive and unbiased. We chose the pretest participants to include one member of our survey population and one official from a federal agency not in our population who had a similar role and responsibilities with regard to incident response. We made changes to the content and format of the questionnaire after the review and both pretests, based on the feedback we received. We received completed questionnaires from all 24 agencies surveyed.

Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering or analyzing data can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing the data to minimize such nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO staff who had subject matter expertise. Then, we pretested the draft questionnaire with a number of officials to ensure that the questions were relevant, clearly stated, and easy to understand. When we analyzed the data, an independent analyst checked all computer programs. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to key data into a database.

To address our second objective, we reviewed DHS documents, reviewed US-CERT's public-facing website and limited-access portal, and interviewed officials at DHS about the services it offers to agencies to support their incident response capabilities and activities. In addition, as part of our web-based survey, we asked officials at the agencies what incident response-related services or assistance they had sought from DHS, their opinion of those services, and the utility of US-CERT's public website and limited-access portal. We also interviewed agency officials from the six agencies selected as part of our random sample regarding their interactions with DHS in receiving cyber incident assistance. We compared the assistance provided by DHS, including US-CERT, to the requirements specified in FISMA.
Further, we met with officials to determine whether the department had measures—such as those described by us and others—to evaluate the effectiveness of the assistance it provided to agencies.

We conducted this performance audit from February 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Jeffrey Knott (assistant director), Carl Barden, Larry Crosland, Kristi Dorsey, Nancy Glover, Wilfred Holloway, Kendrick Johnson, Stuart Kaufman, Tyler Mountjoy, Justin Palk, and Minette Richardson made key contributions to this report.
The number of cyber incidents reported by federal agencies increased significantly in fiscal year 2013 over the prior 3 years (see figure). An effective response to a cyber incident is essential to minimize any damage that might be caused. DHS and US-CERT have a role in helping agencies detect, report, and respond to cyber incidents.

GAO was asked to review federal agencies' ability to respond to cyber incidents. To do this, GAO reviewed the extent to which (1) federal agencies are effectively responding to cyber incidents and (2) DHS is providing cybersecurity incident assistance to agencies. GAO used a statistical sample of cyber incidents reported in fiscal year 2012 to project whether 24 major federal agencies demonstrated effective response activities. In addition, GAO evaluated incident response policies, plans, and procedures at 6 randomly selected federal agencies to determine adherence to federal guidance. GAO also examined DHS and US-CERT policies, procedures, and practices, and surveyed officials from the 24 federal agencies on their experience receiving incident assistance from DHS.

Twenty-four major federal agencies did not consistently demonstrate that they are effectively responding to cyber incidents (a security breach of a computerized system and its information). Based on a statistical sample of cyber incidents reported in fiscal year 2012, GAO projects that these agencies did not completely document actions taken in response to detected incidents in about 65 percent of cases (with 95 percent confidence that the estimate falls between 58 and 72 percent). For example, agencies identified the scope of an incident in the majority of cases but frequently did not demonstrate that they had determined the incident's impact. In addition, agencies did not consistently demonstrate how they had handled other key activities, such as whether they had taken actions to prevent an incident from reoccurring. Although all 6 selected agencies that GAO reviewed in depth had developed parts of policies, plans, and procedures to guide their incident response activities, their efforts were not comprehensive or fully consistent with federal requirements. In addition, the Office of Management and Budget (OMB) and the Department of Homeland Security (DHS) conduct CyberStat reviews, which are intended to help federal agencies improve their information security posture, but the reviews have not addressed agencies' cyber incident response practices. Without complete policies, plans, and procedures, along with appropriate oversight of response activities, agencies face reduced assurance that they can effectively respond to cyber incidents.

DHS and a component, the United States Computer Emergency Readiness Team (US-CERT), offer services that assist agencies in preparing to handle cyber incidents, maintaining awareness of the current threat environment, and dealing with ongoing incidents. Officials from the 24 agencies GAO surveyed said that they were generally satisfied with the assistance provided and made suggestions to make the services more useful, such as improving reporting requirements. Although US-CERT receives feedback from agencies to improve its services, it has not yet developed performance measures for evaluating the effectiveness of the assistance it provides to agencies. Without results-oriented performance measures, US-CERT will face challenges in ensuring it is effectively assisting federal agencies with preparing for and responding to cyber incidents.
GAO is making recommendations to OMB and DHS to address incident response practices governmentwide, particularly in CyberStat meetings with agencies; to the heads of six agencies to strengthen their incident response policies, plans, and procedures; and to DHS to establish measures of effectiveness for the assistance US-CERT provides to agencies. The agencies generally concurred with GAO's recommendations.
In 2001, DOD conducted missile defense reviews to determine how best to fulfill the nation's need to defend the United States, deployed forces, allies, and friends from ballistic missile attacks. The findings of these reviews led the Secretary of Defense to declare the need for a new strategy to acquire and deploy missile defenses and to issue direction in January 2002 to improve the leadership, management, and organization of missile defense activities. Specifically, the Secretary delegated to MDA the authority to manage all ballistic missile defense systems under development and shifted programs being executed or developed by the military services to MDA. Figure 1 describes some of the missile defense programs whose execution or development was transferred from the military services to MDA. The Secretary also instructed MDA to develop a single integrated system, to be called the Ballistic Missile Defense System, capable of intercepting enemy missiles launched from all ranges and in all phases of their flight. The systems transferred from or executed by the services, and new systems whose development MDA initiates, are considered elements of the BMDS and are managed by MDA. In 2002, drawing on research and development efforts that had been ongoing for years, MDA established the Command, Control, Battle Management, and Communications system as an element to provide connectivity between other BMDS elements and to manage their operation as an integrated, layered missile defense system.

In his direction to MDA and the military services, the Secretary called for a capabilities-based requirements process and an evolutionary development program. In a capabilities-based program, the system developer (MDA) designs a system based on the technology available, rather than designing a system to meet requirements established by those who will use the system. Additionally, in an evolutionary program, a baseline capability is developed and then improved over time. Therefore, the BMDS has no fixed design or final architecture. Each evolution, or block, as MDA calls such increments, is meant to take advantage of advancing technology so that the BMDS is enhanced over time. MDA's capabilities-based evolutionary approach to development is meant to provide a capability to the users as quickly as possible while also maintaining flexibility. MDA is in the process of developing the first BMDS block, known as Block 2004. This block consists of the Ground-Based Midcourse Defense, Aegis Ballistic Missile Defense, Patriot Advanced Capability-3, and Command, Control, Battle Management, and Communications elements, as well as the Forward-Based X-Band Radar.

The Secretary also established a procedure for making developmental assets available for operational use. On the basis of assessments of the BMDS's military utility, progress in development, and a recommendation by the Director, MDA, and the military services, the Secretary, with input from the DOD Senior Executive Council, decides whether assets whose development is ongoing should be fielded. When such a decision is made, the Secretary directed that the military departments provide forces to support the early fielding and budget resources to procure and operate the planned force structure. In December 2002, the President directed DOD to begin fielding an initial set of missile defense capabilities to meet the near-term ballistic missile threat to our nation. MDA responded by emplacing Block 2004 developmental assets for use against limited attacks.
However, the Secretary has not yet activated this capability by placing it on alert. The Secretary's 2002 direction called for the acquisition of missile defense elements and components to be completed in three phases. In the first phase, MDA develops ballistic missile defense elements and components using research, development, test, and evaluation funds. When appropriate, the MDA Director recommends, and the Senior Executive Council approves, the entry of an element or major component into the second phase, known as the transition phase. This phase allows the military services to prepare for the element's or component's transfer. During the third phase, a military service procures, operates, and sustains the element or component using procurement, operation and maintenance, and personnel funds. Figure 2 includes some of the activities, such as those carried out by the Joint Theater Air and Missile Defense Organization (JTAMDO), that DOD envisioned taking place during each of the three phases. Among the activities depicted in the figure: the military services begin full-scale production; formalize a capabilities-based Operational Requirements Document for the element or component being transferred; operate and maintain the element or major component; lead the effort to assess the element's operational suitability; and support operational test and evaluation. In addition, combatant commanders conduct and assess BMDS exercises; MDA and the user community address logistics and maintenance support; and combatant commanders and the military services identify desired operational capabilities for future increments. Finally, the Secretary's 2002 direction effectively allowed MDA to defer application of many of the requirements that are generally applied to the development of major systems under DOD's traditional acquisition system regulations. For example, the requirements for acquisition program baselines and independent cost estimates, generally applicable by statute to major defense acquisition programs and implemented by DOD regulations, will not be applied until a BMDS element or component is transferred to a military service concurrent with Milestone C. Milestone C, the point at which a decision is made to begin initial production, is when the service is to assume management and funding responsibility for an element or component of the BMDS. Once elements or components are transferred, the Secretary directed MDA to continue to fund modifications to fielded systems and to manage development activities for new missile defense capabilities. The Secretary also gave MDA approval authority over any engineering changes that the military services might want to make to transferred BMDS elements. This process, known as configuration control, is meant to ensure that changes do not degrade the interoperability of the BMDS. Since 2002, MDA has recommended, and DOD has approved, the transfer of one missile defense element to a military service: DOD transferred the Patriot Advanced Capability-3 program to the Army in 2003. MDA continues to exercise configuration control and provide funding for the development of Patriot Advanced Capability-3 missile defense-related upgrades. In December 2002, the Under Secretary of Defense for Acquisition, Technology and Logistics established criteria for deciding when to transfer acquisition responsibility from MDA to the military services.
The specified criteria are that (1) testing demonstrates that an element or component is mature, (2) plans and resources are in place to ensure that facilities are available to support production, and (3) funds are programmed in DOD's Future Years Defense Program to carry out production plans. After the Under Secretary established these criteria, one BMDS element, the Patriot Advanced Capability-3, was transferred to a military service. However, officials across DOD now recognize that the transfer criteria are neither complete nor clear and believe that revised criteria are needed for deciding when to move an element or component into the transition phase. These officials told us that when the Under Secretary established the transfer criteria in 2002, DOD did not fully understand the complexity of the BMDS and how it could affect transfer decisions. MDA's Director testified earlier this year that MDA will use several models to transfer system elements to the military services and that it may not be appropriate to transfer some elements or components. In such cases, he envisions the services and MDA sharing responsibilities for the assets. Further, he said that MDA will continue to work with the Secretary of Defense, the military services, and the combatant commanders to arrange appropriate transfers on a case-by-case basis. There is currently uncertainty as to when and under what conditions DOD will transfer management and funding responsibility for elements and major components from MDA to the military services. The acquisition model directed by the Secretary in 2002 is now viewed by many in DOD as needing modifications to meet the evolving needs of a complex ballistic missile defense system. Although MDA began to emplace Block 2004 developmental assets for the warfighters' potential use, it is not ready to transfer management responsibility for some of these assets to the military services. According to officials in MDA's Business Management Office, continued management of some system elements and components by MDA may be necessary to fully achieve the overall effectiveness of the BMDS. For example, if the missile-tracking capability of the Space Tracking and Surveillance System is going to be added to the BMDS, MDA will need to test it with other BMDS elements to determine how to make all elements work together most effectively. To do this, MDA believes it must have the authority to pull back fielded elements or components so that they can be used in developmental efforts. The MDA officials also indicated that full transfer of elements and components could threaten the priority that the President and DOD have given to missile defense. The officials told us that the military services could subordinate missile defense missions to service missions, funding service programs at the expense of the missile defense program. Service acquisition officials and officials in the Office of the Secretary of Defense agreed that the military services have many competing priorities and that should missile defense programs be transferred to a service, those programs would likely have to compete with service programs for procurement, operations, and sustainment funds. Officials in MDA's transition office offered examples of how management and funding responsibility for elements and components currently in development might be handled.
Management responsibility for some elements and components might never be transferred to a military service because these assets are not integrated on service platforms or do not perform core service missions. Examples include the Cobra Dane radar, the Forward-Based X-Band radars, and the Sea-Based X-Band radar. MDA officials suggested that these components could be operated by either contractors or military personnel, and MDA might fund their operation and sustainment. However, discussions are still ongoing as to whether these components will eventually be transferred to the military services. MDA and a military service might be collaboratively involved in the management of other assets, such as the Airborne Laser, the Kinetic Energy Interceptor, the Space Tracking and Surveillance System, and Terminal High Altitude Area Defense, because these elements are not yet technically mature and MDA needs to manage their development. The services will remain closely involved to provide feedback on the development process. As these elements are ready to demonstrate their capability, MDA will acquire them in limited quantities. For example, MDA plans to acquire two Terminal High Altitude Area Defense fire units, which include 48 missiles. If early tests are successful, MDA will turn the first fire unit over to the Army in 2009. The Army will operate it and provide feedback on its performance. Once any of these assets are available for operational use, MDA believes that the services should accept some responsibility for funding their operation and sustainment costs. Officials in MDA's transition office told us that management responsibility for assets in this group may eventually be handed over to a military service. The officials said that the transition status of an element is a function of technical maturity, programmatic achievement, time, and relative stakeholder involvement. Management and funding responsibility for other systems has already been or likely will be transitioned to a military service because these systems have reached or are nearing technical maturity. As mentioned above, MDA transferred responsibility for the Patriot Advanced Capability-3 to the Army in 2003, and it is likely that in the future MDA will transfer responsibility for Aegis Ballistic Missile Defense to the Navy. Officials in MDA's transition office told us that Aegis Ballistic Missile Defense is reaching technical maturity, as demonstrated by its being fielded operationally on Navy ships. The Navy is almost certain to accept responsibility for the Aegis missile defense capability because it is installed on Aegis ships. Service acquisition officials told us that they need sufficient notice to prepare for a transfer and enough time to ensure that funds are available to produce, operate, and sustain the system. Several things have to be done before a service can operate and maintain a system. For example, personnel have to be assigned and trained, a command structure has to be organized, and facilities may have to be provided for the system and its operators. Also, because transferred elements of the BMDS will enter DOD's acquisition cycle at Milestone C, other activities have to be completed in advance of that milestone to ensure compliance with DOD acquisition regulations. For example, the documentation required by the Chairman of the Joint Chiefs of Staff Capabilities Integration and Development System must be completed and an independent cost estimate must be obtained.
Service officials estimated that it takes at least a year and a half to complete all of the tasks needed to meet the Milestone C requirements of the DOD acquisition regulations. Sufficient advance notice is also needed for budgeting purposes. One DOD official said that until responsibilities are established and transition plans are in place, it is difficult for the services to plan their budgets. If transfers take place with little advance notice, DOD will either have to provide the services with additional funds for the production, operation, and sustainment of BMDS elements or direct the services to support the BMDS assets with funds reserved for service missions. In written comments on a draft of this report, DOD said that, given the process established by the Secretary and described above, there is no basis to presume that programs will transfer from MDA to the services with insufficient notice. Early in 2005, an Integrated Product Team was established to develop transition plans. The team's mission is to specify management and funding responsibilities for MDA and the military services; work out a strategy for establishing doctrine, planning an organizational structure and its leadership, developing training and materiel, and providing personnel and facilities; provide appropriate notification for service budget requirements; establish configuration control procedures; and ensure mission success. The team has conducted three meetings to date at the colonel and captain level and two at the general officer level. The inaugural meeting of colonels and captains was held on January 21, 2005. It was attended by almost 80 people representing MDA, the Office of the Secretary of Defense, the military services, the U.S. Strategic Command, and the U.S. Northern Command. An MDA executive official chairs the team. Two more meetings (one at each level) are planned, along with numerous meetings of supporting working groups. Officials in MDA's transition office told us that the team will draw up a broad plan that will include annexes tailored to each individual element or component. These annexes will specify the likely date that the element or component under consideration will be transferred; identify how MDA, the affected military service, and the combatant commander will share responsibilities; provide the status of existing contracts; identify funding requirements; and lay out tasks and milestones in the transfer process. MDA transition office officials also told us that the annexes may propose handovers from MDA to the services that are less formal than the transfers originally envisioned by the Secretary of Defense. Each individual transition plan will be cosigned by MDA's Director and a military service representative. However, DOD officials noted that the team will likely have disputes that can only be decided by officials in the Office of the Secretary of Defense. DOD and service acquisition officials expressed concern that although the Integrated Product Team members may be able to plan transition details, they likely will not be empowered to make major decisions or resolve major impasses. However, MDA transition office officials told us that the team's objective is to secure agreement on transition and transfer plans at the lowest level possible.
The Deputy for Ballistic Missile Defense, Missile Warfare Division, within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, said that the current plan is to have the Missile Defense Support Group recommend solutions for impasses to the Under Secretary of Defense for Acquisition, Technology and Logistics. The Under Secretary would then consider the support group's recommendations, make any needed changes, and forward all transition and transfer plans to the Secretary of Defense for approval. According to the Deputy, the goal is to have DOD approve all transfer plans by December 31, 2005, so that direction is available to the appropriate DOD components as they begin preparing their 2008-2013 budgets. In July 2005, the Director, Joint Staff, directed the Joint Staff's Deputy for Force Protection to establish a team to recommend revised criteria for making transfer decisions. The team members told us that the impetus for their study was the Integrated Product Team's difficulties in determining when and under what conditions military services should take responsibility for some BMDS components. They said that the military services are not eager to receive components, such as the Sea-Based X-Band radar, the Forward-Based X-Band radars, and the Cobra Dane radar, that do not provide a capability that furthers the services' core missions. The team, which expects to complete its work by December 31, 2005, will coordinate with the Integrated Product Team and the Missile Defense Support Group. In 2002, the Secretary of Defense directed the military services to budget the resources to procure and operate the planned force structure for an early missile defense capability. However, MDA and the military services continue to disagree as to which organization should pay, after 2005, for operating and sustaining developmental assets even though the assets may be available for operational use. Additionally, DOD has not yet determined the full cost of procuring, operating, and sustaining the BMDS from 2006 through 2011, and it has not included all known costs in its budget. Until DOD decides which organization will fund these costs, the services will likely continue to provide only the funding that they are directed to make available, and some needs, which neither MDA nor the services have planned for, will probably go unfunded. Additionally, if the funds budgeted for some purposes, such as logistical support for the BMDS, turn out to be insufficient, DOD will either have to take funds from other programs or spend less on missile defense. DOD reports that it will spend $68.5 billion between fiscal years 2005 and 2011 to develop, acquire, and support missile defense capabilities, including an initial capability emplaced in 2004-2005 that can be used in the event of an emergency. MDA has been authorized by statute to use research and development funds for this purpose. Table 1 identifies the DOD components that have budgeted funds for missile defense activities through 2011. In fiscal year 2005, MDA budgeted $1.5 billion of its research and development funds to acquire interceptors and radars and to upgrade various BMDS elements or components. It expects to continue to acquire and upgrade BMDS assets through 2011. Table 2 shows planned funding by fiscal year. A complete list of all assets that MDA is acquiring during Block 2004 and plans to acquire or enhance from 2006 through 2011 is provided in table 3.
Although the elements or components shown in table 3 will be available to provide an increased missile defense capability, officials within MDA's transition office told us that responsibility for acquiring them will not be transferred to a military service. For example, MDA is acquiring two Terminal High Altitude Area Defense fire units, including 48 missiles. The fire units will be made available to the Army so that soldiers can operate Terminal High Altitude Area Defense to provide feedback on its development and to defend against short- and medium-range ballistic missiles in the event of an emergency. Should the Army, or any other military service that has received a developmental asset, need additional units of an element or larger quantities of some components (for example, more Terminal High Altitude Area Defense fire units or missiles), the officials suggested that the military service should be responsible for acquiring them. In addition, MDA would expect the services to budget funds for any common support equipment required for the elements that MDA is acquiring. For example, MDA's Terminal High Altitude Area Defense Program Office expects the Army to purchase the trucks needed to move the two fire units' radars, launchers, and generators. However, no military service has budgeted funds for procurement of elements or components, and only the Air Force has included funds in its budget for support equipment. An official in the Air Force's Missile Warning and Defense Office within the Office of the Deputy Chief of Staff for Air and Space Operations told us that the Air Force included approximately $59 million in its fiscal year 2006-2011 budgets to acquire and sustain devices that detect incursions at Vandenberg Air Force Base and to improve test equipment for the upgraded early-warning radars located at Beale Air Force Base in California and at Fylingdales Air Force Station in the United Kingdom. However, the official told us that the cost of acquiring and sustaining the detection devices and the test equipment is expected to exceed planned funding. Air Force officials' concerns with MDA's plan for funding procurements are discussed further in appendix I. While the Army has not budgeted funds for support equipment, it has provided equipment from inventory to support the Ground-Based Midcourse Defense element that MDA has emplaced at Fort Greely. An official from the Army's Air and Missile Defense/Space Division within the Office of the Assistant Secretary for Acquisition, Logistics and Technology told us that the Army, Army National Guard, and National Guard Bureau provided equipment, such as trucks, radios, and machine guns, from inventory to support the Ground-Based Midcourse Defense element. Additionally, pending Terminal High Altitude Area Defense test results and Senior Executive Council decisions, the official told us that the Army expects to include funds in its fiscal year 2008-2013 budgets for Terminal High Altitude Area Defense common support equipment. The military services are currently paying for most of the personnel who operate the missile defense assets. For example, an Army National Guard unit operates Ground-Based Midcourse Defense components located at Fort Greely, and Navy sailors operate the Aegis Ballistic Missile Defense element.
The cost to the military services of operating these missile defense elements is not easily discernible because it is intermingled with other operation and sustainment costs. However, Army officials told us that the Army is providing about $2.4 million for missile defense operations in fiscal year 2005 and expects to incur an additional cost of $23.3 million for this purpose between fiscal years 2006 and 2011. Navy officials told us that at this time the missile defense mission does not create additional personnel costs because the same sailors who stand watch in the combat information center to support conventional anti-air warfare missions also support the ballistic missile defense mission. Additionally, the Air Force has not identified any additional personnel costs between 2006 and 2011 to operate the upgraded early-warning radars for the missile defense mission. Officials in MDA's transition office told us that in the future MDA may use some of its research and development funds to operate major components that are bought in small quantities. The officials suggested that components such as the Forward-Based X-Band and Sea-Based X-Band radars, which may never be transferred to a military service, could be operated by contractor personnel who, at least through 2011, would be paid from funds set aside for contractor logistics support. In fiscal year 2005, MDA and the military services shared sustainment costs. These costs are incurred for (1) logistics support, which includes the services and materiel needed to support the fielded BMDS; (2) installation support and services costs, which are all of the additional costs incurred by an installation (or base) to support a resident tenant; and (3) other supplies, such as fuel and lubricants. Sustainment costs are generally one of the largest contributors to a weapon's life-cycle cost because weapon systems are usually in the field for years and require support during this time. Together, operation, maintenance, and disposal costs typically account for about 72 percent of the total cost of a weapon system. However, MDA does not believe that this percentage can be used to estimate the sustainment cost of BMDS elements or components because MDA program officials expect fielded assets to be updated and improved more quickly than standard DOD weapon systems. If this proves true, an element or component may be in the field for only a few years before it is replaced with an enhanced configuration. But regardless of the length of time each configuration is in use, DOD will incur sustainment costs because each configuration must be sustained. In December 2003, DOD's Program Decision Memorandum III directed MDA to assume all fiscal year 2005 and 2006 costs for the materials and services needed to support the operation of primary BMDS mission equipment, critical spares, and standard military equipment. MDA is paying the prime contractors who are developing the elements that will be available for limited use to provide this support in fiscal year 2005. For example, MDA has contracted with the Boeing Company to provide logistics support for the Ground-Based Midcourse Defense element. Transition office officials told us that they plan to continue this arrangement through 2011. However, MDA cannot be sure that the funds set aside for logistics support will provide all of the materiel and services needed. Reliability and maintainability are key factors in the design of affordable and supportable systems, and the sketch below illustrates how assumptions about reliability translate directly into support-cost estimates.
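The following minimal sketch shows that arithmetic under stated assumptions. Every component name, quantity, failure rate, and repair cost below is hypothetical, invented purely for illustration; as the report notes, MDA had no comparable field data in 2005:

```python
# Minimal sketch: how reliability data feed a logistics-support estimate.
# Every figure below is hypothetical, not MDA data.

# (component, units fielded, assumed failures per unit per year, assumed cost per repair)
fleet = [
    ("interceptor ground equipment", 10, 0.8, 250_000),
    ("radar subsystem",               4, 1.5, 400_000),
    ("communications terminal",      20, 0.3,  60_000),
]

def expected_annual_support_cost(components):
    """Expected yearly repair cost: units x failure rate x cost per repair."""
    return sum(units * rate * cost for _name, units, rate, cost in components)

base = expected_annual_support_cost(fleet)
print(f"Expected annual logistics-support cost: ${base:,.0f}")

# With no field history, the failure rates are guesses. If actual rates run
# 50 percent higher, the budget shortfall follows immediately:
stressed = [(n, u, r * 1.5, c) for n, u, r, c in fleet]
print(f"Shortfall if failure rates run 50% high: "
      f"${expected_annual_support_cost(stressed) - base:,.0f}")
```

The point of the sketch is that the estimate scales linearly with the assumed failure rates, which is why budgeted logistics funds cannot be validated until real breakdown and repair data exist.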
Generally, reliability growth is the result of an iterative design, build, test, analyze, and fix process. However, officials in MDA's Business Management Office told us that because they have limited experience with the systems being fielded, they cannot estimate how often parts will break or how much repairs will cost. Additionally, as noted in table 3, MDA plans to add assets to its limited capability during this time frame, and as the quantity of assets increases, the cost of logistics support can be expected to grow. By 2007, MDA hopes to better understand the cost of logistics support. To gain this understanding, MDA has directed the contractors to collect and report reliability data, including data on the frequency of breakdowns and the cost of repairs. In fiscal year 2005, MDA and the military services are sharing the additional costs that the military services are incurring because BMDS elements or components, and the personnel who work with them, have been placed on military bases. Generally, a tenant on a military base is expected to reimburse its host (the military service whose base the tenant is occupying) for the additional base support costs incurred because the tenant is in residence. For example, the tenant is expected to reimburse the host for the additional cost of communications services, lodging, and utilities. However, DOD's Program Decision Memorandum III directed the Army and the Air Force to assume some installation costs related to missile defense. The memorandum directed the Army to provide funds for Fort Greely installation costs and training, and the Air Force to fund additional security forces and infrastructure at Vandenberg Air Force Base. To carry out the memorandum's direction, the Army is supporting the soldiers stationed at Fort Greely who operate deployed missile defense assets. This support includes providing mail services, health and food services, and chaplain services. The Army budgeted $42 million in fiscal year 2005 for these purposes and estimates that it will need about $402.7 million more between fiscal years 2006 and 2011. According to an official in the Air Force's Missile Warning and Defense Office, the Air Force included some funds in its fiscal year 2006 budget to procure and install detection devices at Vandenberg Air Force Base, as directed by the memorandum. The official said funds were also included in the budgets for the following fiscal years (2007-2011) to sustain the devices. However, the official told us that a new cost estimate shows that procuring and installing the devices is likely to cost more than first estimated. Without the detection devices, Air Force officials estimate that additional security personnel will be needed, but funds for these personnel are not included in the Air Force's budget. Because the Air Force has not added all of the security forces needed, security at Vandenberg is not at the level directed by the U.S. Strategic Command. Additionally, because the Air Force had no funds set aside in fiscal year 2005 for missile defense active duty security personnel, the Air Force is relying mostly upon Air Reserve volunteers to provide some additional security for the missile defense assets located at Vandenberg and Schriever Air Force Bases. MDA is paying for other installation services and support costs that the DOD memorandum did not direct the military services to fund.
Agreements have been finalized with the Army for installation services and support at Fort Greely and with the Air Force for services and support at Vandenberg and Schriever Air Force Bases and Eareckson Air Station. Table 4 shows the costs MDA has agreed to pay at each of the bases in fiscal year 2005. The 2003 Program Decision Memorandum directed the military services, the combatant commands, and MDA to continue to refine fiscal year 2006-2011 missile defense operation and support requirements and costs. The memorandum also directed MDA and the military services to budget for those costs, but it did not clarify which costs would be assumed by each organization. An official in MDA's transition office told us that MDA included funds in its 2006-2011 budgets for costs similar to those paid in fiscal year 2005. However, the official pointed out that the military service Deputies for Operations are examining whether MDA should pay any operations and sustainment costs, other than contractor logistics costs, after fiscal year 2005. Additionally, MDA proposes that the military services assume contractor logistics costs beginning in 2012. In February 2005, the Deputies for Operations from the three military services involved met to develop a coordinated position on the services' roles and missions for missile defense. The Deputies concluded that the services should not incur operation and support costs for fielded missile defense elements or components until a transition plan for those elements or components is successfully executed. We talked to acquisition officials in each of the three services involved in operating the BMDS about their services' views on paying future operation and sustainment costs for assets that have not been transferred. Navy officials believe that ongoing transition discussions will determine which Aegis Ballistic Missile Defense components are sufficiently mature for the Navy to assume the cost of their operation and sustainment. The officials pointed out that the Navy has complied with the direction in Program Decision Memorandum III; however, it is the Navy's position that a transfer decision should precede the Navy's assumption of future operation and sustainment costs. The Navy expects MDA to maintain the Standard Missile-3 until it is transferred to the Navy and to procure all Aegis Ballistic Missile Defense equipment, including any support equipment, through 2011. Additionally, the officials told us that the Navy does not expect to incur any support costs for the Sea-Based X-Band radar that will support the Ground-Based Midcourse Defense element when it is fielded. Air Force officials told us that the Air Force should not incur any operation and sustainment costs after 2005 unless a decision is made to transfer an element or component to the Air Force. An official in the Air Force's Missile Warning and Defense Office said that only MDA, which is developing and deploying the elements and components, can control or plan for operations and sustainment costs. Furthermore, the official said that transition plans can best be made after assets have been deployed, costs are known, military utility is verified, and capabilities have been evaluated. He told us that this approach would provide programming structure and cost transparency. The Army is willing to assume some costs associated with supporting the initial missile defense capability.
An official in the Army’s Air and Missile Defense/Space Division told us that the Army is willing to continue to budget for the cost of operating this capability, supporting soldiers that perform a missile defense mission, and for common support equipment for fielded assets. However, the official said that the Army would not want to assume the maintenance costs of elements or major components until those assets are transferred to the Army. The official said that the Army usually maintains its own equipment and that as long as an asset is in development the Army would not have an inventory of spare parts to make repairs. Neither would it have engineers, or maintenance personnel with an equivalent level of expertise, to make the repairs. The military services are uncertain as to which missile defense assets may eventually be transferred to them and under what conditions those transfers may occur. This uncertainty makes it difficult for the services to plan the activities that are necessary to apply the requirements of DOD acquisition system regulations and to consider how to best realign their budgets to support the missile defense mission. DOD needs to establish clear and complete transfer criteria to better guide those making the difficult decisions for allocating management and funding responsibilities for missile defense assets. DOD also needs to clarify whether MDA or the services will be responsible for sustaining missile defense capabilities that have not been transferred to the services. The Secretary’s direction did not clearly spell out whether MDA or the military departments would be responsible for sustaining the early capability, and it is this cost that has become most contentious. If sustainment costs are much higher than expected and the number of assets being made available to the warfighter grows, as MDA expects, the use of research and development dollars to procure and sustain a missile defense capability will begin to affect MDA’s primary mission of developing new capabilities and enhancing existing ones. On the other hand, the military services will not want to fund the operation and sustainment of a missile defense capability if its cost cannot be accurately estimated. Nor will they want to fund the capability if they are not given the time to determine how to do so with the least impact on service missions. While the team established by MDA to develop transition plans includes working-level representatives from MDA, the military services, and the combatant commands, it will be difficult to reach full agreement as to who should pay sustainment costs for these assets because the representatives do not have the authority to make binding financial decisions for their organizations. MDA and the services may continue to disagree as to which component will bear sustainment costs for the early capability until DOD directs one or the other to do so. Because the services and MDA will begin to plan their 2008-2013 budgets in 2006, a decision as to who will fund these costs should be made in time for the budget deliberations. We recommend that the Under Secretary of Defense for Acquisition, Technology and Logistics revise the criteria for deciding when management and funding responsibility for missile defense assets should be transferred from MDA to a military service so that those criteria are clear and complete. 
We also recommend that the Secretary of Defense ensure that a decision is made as to which DOD organization will fund the operation and sustainment of missile defense assets that are part of the initial defensive capability but have not been transferred from MDA to a military service, and that the Secretary direct that organization, or those organizations, to budget for those costs. In written comments on a draft of this report (see app. III), DOD agreed that the criteria for making decisions to transfer missile defense assets from MDA to the services must be clear. Our draft report had recommended that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to revise the criteria. In its comments, DOD stated that the Secretary of Defense did not need to provide additional direction to the Under Secretary. We accepted this view and, accordingly, revised the recommendation's wording in the final report. DOD also agreed with the need to settle, as soon as possible, the issue of which component will fund the operation and sustainment of missile defense assets that are part of the initial defensive capability. DOD said this issue would soon be resolved without the Secretary taking additional action. We continued to address our final report's recommendation to the Secretary because, if the services and MDA cannot agree on which organization or organizations should pay these costs, the decision may have to be elevated to the Secretary's level. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Senate Committee on Appropriations, Subcommittee on Defense; the House Committee on Armed Services; and the House Committee on Appropriations, Subcommittee on Defense. We are also sending copies to the Secretary of Defense and the Director, Missile Defense Agency, and we will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or levinr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Air Force Space Command officials are concerned that the Missile Defense Agency (MDA) is not providing funds to purchase test equipment for the upgraded early-warning radars. According to the officials, without the test equipment, the operation of the upgraded early-warning radars could be degraded. Air Force Space Command officials told us that a system programming agency is needed to support software and hardware changes to the Beale and Fylingdales early-warning radars once they are upgraded. A system programming agency consists of multiple strings of computers and peripherals that can emulate the unique aspects of the radar's operating system and is used to maintain, modify, and test software and hardware changes before those changes are made to the operational radar. The Air Force currently has a system programming agency in place to support hardware and software development for the early-warning radar. However, neither MDA nor the Air Force has included funds in its budget to establish a system programming agency for the upgraded Beale and Fylingdales radars.
Space Command officials told us that a system programming agency is of particular importance because the upgraded early-warning radar is heavily dependent on commercial off-the-shelf equipment that often has a short life cycle. If a computer or radar replacement part is needed, there is no certainty that the available part will be compatible with the other parts installed in the radar or with its operating system. The officials said that if a replacement part operates nanoseconds faster or slower than the old part, the radar could fail or possibly generate false missile reports. An official in the Air Force's Missile Warning and Defense Office told us that the Air Force included funds in its 2008-2011 budgets to upgrade the system programming agency so that its hardware and software would always be identical to the hardware and software in the operational radar. However, the official said that the Air Force believed that MDA planned to pay for the system programming agency's development cost and that the funds budgeted by the Air Force are not sufficient both to create and to sustain a system programming agency for the upgraded early-warning radars. Space Command officials told us that the system programming agency could cost as much as $88 million. Without the system programming agency, the officials said, changes will be made directly to the operational radar, decreasing its operational availability and increasing operational risks. In a written response to a draft of this report, MDA officials said that MDA has not agreed to fund a system programming agency for the upgraded early-warning radars as the Air Force has requested. During much of fiscal year 2005, MDA and the Air Force disagreed as to which organization should pay the additional costs being incurred at Eareckson Air Station in support of the missile defense mission. While MDA eventually agreed to pay all fiscal year 2005 costs, no agreement has been reached for subsequent fiscal years. Both MDA and the Air Force predict that costs at Eareckson will again be a contentious issue in fiscal year 2006. The Air Force maintains that Program Decision Memorandum III did not direct the Air Force to provide security forces and infrastructure for the missile defense mission at Eareckson. Therefore, the Air Force's position is that the additional costs being incurred at Eareckson should be paid by MDA. Officials in the Air Force's Missile Warning and Defense Office told us that Eareckson is populated entirely by contractor personnel who operate and maintain the Cobra Dane radar in its intelligence-gathering role. The Air Force maintains a small diversionary airstrip at the base, but it does not have any military personnel located there. The officials said that the Air Force administers the Eareckson Air Station contract, but the intelligence community reimburses the Air Force for the station's operations costs. The officials said that MDA should pay the costs incurred at Eareckson that are directly attributable to the missile defense mission, just as the intelligence community pays all costs attributable to its mission. Conversely, MDA maintains that omitting Eareckson from the Program Decision Memorandum was an oversight. However, an official in the Department of Defense's (DOD) Comptroller's Office told us that DOD always intended for MDA to pay normal installation support and services costs at Eareckson. DOD recognized that Eareckson is an unusual base because the Air Force does not maintain a presence there.
For the first two quarters of fiscal year 2005, MDA paid the additional costs that the Air Force incurred because missile defense contract personnel were located on the base and because the number of security personnel was increased to protect the missile defense mission. However, for the first 7 months of fiscal year 2005, MDA and the Air Force continued to disagree as to which party would pay installation support and services costs for the last two quarters of fiscal year 2005. In May 2005, MDA agreed to assume these costs. MDA transition office officials said that the issue of Eareckson support costs would be raised again in fiscal year 2006. MDA officials told us that Eareckson installation support and services costs will continue to be an issue because MDA is being asked to pay costs that are normally paid by the installation's host and that MDA is not paying at other bases with which it has agreements. For example, the host typically provides fire protection for the base, and the tenant would pay only the additional cost created by the tenant's residency. However, at Eareckson, MDA is being asked to pay a portion of the cost that the Air Force is incurring to provide a basic fire protection capability. The officials said that they fear the Eareckson installation support and services agreement could establish a precedent that the military services could insist on following at other bases where missile defense assets are located. Should this happen, MDA officials contend that MDA would, in effect, be supplementing the military services' operation and maintenance budgets. In addition to the contact named above, Barbara Haynes, Assistant Director; David Hand; Mary Quinlan; Adam Vodraska; and Karen Sloan made key contributions to this report.
In 2002, the Department of Defense (DOD) implemented a new acquisition model to develop a Ballistic Missile Defense System (BMDS) that included all major missile defense acquisitions, some of which were being developed by the military services. The model called for the management and funding responsibility for production, operation, and sustainment of a capability to be transferred to a military service when a BMDS element or major component is technically mature and plans for production are well developed. The Missile Defense Agency (MDA) was given responsibility for developing the BMDS and recommending the transfer of management and funding responsibilities to the services. In 2004, MDA emplaced an initial missile defense capability, but DOD did not transfer management and funding responsibility for that capability. Because a formal transfer did not occur, GAO was asked to (1) identify DOD's criteria for deciding when a missile defense capability should be transferred to a service and (2) determine how DOD is managing the costs of fielding a BMDS capability. There is currently uncertainty as to which assets may eventually be transferred to each military service and under what conditions those transfers should occur. This uncertainty makes it difficult for the services to plan to address the requirements of DOD acquisition regulations and realign their budgets to support the missile defense mission. According to MDA and other DOD officials, when transfer criteria were established in 2002, the Department did not fully understand the complexity of the BMDS and how it could affect transfer decisions. For example, it has been difficult to determine whether MDA or a military service will be responsible for managing and funding some assets, such as stand-alone missile defense radars, because these assets are not integrated on service platforms or do not perform core service missions. MDA officials suggested that these components could be operated by either contractors or military personnel and MDA might fund their operation and sustainment. A team that includes representatives from the military services, the combatant commands, MDA, and other DOD offices was established early this year to address transfer issues. However, because MDA and the services have been unable to reach agreement on the transfer of some missile defense assets, a unit under the Joint Chiefs of Staff was tasked in July 2005 with recommending revisions to the existing transfer criteria. MDA budgeted $1.5 billion of its fiscal year 2005 research and development funds to acquire interceptors and radars and upgrade various BMDS components. It expects to continue to acquire and upgrade BMDS assets through 2011 and beyond. However, MDA and the services disagree as to who should pay for operating and sustaining the initial defensive capability after fiscal year 2005. Additionally, although DOD has budgeted $68.5 billion to develop, procure, operate, and sustain a missile defense capability between 2005 and 2011, it has not completely determined whether additional operation and sustainment funds will be needed, and it has not included all known operation and sustainment costs in its budget. Until DOD decides who will fund these costs, the services will likely continue to provide only the funding that they have been directed to provide. As a result, some needs--for which neither MDA nor the services have planned--will go unfunded. 
Additionally, if the funds budgeted for some purposes, such as logistical support for the BMDS, turn out to be insufficient, DOD will either have to take funds from other programs or spend less on missile defense.
Physicians incur a variety of expenses in operating their practices that contribute to the costs of performing procedures. These include salary costs for nurses, technicians, and administrative staff plus spending for medical equipment, medical supplies, rent, utilities, and general office equipment and supplies. Expenses vary among practices, depending on such factors as the size of a practice, mix of specialties involved, geographic location, health care needs of the patients, and types of procedures provided. A resource-based, relative-value payment system ranks procedures on a common scale, according to the resources used for each procedure. The need to estimate and rank practice expenses for thousands of medical procedures presents HCFA with several enormous challenges. Most physicians’ practices have readily available data on their costs, such as wages for receptionists and clinical staff and the costs associated with rent, electricity, and heat. However, Medicare pays physicians by procedure, such as for a skin biopsy, so HCFA needs to estimate the portion of total practice expenses associated with each procedure—data that are not readily available. The task is made more difficult because of the significant variations in practice expenses among individual physicians and across practice settings. For example, a physician in a solo practice is likely to have practice costs different from those of a physician in a group practice. The effect of both problems—the difficulty in allocating practice expenses to procedures and the variation in expenses among practices—is mitigated somewhat because Medicare’s fee schedule allowance for each procedure is based on the procedure’s ranking relative to all other procedures. Even though the actual expenses associated with a procedure cannot be precisely measured and vary among physicians’ practices, the expense of one procedure relative to another is easier to estimate and is likely to vary less across practices. The resource-based practice expense RVUs that HCFA first proposed in 1997 and then implemented in 1999 have been the subject of widespread debate among physicians’ groups. This controversy is not unexpected, since the legislative requirement that fee schedule changes be budget neutral means that some physicians’ specialty groups would be likely to benefit from the changes at the expense of other groups. In other words, total Medicare practice expense payments to physicians will not change, but payments for particular procedures, and consequently for certain specialties, could change. To moderate the effects of the expected redistributions, the BBA required that the new RVUs be phased in over a 3-year period. In 1999, the RVUs used to determine Medicare’s practice expense fee schedule payments consist of 25 percent of the new resource-based RVUs and 75 percent of the charge-based RVUs. The share based on resource-based RVUs will increase to 50 percent in 2000, 75 percent in 2001, and 100 percent in 2002. Additionally, the BBA required HCFA to develop a refinement process for each year of the 3-year transition period. HCFA’s original methodology was described in a June 1997 proposed rule. An initial step was to develop estimates of the costs of the direct practice expenses associated with each procedure. HCFA convened 15 clinical practice expert panels (CPEP) organized by specialty and composed of physicians, practice administrators, and nonphysician clinicians, such as nurses. 
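Before turning to how the panel estimates were developed, a minimal sketch of the BBA transition blending described above may make the arithmetic concrete. The RVU values here are hypothetical; only the blend percentages are the statutory ones from the report:

```python
# Minimal sketch of the BBA's phase-in blending of practice expense RVUs.
# RVU values are hypothetical; the percentages are the statutory schedule.

RESOURCE_SHARE = {1999: 0.25, 2000: 0.50, 2001: 0.75, 2002: 1.00}

def blended_rvu(resource_based: float, charge_based: float, year: int) -> float:
    """Blend the new resource-based RVU with the old charge-based RVU."""
    share = RESOURCE_SHARE[year]
    return share * resource_based + (1 - share) * charge_based

# Example: a procedure whose practice expense RVU is 2.0 under the old
# charge-based method but 1.4 under the new resource-based method.
for year in sorted(RESOURCE_SHARE):
    print(year, round(blended_rvu(1.4, 2.0, year), 3))
# 1999 -> 1.85, 2000 -> 1.70, 2001 -> 1.55, 2002 -> 1.40
```

Each year the payment weight shifts a further 25 percentage points toward the resource-based value, which is what moderates the redistribution among specialties during the transition.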
The CPEPs estimated the type and quantity of nonphysician labor, medical equipment, and medical supplies required to perform each of more than 6,000 procedures. A HCFA contractor subsequently estimated the dollar costs of these direct expenses for each procedure. HCFA applied a series of adjustments to these direct expense estimates. First, HCFA reviewed the data to ensure that the identified costs were allowable under Medicare policy and revised them as necessary. Next, HCFA used a statistical “linking” methodology that adjusted the estimates from different CPEPs to put them on a common scale and make them directly comparable. HCFA then adjusted the CPEP estimates so that the proportions of aggregate practice expense dollars devoted to nonphysician labor, medical equipment, and medical supplies across all specialties were consistent with national practice expense data that the American Medical Association (AMA) collects through its Socioeconomic Monitoring System (SMS) survey. The survey is administered annually to a random sample of physicians. Lastly, HCFA adjusted the CPEP clinical and administrative labor estimates that appeared to be unreasonable. In the final step in the methodology, HCFA developed a formula to allocate to individual procedures the indirect expenses associated with running a practice. Indirect expenses such as rent and utilities are difficult to associate with individual procedures; therefore, the CPEPs did not estimate these expenses for each procedure. Instead, HCFA allocated indirect expenses to procedures based on the physician work, direct practice expense, and malpractice expense RVUs associated with a procedure. Thus, procedures that ranked high in each of these three categories were assigned proportionately more indirect expenses. Additional details of HCFA’s original proposal are contained in appendix II as well as in our February 27, 1998, report. HCFA’s new methodology was contained in its June 1998 proposed rule and revised slightly in its November 1998 final rule. For each medical specialty, HCFA estimated the aggregate spending for categories of direct and indirect practice expenses for treating Medicare patients, using the SMS survey data and Medicare claims data. Then, using the specialty’s CPEP estimates, HCFA allocated each of the direct expense totals for clinical labor, medical equipment, and medical supplies to individual procedures. To allocate the indirect costs to procedures, HCFA used a combination of a procedure’s physician work RVUs and direct practice expense estimates for clinical labor, medical equipment, and medical supplies. For procedures performed by multiple specialties, HCFA computed a weighted average of the allocated expenses based on the frequency with which each specialty performed the procedure on Medicare patients. This step was necessary because HCFA’s new approach created separate practice expense estimates by specialty for procedures performed by more than one specialty. However, Medicare pays the same amount for a procedure to all physicians, regardless of specialty. See appendix II for a more detailed description of HCFA’s revised methodology. HCFA’s new methodology is an acceptable approach for revising Medicare’s practice expense payments. The new methodology has much in common with HCFA’s original methodology. 
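The frequency-weighted averaging step just described can be sketched briefly. The specialty-level estimates and Medicare volumes below are hypothetical; only the method, weighting each specialty's estimate by how often it performs the procedure on Medicare patients, comes from the rule:

```python
# Sketch of the final step for a procedure performed by two specialties.
# All dollar amounts and volumes are hypothetical.

# Per-specialty practice expense estimate for the same procedure, derived
# upstream from the SMS cost pools and CPEP resource estimates.
estimate = {"ophthalmology": 60.0, "general surgery": 45.0}

# How often each specialty billed Medicare for the procedure.
frequency = {"ophthalmology": 8_000, "general surgery": 2_000}

total_volume = sum(frequency.values())
weighted = sum(estimate[s] * frequency[s] for s in estimate) / total_volume
print(f"Single practice expense amount across specialties: ${weighted:.2f}")
# (60.0 * 8000 + 45.0 * 2000) / 10000 = $57.00 -- one value for all
# physicians, since Medicare pays the same amount regardless of specialty.
```

The design choice is visible in the arithmetic: the specialty that performs a procedure most often dominates the final value for that procedure.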
In particular, both approaches use the SMS data to establish aggregate practice expense spending estimates, or cost pools, for different types of costs, and both use the CPEP data to identify the specific resources associated with individual procedures and to allocate costs to them. Further, the new methodology explicitly recognizes differences in practice expenses among specialties. Although several physicians' groups have criticized the new methodology for not being resource-based, their view is not shared by others. HCFA's revised methodology uses what are generally recognized as the best available data for creating resource-based practice expense values: the SMS annual survey data and the CPEP data. The annual SMS survey data are responses from a randomly selected, nationwide sample of several thousand physicians. Although other organizations conduct practice expense surveys, those surveys are not nationally representative and thus are inappropriate for developing resource-based practice expense values. To obtain more accurate information, a practice expense summary form is mailed to physicians in advance of the SMS survey so that physicians are better prepared to answer the practice expense questions. The CPEP data are the only available data that identify the specific resources used to deliver individual procedures. HCFA's new and original methodologies used these two data sources for similar purposes. Both used the CPEP data to identify the specific resources associated with individual procedures. Further, both methodologies used the SMS data to determine the distribution of total practice expense dollars among different types of costs. However, there were some key differences in how the two methodologies used the data, in particular in the recognition of differences among specialties. Under the original method, HCFA used the SMS data to create an aggregate cost pool for each type of direct expense. Under the new method, HCFA created a separate pool for each type of direct and indirect expense for each medical specialty. There are several other significant differences between the two methodologies. By creating separate, SMS-based practice expense cost pools for each specialty, HCFA's revised methodology explicitly maintains relative differences among specialties in their total practice expenses for labor, equipment, supplies, and other expenses. For example, SMS data indicate that ophthalmologists' practice expenses are $132 per hour while those of general surgeons are $54 per hour. These figures include $9 per hour in equipment expenses for ophthalmologists and $2 per hour for general surgeons. HCFA's earlier methodology included certain adjustments that would not have maintained such differences. HCFA's revised methodology is also more straightforward and easier to understand than HCFA's first proposal, a view shared by many of the physicians' groups we contacted. For example, in its original methodology, HCFA used a complex statistical model to adjust the CPEP estimates, an adjustment we criticized in our earlier report because it contained technical weaknesses that may have biased the estimates. The new methodology no longer contains this adjustment and eliminates other controversial steps in HCFA's first proposal that we criticized.
Further, the new method treats administrative labor as an indirect expense; this is consistent with our February 1998 recommendation that HCFA consider reclassifying administrative labor from a direct to an indirect expense. The American Academy of Family Physicians, the American College of Physicians-American Society of Internal Medicine, and the American Society of Clinical Oncology believe that HCFA's revised approach for establishing practice expense RVUs is not resource-based. They note that specialties whose procedures may have been overvalued under the charge-based system will continue to benefit under the new methodology. Such specialties, they believe, have had greater revenues and therefore more money to spend on their practices. Consequently, they believe that specialties performing overvalued procedures are likely to have incurred some unnecessary costs and to have inflated cost pools reflected in the SMS data, while other specialties will be disadvantaged because their relative costs will be underestimated. They also note that, in its final rule, HCFA stated its belief that this issue of historical differences in payment should be discussed during the refinement period. These physicians' groups believe that HCFA should use its original method because it resulted in relative values similar to those previously estimated by the Physician Payment Review Commission and others. The RVUs developed under HCFA's current method would result in smaller redistributions among specialties than those developed under its original method. For example, HCFA estimates that practice expense payments to general practitioners under its original methodology would be 7 percent greater over a 4-year period than under the prior charge-based methodology, while such payments would be only 4 percent greater under its revised methodology. Payments to cardiac surgeons would be reduced by 30 percent under the original methodology, more than twice the 12-percent reduction under the revised methodology. Of the $18 billion Medicare spent on practice expense payments in fiscal year 1997, $2 billion would have been distributed differently across specialties if the original approach had been in effect, $500 million more than under the new methodology. Some economists and physicians' groups, however, note that physicians work in a competitive environment that is subject to market pressures, such as managed care contracting, and contend that physicians seek to maximize their income by minimizing costs. This argument leads to the conclusion that if Medicare has historically overpaid some specialties, the overpayments would be reflected in higher net incomes for those specialties rather than in higher expenses. While neither position can be conclusively verified, we believe that the use of incurred costs, as reported on the SMS survey, is consistent with traditional cost accounting practices. Traditional cost accounting does not normally involve determining how efficiently costs were incurred to produce a service, and making such a determination with accuracy would be very difficult. Even though HCFA used the best available data and developed a generally acceptable methodology for establishing practice expense RVUs, specific questions about both the data and the methodology need to be reviewed and addressed, a position supported by virtually all the physicians' groups we contacted. The data contain certain weaknesses, such as small sample sizes. The methodology includes some assumptions and adjustments that have not been validated.
Many of these issues can be addressed during the 3-year implementation period and will result in modifications to the final RVUs in 2002; others will require efforts by HCFA over a longer term. Readily available alternatives to the SMS and CPEP data do not exist. The SMS survey provides nationally representative data on practice expenses, while the CPEP data are the only data available on practice expenses that identify the specific resources associated with individual procedures. Nevertheless, limitations with both data sources for creating resource-based practice expense RVUs need to be overcome. As described below, workable options are available for many of these issues. The AMA, many physicians’ groups, and the Medicare Payment Advisory Commission (MedPAC) identified three basic limitations with the SMS data. First, response rates to the practice expense questions on the SMS survey tend to be low—about 40 percent—compared with the overall survey response rate of about 60 percent. This reduces the sample sizes and can bias the data if the expenses of physicians who failed to respond to the survey are not comparable to the expenses of those who did. Second, the sample sizes for some specialties either are too small to permit separate calculations of practice expense cost pools or result in relatively imprecise estimates. Third, the SMS data represent a physician’s portion of a group’s practice expenses. Because HCFA’s methodology is based on calculating practice expenses per hour for each physician respondent’s practice, HCFA had to make a number of assumptions about the data. For example, HCFA assumed that all physician owners in a group practice had the same practice expenses as the physician respondent. To the extent that these assumptions are not true, the practice expense cost pools are inaccurate. This assumption may be particularly problematic for multispecialty practices in which physicians within the same practice but from different specialties may have different practice expenses. Some of these limitations with the SMS data can be addressed during the 3-year phase-in period. To determine whether the SMS data are subject to nonresponse bias, for example, HCFA could (1) compare the characteristics of respondents and nonrespondents to the SMS survey or (2) compare the characteristics of respondents to a comparable external data source. HCFA could then evaluate the need for corrections. HCFA has not yet conducted analyses to determine if nonresponse bias is an issue with the SMS survey, but its new rule indicates the agency’s willingness to review and refine the data. Increasing the SMS sample and redesigning some of the questions would help address other known limitations but would most likely not result in improvements during the phase-in period. The limitations associated with small sample sizes can be addressed in future SMS surveys of physicians’ practice expenses. In fact, HCFA identified working with the AMA to improve the SMS survey as one of its most important tasks during the 3-year phase-in period. In future SMS surveys, for example, more physicians could be contacted, thereby providing HCFA with larger sample sizes for developing specialties’ practice expense cost pools. This approach, however, would involve decisions as to how many additional physician responses are needed and who would pay for the additional survey costs. It is not clear whether HCFA will use the results from future SMS surveys to refine and adjust the practice expense RVUs.
HCFA officials expressed skepticism about doing so because they fear that physicians might inappropriately inflate their reported practice expenses. This could result in some specialties’ increasing their practice expense cost pools, with proportional reductions in cost pools for other specialties since all adjustments must be budget neutral. However, there are ways to test for such bias. For example, AMA representatives told us that comparisons with earlier years’ responses could indicate areas for further review where physicians might be trying to manipulate their responses. In its final rule, HCFA suggested that future SMS survey data for a specialty that showed significant changes from earlier surveys be selectively audited. However, AMA representatives were concerned that auditing future SMS results might discourage physician participation in the survey; they suggested that less formal types of validation might be more productive, such as conducting follow-up telephone calls with physicians to explore their answers and to ensure that they understood the questions. Rather than collecting practice expense data on individual physicians, an approach that required HCFA to make certain assumptions about the data, future surveys could capture practice expenses for all physicians in a practice. The AMA plans to develop a new survey instrument for this purpose. AMA representatives said that they may pilot-test this survey in 2000 and alternate it with a survey of individual physicians every other year. Results from the survey of all physicians in a practice would likely not be available to HCFA until after the 3-year phase-in period ends. HCFA used the CPEP data to allocate the practice expense cost pools to individual procedures because the CPEP data are the only data that allow this. Some physicians’ groups, however, have criticized these data as representing merely the “best guesses” of physicians and other panel members. They have also criticized the CPEPs for (1) not being representative of the different practice settings or types of physicians who provide particular procedures and (2) using different assumptions and definitions, leading to differences in the resources identified by different panels for the same procedures. As we noted in our February 1998 report on HCFA’s first proposal, the use of expert panels is an acceptable method of developing procedure-specific practice expense data. We explored other primary data gathering methods and concluded that each has practical limitations. However, we reported that it is important for HCFA to refine and validate these data. We noted that collecting actual data on key procedures from a limited number of physicians’ practices through surveys or on-site reviews during the 3-year phase-in period would enable HCFA to assess the CPEP data and identify needed refinements. HCFA’s revised methodology includes certain assumptions and adjustments that were prompted by limitations in the available data, given the difficult task of estimating and ranking practice expenses for thousands of medical procedures. Such assumptions and adjustments should be reasonable and supported by data as much as possible. In some cases, HCFA has taken steps to review the reasonableness of these assumptions and adjustments; in other cases, it has not. Several examples are presented below to illustrate the kinds of assumptions and adjustments HCFA will need to review during the 3-year phase-in period; others are discussed in appendix III.
Because Medicare pays separately for chemotherapy drugs provided by oncologists, HCFA adjusted their medical supply cost pool to prevent duplicate Medicare payments. Oncologists reported medical supply costs of $87 per hour in the SMS survey, compared with an average of $7 for all physicians. Since the SMS supply data include drug costs, HCFA officials believed that the $87 per hour figure includes the cost of chemotherapy drugs paid separately by Medicare. HCFA therefore used the average for all specialties in computing the oncologists’ medical supply cost pool to avoid duplicate payments for these drugs. Oncologists acknowledged that the costs of chemotherapy drugs are included in the SMS survey but argued that HCFA’s adjustment was too large because oncologists incur higher supply costs than the average physician. In this case, HCFA has conducted a limited analysis to determine the reasonableness of its adjustment to the SMS data. First, HCFA calculated the oncology supply cost pool based on the $87 supply cost per hour. HCFA then compared that cost pool with the payments Medicare made to oncologists for drug reimbursement. HCFA found that the drug reimbursement significantly exceeded the supply costs that oncologists reported on the SMS. Although this analysis did not determine what portion of the $87 is attributable to drug costs, it does indicate that HCFA’s adjustment is a reasonable starting point. However, more data are needed to determine the appropriate adjustment. During the phase-in period, HCFA plans to conduct a more complete analysis of oncologists’ actual drug and supply costs. HCFA made other adjustments or assumptions for which it has yet to gather supporting data. For example, to estimate the practice expenses per hour for specialties not included in the SMS survey, HCFA used the SMS data from proxy specialties. Since the SMS survey does not separately identify hand surgeons, HCFA assumed that their practice expenses are the same as those of orthopedic surgeons and used orthopedic surgeons’ SMS data to determine hand surgeons’ practice expense cost pools. Whether hand surgeons and orthopedic surgeons have similar practice expenses is not known. Expected Medicare payments for some specialties not included in the SMS survey differ greatly between HCFA’s two proposals, but it is not known which method produces the better estimates. For example, in its revised methodology HCFA used the practice expenses of general internists as a proxy for calculating the practice expenses for chiropractors. On the basis of HCFA’s estimates, chiropractors could expect an 8-percent reduction in their Medicare payments under HCFA’s final rule, whereas they expected a 14-percent increase under HCFA’s first proposed rule. Such discrepancies may indicate a problem in using some specialties as proxies for others. Additional review and analysis could help validate HCFA’s practice expense per hour assumptions for specialties not included in the SMS survey. HCFA noted in its final rule that it will work with all specialties not represented in the SMS survey to ensure that appropriate data are used to calculate their practice expense RVUs. Other HCFA assumptions and adjustments warrant reexamination. For example, HCFA used physician work RVUs in allocating indirect expenses to procedures—a method supported by MedPAC staff and some physician groups.
However, physician work RVUs reflect not only the level of skill physicians require to deliver a procedure but also the stress stemming from the risk of harm to patients—measures not generally associated with practice expenses. The time a physician requires to perform a procedure may be a better measure of the indirect expenses associated with that procedure. For example, utility expenses should not differ between two office-based procedures that require the same amount of a physician’s time but have different stress levels. In its final rule, HCFA acknowledged that using the physician work RVUs as an indirect expense allocator has shortcomings. It is important that HCFA develop a plan for ensuring that the most critical issues associated with the new methodology and data are addressed first. HCFA should base its decisions about which issues to address first on sensitivity analyses that would allow it to evaluate the effects of various adjustments to the methodology and data and focus on those that have the greatest effect on the new practice expense RVUs. Using resources to fully examine those that have very limited effects would be inefficient. HCFA has done little in the way of conducting such analyses and therefore does not know where to most effectively target its refinement efforts. Another issue of particular importance concerns whether HCFA will use supplemental practice expense data provided by individual medical specialties to revise the practice expense cost pools. Physicians’ groups believe that there may be circumstances in which alternative data are more representative and accurate than the SMS data and therefore should be used to supplement the SMS data. The Society of Thoracic Surgeons, for example, recently submitted additional practice expense data to HCFA based on surveying more thoracic surgeons during the 1998 SMS survey than would normally be contacted. The Society believes that HCFA should use these new data, along with the prior SMS data, to recalculate thoracic surgeons’ cost pool. HCFA officials told us that they will be cautious about using alternative data sources because of their potential bias. Alternative data also may not be compatible with the SMS data, as HCFA found with data recently submitted by some specialties. HCFA officials said that they would be willing to base their refinement of a specialty’s practice expense cost pool on alternative data if there is compelling evidence that the SMS data are inaccurate or not representative. It may be most appropriate, for example, to use additional or alternative data for specialties with small SMS sample sizes or for specialties whose cost pools were based on the practice expenses of other specialties. In deciding whether to use data from other sources to augment the SMS data, HCFA will need to review the data carefully. HCFA must be assured that the data are reasonable and compatible, are collected from a representative sample of physicians who work in various settings, and are not biased. One way to help ensure data compatibility is to use a common survey instrument and methodology to collect the data. Further, specialties that do not conduct their own studies could be disadvantaged by studies that result in redistributing Medicare funds from one specialty to another.
Consequently, HCFA officials said that before accepting data from other sources they (1) would like to have the data selectively audited by an independent entity and (2) need to establish a process allowing specialty societies to comment on proposed changes to their practice expense cost pools resulting from using the new data. Refinement of the CPEP data is another area where HCFA may be assisted by outside resources during the phase-in period. HCFA twice attempted to refine these data by convening panels of physicians but neither attempt succeeded. Given this experience, HCFA is considering other options, such as using AMA’s Specialty Society Relative Value Scale Update Committee (RUC) to refine the CPEP data. The RUC is a panel of physicians representing multiple specialties and is experienced in reaching consensus on difficult physician payment issues affecting many different specialties. To help HCFA refine the CPEP data, the RUC has decided to form a Practice Expense Advisory Committee that will review comments on code-specific CPEP data received by HCFA. The advisory committee will consist of both physicians and nonphysicians, such as nurses and practice administrators. As currently conceived, the advisory committee will submit its recommendations to the RUC for review and the RUC will make final recommendations to HCFA. Further, plans call for the advisory committee to develop recommended CPEP-like data on the estimated resources for codes that were established between 1996 and 1998 and those that will be established in 1999. HCFA does not have CPEP data for these codes because they were not in use when the CPEPs met. In its final rule, HCFA stated that it may use contractors to provide it with advice on how to deal with the many technical and methodological refinement issues it faces during the refinement period. HCFA still needs to define the process and organizational structure it will use to seek this advice. MedPAC staff emphasized that HCFA needs to create clearly defined, step-by-step refinement processes that involve public comment and review. This should result in a coordinated, defined effort, they said. HCFA also needs a plan for making ongoing updates to the RVUs; new codes are added to the fee schedule each year, and these codes must be assigned practice expense RVUs. Further, the RVUs need to be revised to reflect changes in how procedures are delivered and changes in practice patterns. Finally, it is essential that HCFA continue monitoring indicators of beneficiaries’ access to physicians’ care to determine whether access is compromised by changes to Medicare’s physician fee schedule payments. Virtually all the physicians’ groups we met with support HCFA’s use of the RUC to address ongoing updates to the practice expense RVUs. HCFA has not yet decided upon a permanent process for assigning practice expense RVUs to new procedures or revising the RVUs for existing procedures, but its final rule mentions the potential for the RUC to be involved in these issues in the future. The RUC has been proactive on this topic and has proposed to HCFA that it develop practice expense RVUs for new and revised procedures implemented in 2000 and beyond. The RUC said that it would seek input from nurses, practice managers, and others who have expertise in physicians’ practice expenses. 
Physicians’ group representatives and HCFA officials believe that it is important to have these other experts involved in developing the practice expense RVUs because such experts may be more knowledgeable about practice expenses than physicians are. A periodic, comprehensive review and update process is needed because the Medicare statute requires the Secretary of HHS to review the relative values for all physician fee schedule procedures at least once every 5 years. Since the practice expense RVUs become final in 2002, HCFA will need to review them before 2007. Even though HCFA has said that it is hesitant about using future SMS surveys to refine the practice expense RVUs during the phase-in period and has no plans to use AMA’s survey of practices’ total expenses, it may wish to use such data in the periodic 5-year review. The RVUs must reflect the ongoing technological changes in medicine, as well as the changes in how physicians practice; future surveys would provide HCFA with this necessary information. Additionally, HCFA may need to recalculate the costs of equipment and supplies associated with procedures using new cost data. Finally, it is important for HCFA to continue monitoring beneficiaries’ access to care, given the changes in what Medicare pays physicians. Since Medicare began paying physicians on the basis of a national fee schedule, HCFA has monitored indicators of beneficiaries’ access for adverse consequences. For example, HCFA surveys beneficiaries annually and modified its 1998 survey to further clarify access problems beneficiaries may have been experiencing. Based on these analyses, beneficiaries’ access to care has remained good since the fee schedule’s implementation. However, some medical specialties whose Medicare payments were reduced as other components of the fee schedule were implemented could experience further reductions under HCFA’s proposed changes in the practice expense RVUs. For example, between 1992 and 1996, cardiologists, gastroenterologists, and pathologists experienced Medicare payment reductions of 9, 8, and 9 percent, respectively. Under the new practice expense payments, these specialties face additional expected payment reductions of 9, 15, and 13 percent, respectively. Such cumulative payment reductions could affect physicians’ willingness to care for Medicare beneficiaries. Non-Medicare patients, too, could experience changes in their access to physicians’ services resulting from changes in Medicare’s payments; many private payers and Medicaid programs base their payments to physicians on Medicare’s fee schedule. It is important, therefore, to continue to monitor beneficiaries’ access to physicians’ services, paying particular attention to the specialties that are most adversely affected by changes in the fee schedule. Recognizing this, HCFA told us that the next HHS report to the Congress addressing changes in access to care will examine, to the extent possible, access indicators for the procedures with the greatest cumulative reductions in Medicare fees. The Medicare physician fee schedule replaced a payment system that was criticized for providing more generous payments for some services than others relative to the actual resources needed to provide them and, as a result, for promoting an inappropriate allocation of medical services. The new system, based on resource-based RVUs, is intended to ensure appropriate payment for physicians’ services relative to one another, based on the resources needed to provide the services.
However, this payment model has not been easy to implement. Estimating and ranking practice expenses for thousands of medical procedures is inherently difficult and imprecise. HCFA’s new methodology represents a reasonable starting point for creating resource-based practice expense RVUs. It uses the best available data for this purpose and explicitly recognizes specialty differences in practice expenses. It also eliminates certain adjustments to the CPEP estimates that we questioned in HCFA’s original methodology. In either methodology, HCFA is faced with using less-than-perfect data that need to be refined over the phase-in period. Although the SMS and CPEP data provide a solid foundation for creating resource-based practice expense RVUs, both have their limitations. The new practice expense RVUs should be based on the most accurate and reliable data possible. It is, therefore, important for HCFA to use options that improve these data. It is also important for HCFA to collect and analyze additional data that would enable it to validate or, where necessary, alter the assumptions and adjustments underlying its revised methodology. Additionally, during the phase-in period, HCFA has the opportunity to review and possibly revise some of its policy-related assumptions and adjustments, such as using physician time, rather than physician work RVUs, in its indirect expense allocation calculations. It is important that HCFA make effective use of its resources in the short term to validate and improve the practice expense RVUs. HCFA does not yet have a plan for identifying the issues that have the greatest effect on the new RVUs. Sensitivity analyses would provide HCFA with this critical information so that it can decide where to target its corrective actions most effectively. In addition, for the longer term, HCFA needs to specify processes for updating the practice expense RVUs. Processes are needed for assigning practice expense RVUs to new procedures, revising the RVUs to reflect changes in how current procedures are performed, and providing for a review of the resource-based practice expense RVUs at least once every 5 years. Beneficiaries’ access to care will be a key measure of physicians’ acceptance of the new practice expense payments. How physicians respond to changes in their payments is unknown, but HCFA should continue to monitor indicators of beneficiary access to care. Such monitoring is crucial to ensure that Medicare’s payments to physicians are adequate to maintain beneficiaries’ access to care. We recommend that the Administrator of HCFA

- Use sensitivity analysis to identify issues with the methodology that have the greatest effect on the new practice expense RVUs and to target additional data collection and analysis efforts. One clear example of where HCFA should evaluate different policy options for revising the methodology is in the use of physician time, instead of physician work, to allocate indirect expenses.
- Develop plans for updating the practice expense RVUs that address how to (1) assign practice expense RVUs to new codes, (2) revise the RVUs for existing codes, and (3) meet the legislative requirement for a comprehensive 5-year review of the resource-based practice expense RVUs.
- Monitor indicators of beneficiaries’ access to care, focusing on procedures with the greatest cumulative reductions in Medicare payments, and consider access problems when evaluating the physicians’ payment system.
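To make the first recommendation concrete, the sketch below shows the kind of sensitivity test we have in mind: recompute a set of relative values with a candidate adjustment switched on and off and measure the resulting swing. The procedure names, cost figures, and staff-cost adjustment are hypothetical illustrations, not HCFA data or HCFA's actual computation.

```python
# Hypothetical sensitivity test: how much do relative values move when a
# candidate adjustment (here, excluding the cost of staff who accompany
# physicians to the hospital) is applied? Large swings mark issues worth
# refining first; small swings suggest the issue can wait.

def relative_values(costs, exclude_staff_costs):
    """Toy stand-in for the RVU calculation; a real run would re-execute
    HCFA's full methodology with the candidate adjustment on or off."""
    adjusted = {code: (cost - staff if exclude_staff_costs else cost)
                for code, (cost, staff) in costs.items()}
    total = sum(adjusted.values())
    return {code: value / total for code, value in adjusted.items()}

# Each entry: (total cost, hypothetical cost of accompanying staff).
costs = {"bypass_surgery": (900.0, 120.0),
         "stress_test": (500.0, 0.0),
         "office_visit": (60.0, 0.0)}

baseline = relative_values(costs, exclude_staff_costs=False)
adjusted = relative_values(costs, exclude_staff_costs=True)
swing = max(abs(adjusted[c] - baseline[c]) / baseline[c] for c in costs)
print(f"largest relative change: {swing:.1%}")  # about 9% in this example
```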
We provided HCFA with a draft of this report and received written comments in response. We also gave copies of the draft to representatives of physicians’ groups, a medical group we contacted during our work, and MedPAC; they provided us with oral comments. The following summarizes the comments and our responses. HCFA concurred with each recommendation and said that it was pleased that we found HCFA’s revised methodology for creating resource-based practice expense values to be a reasonable starting point. HCFA agreed that it needs to set priorities and target its refinement efforts on issues having the greatest effect but did not say how it would select its targets for refinement. We believe that a systematic approach to establishing refinement priorities, such as would be afforded through sensitivity analysis, would be an effective tool for evaluating refinement options. In its comments, HCFA said that it has plans to obtain contractor support and other independent advice on the broad methodological issues it faces. Further, HCFA noted that the Secretary of HHS is required by legislation to monitor and report annually to the Congress on a number of health care issues, including access to care. HCFA said that the next HHS report will, to the extent possible, examine access to care indicators for procedures with the greatest cumulative reduction in Medicare fees. We included these points in our report. HCFA also provided us with technical comments, which we incorporated where appropriate. HCFA’s comments appear in appendix V. Regarding HCFA’s revised approach for developing resource-based practice expense payments, representatives from the Practice Expense Coalition said that they were pleased that we support HCFA’s revisions. They believe that the new methodology more effectively recognizes differences in practice expenses among physician specialties. Representatives from several other physicians’ groups, including the American College of Physicians-American Society of Internal Medicine and the American Academy of Family Physicians, however, said that the new methodology is not resource-based in that it reflects some unnecessary expenses that have resulted from historical differences in practice expense payments. MedPAC staff too said that there may be historical payment bias in the data. We revised our report to better reflect these concerns and now note that HCFA will accept comments on the issue of historical payment differences during the 3-year refinement period. We continue to believe, however, that HCFA’s new methodology is resource-based; it uses the best available data to rank procedures on a common scale according to the resources used. Further, trying to determine and measure the extent to which certain procedures may have been overvalued would be very difficult; doing so would also be inconsistent with traditional cost accounting practices that do not measure the efficiency with which costs are incurred in providing a service. Representatives from MedPAC, AMA, and many other physicians’ groups further asserted that we understated the differences between HCFA’s original and revised methodologies. We clarified the report by adding more information about how the two methodologies differ. Representatives from the Practice Expense Coalition said that we understated HCFA’s refinement workload by not discussing all the refinement issues HCFA discusses in its final rule. We believe that our report focuses on the major refinement issues HCFA faces in the coming 3 years. 
While we recognize that the report does not cover all refinement issues, we do not believe that this is necessary. We use certain issues to illustrate the types of refinement tasks facing HCFA and the need for HCFA to develop processes for addressing these issues. Additionally, certain refinement issues that some suggested we include in our report, such as the base year to be used for calculating the new practice expense RVUs, the behavioral offset, and site of service differentials, were outside the scope of our work. AMA and American College of Physicians-American Society of Internal Medicine representatives suggested that we more clearly explain the benefits and limitations we identified with the CPEP data in our first report on physician practice expense payments. We have added some material from our earlier report in response to this suggestion. Society of Thoracic Surgeons and AMA representatives agreed with us that it is very important for HCFA to decide what, if any, data HCFA will accept from medical societies to revise or supplement the SMS data. Representatives from the American College of Physicians-American Society of Internal Medicine suggested that the RUC develop standards for medical societies to follow when conducting future practice expense surveys. They believe that the RUC is the appropriate body to serve this role and that the RUC can critically analyze survey results as it now does for development and review of the physician work RVUs. As we note in our report, it is important for HCFA to be assured that any data it uses to augment the SMS data are reasonable, compatible, and otherwise unbiased. Representatives from MedPAC, AMA, and two other physicians’ groups questioned our recommendation that HCFA evaluate using physician time, instead of physician work RVUs, for allocating indirect expenses to procedures. MedPAC staff support using physician work RVUs because they believe that indirect costs should be distributed in proportion to all inputs to a procedure—physician time as well as the inputs of nonphysician staff plus the equipment and supplies used. Representatives from MedPAC, AMA, and several physicians’ groups said that they are concerned about the accuracy and reliability of the physician time data. Further, representatives said that physicians have a better understanding of, and greater confidence in, the physician work RVUs. We continue to believe that HCFA should evaluate using physician time as an indirect cost allocator. As explained earlier in the report, physician work RVUs include measures not generally associated with practice expenses, such as the stress on the physician to perform a procedure. In contrast, indirect expenses, such as utility costs and rent, will vary depending upon the amount of physician time associated with a procedure. Moreover, physician time is used in calculating procedures’ physician work RVUs. Representatives from the American College of Physicians-American Society of Internal Medicine and the American Academy of Family Physicians suggested that we expand our recommendation on monitoring beneficiaries’ access to care to include monitoring increases in beneficiaries’ use of services. We did not modify our recommendation because we believe that HCFA’s current research on beneficiary access already includes several components that would indicate increases in access.
An AMA representative said that our discussion of beneficiary access to care should note that the effects of the Medicare fee schedule go beyond Medicare since many private payers and Medicaid programs base their fees on Medicare’s payments. We noted this in the report. The physicians’ groups differed on whether HCFA should include the costs of staff who accompany physicians to the hospital when calculating the practice expense RVUs. Representatives from the American College of Physicians-American Society of Internal Medicine and the American Academy of Family Physicians believe that these costs should be excluded and noted that we agreed in our first report that HCFA appropriately excluded these costs from the CPEP data since Medicare pays for these expenses through other mechanisms. Representatives from the Practice Expense Coalition and American College of Surgeons said, however, that they do not believe that these costs represent double payment by Medicare and that these costs therefore should be included in HCFA’s calculations. We believe that taking the cost of these staff out of the CPEP estimates was appropriate under HCFA’s original methodology to avoid double payments by Medicare for these costs. Also, these costs were separately identifiable. Under HCFA’s revised methodology, avoiding double payments for these costs would require taking them out of the SMS data, which would be difficult since these costs are not separately identified. Therefore, as we state in the report, we believe that the most appropriate initial step is for HCFA to conduct sensitivity analysis to determine if including these costs significantly affects the RVUs. As agreed with your offices, we are sending copies of this report to the Secretary of HHS, the Administrator of HCFA, interested congressional committees, physicians’ organizations, and others who are interested. We will also make copies available to others upon request. This report was prepared by Robert Dee, Patricia Spellman, and Michelle St. Pierre. Please call me at (202) 512-7114 or William Reis, Assistant Director, at (617) 565-7488 if you have any questions. Efforts to reform Medicare’s payments to physicians began in the 1980s and were prompted by concerns about increasing program costs and flaws in the existing methods for reimbursing physicians. Medicare’s spending for physicians’ expenses per beneficiary had been growing at almost twice the rate of the gross national product. At the time, Medicare reimbursed physicians through the “customary, prevailing, and reasonable charge” system, but this payment system was criticized because it resulted in widely varying payments for the same service and contributed to inflation in Medicare’s expenditures. Concern was also raised that the payment levels favored surgical services at the expense of primary care services, resulting in distorted financial incentives. Limits on actual charges and a series of freezes and reductions in payment levels for particular services made the system increasingly complex. The Consolidated Omnibus Budget Reconciliation Act of 1985 required the Secretary of the Department of Health and Human Services (HHS) to study and report to the Congress on a resource-based relative value scale system for reimbursing physicians for their services. Such a system ranks services on a common scale according to the resources used in providing them. Payment for a service depends upon its ranking; services with a high ranking receive greater payment than those with a low ranking.
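As a minimal illustration of how such a scale works, the sketch below converts resource-based rankings into payments through a uniform conversion factor (a mechanism described below). The services, RVU figures, and the $36-per-RVU factor are hypothetical, not actual fee schedule values.

```python
# A relative value scale in miniature: each service's payment is its
# resource-based ranking times a uniform conversion factor.

relative_value_units = {"office_visit": 1.0,
                        "colonoscopy": 4.5,
                        "bypass_surgery": 44.0}
conversion_factor = 36.0  # dollars per RVU; illustrative only

for service, rvus in sorted(relative_value_units.items(), key=lambda kv: kv[1]):
    # Services with higher rankings receive proportionally greater payments.
    print(f"{service}: {rvus} RVUs -> ${rvus * conversion_factor:,.2f}")
```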
In its 1989 report to the Congress, the Physician Payment Review Commission (PPRC) recommended that a resource-based relative value scale system be adopted. The Omnibus Budget Reconciliation Act of 1989 mandated that Medicare implement an approach based on relative value that accounted for three components of costs—physician work, practice expense, and malpractice expense. The system was to be phased in over 5 years beginning in 1992. Implementation was to be budget neutral, meaning that aggregate payments could not be higher than they would have been if the payment system had not changed. The legislation also required the adjustment of each component of the fee schedule to reflect geographic differences in costs, the elimination of specialty-specific payment differentials for providing the same procedure, the implementation of a process for calculating the annual payment update, and the establishment of volume performance standards to track changes in the volume or intensity of procedures Medicare pays for. Health Care Financing Administration (HCFA) contractors at the Harvard School of Public Health had developed a resource-based physician work component for the new system, but methods for calculating resource-based relative values for practice and malpractice expenses had not been developed at that time. Each procedure included on Medicare’s physician fee schedule is assigned a relative value that is the sum of the relative value units (RVU) for the three cost components—physician work, practice expense, and malpractice expense. The RVUs reflect the resources used to provide that procedure relative to other procedures. In other words, a procedure with more RVUs uses more resources than a procedure with fewer RVUs. The RVUs are converted to a dollar payment using a monetary conversion factor. The product of the RVUs and the conversion factor is the Medicare physician fee schedule payment. Before the Balanced Budget Act of 1997 (BBA), there were three different conversion factors—one for surgical services, one for primary care services, and one for other services. The BBA created a single conversion factor for all services starting in 1998. Before the BBA, the conversion factors were updated annually on the basis of expected increases in physicians’ incomes and the costs of operating a medical practice. The update for each conversion factor was itself adjusted on the basis of a comparison of the actual growth in Medicare’s expenditures with expected growth as estimated by the Medicare Volume Performance Standard (MVPS). The MVPS target was based on such factors as the projected growth in Medicare payments and the enrollment and aging of Medicare patients, and it was used to restrain growth in spending on physicians’ procedures. In other words, if Medicare’s expenditures grew more quickly than expected, the next year’s updates for the conversion factors were reduced accordingly. The BBA required a new method to adjust the conversion factor update beginning in 1999, when the MVPS was replaced with a cumulative sustainable growth rate based on the growth of the real gross domestic product. The cumulative sustainable growth rate (SGR) operates in a manner similar to the MVPS and is used to restrain growth in spending on physicians’ procedures.
The SGR is based on the estimated growth in payments for all physicians’ services, beneficiaries enrolled in the Medicare fee-for-service program, real gross domestic product per capita, and expenditures for all physicians’ services that result from changes in statutes and regulations. The fee schedule payments also reflect geographic variation in input prices because the physician work, practice expense, and malpractice expense RVUs are each adjusted by a geographic practice cost index (GPCI). Each of the GPCIs—the cost-of-living, practice expense, and malpractice GPCI—measures the prices of relevant inputs physicians face in a geographic area relative to national average prices. The development of resource-based RVUs for the physician work component of the fee schedule began in the 1980s and took about 7 years to complete. Building on preliminary studies conducted earlier in that decade, Harvard researchers undertook a complex, multiphased process with the cooperation of the American Medical Association (AMA) and the assistance of about 100 physicians organized into technical consulting groups. These groups developed vignettes to describe standard scenarios for delivering procedures listed in AMA’s Physicians’ Current Procedural Terminology (CPT). In a national survey, physicians were asked to rank procedures on the basis of four standard elements: (1) physician time, (2) mental effort and judgment, (3) technical skill and physical effort, and (4) stress stemming from the risk of harm to patients. The researchers reported a high level of consistency in how physicians in the same specialty ranked the relative work required for the services they performed. Cross-specialty panels drawn from the physicians’ consulting groups chose procedure codes that represented equivalent or similar work within different specialties. Those codes then served as the basis for a statistical process to link all the codes ranked by each specialty along a common scale. Physician work RVUs for about 800 procedure codes were developed through the survey process. RVUs for the remaining codes were extrapolated from these 800 codes. For extrapolation, codes were assigned to families of codes, and small groups of physicians who had participated in the previous development stages developed the relative work values. Before the phase-in of the physician work RVUs could begin in 1992, HCFA had to create a process to both refine the existing values and create values for new procedure codes in the future. HCFA’s early refinement process involved using Medicare carrier medical directors to revise some of the newly created work RVUs and to assign RVUs to some low-volume codes and other codes not included in the Harvard study. Today, a different refinement process is in place that includes a multispecialty committee known as AMA’s Specialty Society Relative Value Scale Update Committee (RUC). The RUC, created in 1991, makes recommendations to HCFA on the relative values to be assigned to new or revised procedure codes. HCFA then convenes a meeting of selected medical directors from its claims processing contractors to review the RUC’s recommendations. Currently, HCFA accepts most of these recommendations. According to AMA representatives, the RUC process is supported by most physicians and has increased the medical community’s confidence in the physician work RVUs. Until January 1999, the practice expense component of the fee schedule was still calculated according to a charge-based system set up in 1989.
Two main data sources were used: Medicare claims and allowed charge data from 1991 and information on the percentage of revenue devoted to practice expenses from national surveys of physicians, specialists, and nonphysician practitioners reimbursed under Medicare’s fee schedule. The RVUs for practice expenses were computed as follows:

1. Using national survey data, determine the average proportion of revenue devoted to practice expenses for physicians overall, for various specialties, and for the nonphysician practitioners paid under Medicare’s fee schedule.

2. Using 1991 Medicare allowed charges, multiply the allowed charge for each procedure code by the average percentage of revenue devoted to practice costs for the specialty that performs that procedure. Example: For a service with a 1991 allowed charge of $100 performed only by family practitioners (whose practice expense-to-revenue proportion is 52.2 percent), the calculation would be as follows: $100 x 0.522 = 52.2 (initial dollar) RVUs.

3. For procedures performed by more than one specialty, multiply each specialty’s practice expense proportion by the frequency with which that specialty performs the service, add the products, and multiply the sum by the 1991 allowed amount. Example: For a service with a 1991 allowed charge of $100 performed 70 percent of the time by family practitioners and 30 percent of the time by internists (whose practice expense-to-revenue proportion is 46.4 percent), the calculation would be as follows: ((0.522 x 0.70) + (0.464 x 0.30)) x $100 = 50.5 (initial dollar) RVUs.

Malpractice RVUs are still computed under a similar statutory formula. HCFA adjusts the physician work, practice expense, and malpractice expense RVUs before they can be converted to dollars. Specifically, HCFA computes a geographic adjustment factor for each of the three types of RVUs; each factor is designed to reflect variation in the costs of the relevant component from the national average within fee schedule areas established by HCFA. After the three RVU components for each service are multiplied by their respective geographic adjustment factors and combined, the uniform national conversion factor is applied. This factor converts each total RVU into a dollar amount representing Medicare’s total allowed amount for each service. Medicare pays 80 percent of this amount, and the beneficiary copayment is 20 percent (once the annual deductible is met). The conversion factor is computed to ensure that budget neutrality is maintained and that total Medicare expenditures for physicians’ services will not differ by more than $20 million from what the expenditures would have been if the current fee schedule had not been adopted. This appendix details HCFA’s original and revised methodologies for creating resource-based practice expense payments that were contained in Federal Register notices of June 18, 1997, June 5, 1998, and November 2, 1998. Additional details of HCFA’s first proposal can be found in our February 27, 1998, report. In response to the Social Security Act Amendments of 1994, which required HCFA to develop resource-based practice expense payments that considered the staff, medical equipment, and medical supplies used to provide services and procedures, HCFA officials and researchers met in the spring of 1994 to discuss potential approaches. From these discussions, HCFA decided to develop separate estimates of the direct and indirect expenses associated with individual procedures.
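For reference, the charge-based computation in steps 1 through 3 above can be captured in a short sketch. The function and dictionary names are ours; the expense-to-revenue proportions and charges are the report's own examples.

```python
# Pre-1999 charge-based practice expense RVUs, following steps 1-3 above.

practice_expense_share = {"family_practice": 0.522, "internal_medicine": 0.464}

def charge_based_rvus(allowed_charge_1991, specialty_mix):
    """specialty_mix maps each specialty to the share of the service it
    performs; the shares should sum to 1.0."""
    blended = sum(practice_expense_share[s] * freq
                  for s, freq in specialty_mix.items())
    return allowed_charge_1991 * blended

# Single-specialty example: $100 x 0.522 = 52.2 initial dollar RVUs.
print(round(charge_based_rvus(100, {"family_practice": 1.0}), 1))
# Multispecialty example: ((0.522 x 0.70) + (0.464 x 0.30)) x $100 = 50.5.
print(round(charge_based_rvus(100, {"family_practice": 0.70,
                                    "internal_medicine": 0.30}), 1))
```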
HCFA convened 15 clinical practice expert panels (CPEP), organized by specialty, to estimate the direct practice expenses associated with procedures. Each panel included 12 to 15 members, about half of whom were physicians; the remaining members were practice administrators and nonphysician clinicians, such as nurses. The CPEPs reviewed more than 6,000 procedures and developed estimates of the type and quantity of nonphysician labor, medical equipment, and medical supplies required to perform each procedure. A HCFA contractor then estimated the dollar costs of these inputs for each procedure. Next, HCFA applied a series of adjustments to the direct expenses estimated by the CPEPs. First, HCFA reviewed the data to ensure that the costs arrived at were allowable under Medicare policy and revised the costs as necessary. Next, HCFA used a statistical “linking” methodology that adjusted the estimates from different CPEPs to put them on a common scale and make them directly comparable. HCFA also applied a scaling adjustment to the revised CPEP estimates to make them consistent with national practice expense data collected by AMA through its Socioeconomic Monitoring System (SMS) survey. The aggregate CPEP estimates for labor, equipment, and supplies each accounted for a different portion of direct expenses than the estimates from the SMS survey data. Therefore, HCFA inflated the CPEP labor expenses for each code by 21 percent, inflated CPEP medical supply expenses by 6 percent, and deflated CPEP medical equipment expenses by 61 percent. Lastly, HCFA adjusted estimates that appeared to be unreasonable. HCFA allocated indirect expenses (such as the cost of rent and utilities) to individual procedures based on the physician work, direct practice expense, and malpractice expense RVUs associated with the procedure. See figure II.1 for a summary of this methodology. The Balanced Budget Act of 1997 provided additional direction to HCFA for developing the new practice expense RVUs. It required that HCFA use, to the maximum extent practicable, generally accepted cost accounting principles that recognize all staff, medical equipment, and medical supplies, not just those that could be tied directly to specific procedures. This requirement, and comments on its first proposed rule, led HCFA to propose a revised approach for establishing practice expense RVUs that it described in a June 5, 1998, Federal Register notice and then in its final rule of November 2, 1998. The new approach begins with the total annual practice expenses incurred by individual medical specialties, such as cardiology, family practice, and thoracic surgery, and then allocates these expenses to individual procedures performed by that specialty. There are three basic steps in HCFA’s top-down approach: (1) for each specialty, estimate the total annual practice expenses for six different practice expense categories; (2) allocate a specialty’s total practice expenses to individual procedures performed by the specialty; and (3) compute a weighted average of the expenses for procedures performed by multiple specialties. Figure II.2 summarizes HCFA’s revised approach. Figure II.3 provides a detailed example, by step, of how the practice expense component is calculated. Step 1. For each specialty, estimate the total annual practice expenses for six different practice expense categories.
HCFA developed estimates for each specialty of the total annual practice expenses associated with treating Medicare patients for three direct expense categories—clinical labor, medical equipment, and medical supplies—and three indirect expense categories—administrative labor, office expenses, and other expenses. The incurred costs reported on the SMS survey for each type of practice expense were used to determine their proportion of the total for each specialty. The following formula summarizes how HCFA developed these estimates for each expense category:

total annual practice expenses for treating Medicare patients (cost pool) = (average practice expenses per patient care hour) x (hours spent treating Medicare patients for all procedures performed by the specialty)

HCFA developed ratios, for each specialty, of the average practice expenses incurred per hour of a physician’s time spent in patient care activities for each of the six expense categories. Estimates of the total annual physician practice expenses and average hours physicians worked per year in patient care activities were obtained from AMA’s 1995-97 SMS surveys. HCFA estimated the number of hours physicians spent treating Medicare patients by specialty. For each procedure, HCFA multiplied the number of times the specialty performed the procedure by the amount of time physicians require to perform it and then summed the results across all procedures the specialty performs. HCFA used its Medicare claims data to determine Medicare volume for procedures performed by different specialties. The estimated time a physician spends in performing each procedure is a component of the physician work RVUs. The SMS does not include as many physician specialties as HCFA recognizes, nor does it include nonphysician specialties, such as podiatry and optometry. As a result, HCFA had to use the SMS data from similar specialties to estimate the practice expenses per hour for specialties not included in the SMS, a process it called “crosswalking.” HCFA also had to crosswalk specialties whose SMS samples were too small to develop their own practice expense per hour ratios. HCFA used clinical judgment to determine appropriate crosswalks for most of these specialties. For example, to determine the practice expense cost pools for colorectal surgeons, psychologists, and chiropractors, HCFA used the SMS practice expense per hour data for general surgeons, psychiatrists, and internists, respectively. An example may help illustrate this first step in HCFA’s methodology. Assume that, on average, all cardiology practices spend $30 in clinical labor for each hour of direct patient care that a cardiologist performs in the practice. Also assume that all cardiologists nationwide spent a total of 20 million hours treating Medicare patients. Multiplying $30 per hour times 20 million hours results in a clinical labor cost pool for cardiologists of $600 million. If the cost pools for the five other expense categories add to $1.4 billion, this creates a total cost pool for cardiologists of $2 billion. Step 2. Allocate a specialty’s total practice expenses to individual procedures. Step 2 involves allocating a specialty’s total practice expense cost pool to the procedures that the specialty performs. In our example, this would mean allocating the $2 billion cardiology cost pool to the procedures cardiologists perform, such as echocardiograms and cardiac stress tests. HCFA used two allocation approaches.
HCFA treated the clinical labor, medical equipment, and medical supply expense categories as direct expenses and allocated them to procedures using the CPEP data: the CPEP clinical labor estimates for each procedure were used to allocate the clinical labor cost pool, the medical equipment estimates to allocate the medical equipment cost pool, and the medical supply estimates to allocate the medical supply cost pool. In cases in which two or more CPEPs developed estimates for the same procedure, HCFA simply averaged the different CPEPs’ estimates. For example, if the CPEP estimated that a cardiac stress test required five times as much clinical labor as an echocardiogram, then an individual stress test would receive five times the dollars from the clinical labor cost pool. HCFA treated administrative labor, office expenses, and other expenses as indirect expenses and used a combination of the fee schedule’s physician work RVUs associated with a procedure and the direct practice expense estimates for clinical labor, medical equipment, and medical supplies to allocate the three indirect expense cost pools to the procedures performed by a specialty. To continue with our example, assume that the cardiology cost pools for administrative labor, office expenses, and other expenses add to $1 billion. If a cardiac stress test has a combination of CPEP estimates and physician work RVUs that is twice as large as the combination for an echocardiogram, then the stress test procedure would receive twice as many dollars from the $1 billion pool as the echocardiogram. By adding the direct expense and indirect expense values assigned to a procedure, HCFA calculates the total amount of money to be assigned to a procedure. In our example, if the cardiac stress test has direct expenses of $150 and indirect expenses of $350, its total expenses would be $500. However, this is not the actual Medicare reimbursement. This process simply establishes relative ranks among procedures, which are later converted to payment levels. Step 3. Compute a weighted average of the expenses for procedures performed by multiple specialties. HCFA’s new approach creates separate practice expense estimates by specialty for procedures performed by multiple specialties. However, Medicare pays the same amount for a procedure to all physicians, regardless of specialty. HCFA therefore computed a weighted average practice expense, based on the frequency with which each specialty performs the procedure on Medicare patients. For instance, assume that, using HCFA’s methodology, the total expense for a cardiac stress test performed by a cardiologist is $500 but $400 when performed by a general surgeon and that the procedure is performed 60 percent of the time by cardiologists and 40 percent of the time by general surgeons. Medicare’s practice expense for this procedure would be $300 (or $500 times 0.6) plus $160 (or $400 times 0.4) for a total of $460. When aggregated, the overall effect of weighted averaging is to redistribute practice expenses among the various specialties. In our example, Medicare’s payments to cardiologists for a cardiac stress test are reduced by $40, from $500 to $460, while payments to general surgeons are increased from $400 to $460, a $60 gain.
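The three steps can be pulled together in a compact sketch that uses the hypothetical cardiology figures from the examples above. The procedure volumes are invented, and the code collapses the six expense categories into a single clinical labor pool.

```python
# HCFA's top-down approach in miniature: one expense category, two
# procedures, two specialties. Volumes and procedure names are illustrative.

# Step 1: cost pool = practice expenses per hour x Medicare patient-care hours.
clinical_labor_pool = 30 * 20_000_000         # $600 million, as in the example

# Step 2: allocate the pool in proportion to the CPEP resource estimates.
cpep_labor_units = {"stress_test": 5.0, "echocardiogram": 1.0}
medicare_volume = {"stress_test": 2_000_000, "echocardiogram": 8_000_000}

total_units = sum(cpep_labor_units[p] * medicare_volume[p]
                  for p in cpep_labor_units)
dollars_per_service = {p: clinical_labor_pool * cpep_labor_units[p] / total_units
                       for p in cpep_labor_units}
# An individual stress test draws five times the labor dollars of an echocardiogram.

# Step 3: weighted average across the specialties performing a procedure.
expense_by_specialty = {"cardiology": 500.0, "general_surgery": 400.0}
volume_share = {"cardiology": 0.6, "general_surgery": 0.4}
blended = sum(expense_by_specialty[s] * volume_share[s]
              for s in expense_by_specialty)
print(blended)  # 460.0, matching the cardiac stress test example above
```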
For most specialties, HCFA estimated that weighted averaging in the aggregate did not have a large effect on a specialty’s cost pool; the pool would be no more than 10 percent greater or 10 percent less than it would have been without weighted averaging. Once HCFA calculated the weighted average practice expense for each procedure, it ranked the procedures by total practice expenses and converted the rankings into practice expense RVUs. These rankings are then converted into actual payment amounts. Physicians’ groups have raised issues about virtually every aspect of HCFA’s new approach for developing resource-based practice expense RVUs. A number of their issues are discussed here. As discussed earlier in this report, we believe that HCFA should conduct sensitivity analyses to identify the changes to its methodology and data that would have the greatest effects on the new RVUs and target its refinement efforts on those areas. Where possible, data should be used to support any changes. It is likely, however, that a few issues raised cannot be addressed because the necessary data do not exist. Other suggested revisions may not be consistent with HCFA’s methodology. Several physicians’ groups questioned HCFA’s use of the original CPEP estimates rather than the adjusted CPEP estimates or other data to allocate the practice expense cost pools to procedures performed by a specialty. Some groups suggested that HCFA use the validation panel estimates as allocators because they believe these estimates are more accurate. Urology representatives said that they want to develop their own data for use in place of the CPEP estimates. HCFA said that it used the CPEP estimates for two reasons. First, commenters on its first proposed rule objected to the reasonableness edits HCFA made to the original CPEP data. Second, HCFA was not convinced that changes the validation panels made to the CPEP estimates were appropriate. The question of substituting other data for selected specialties as discussed above is complex. Specialties would likely argue that HCFA should use the data—CPEP, validation panel, or their own—that are most advantageous to them. This would lead to the use of a “patchwork” of different data sources as allocators for different specialties. Also, data developed by a society to replace the CPEP estimates could contain biases that would increase that society’s cost pool and decrease other societies’ pools. HCFA officials said that they are open to adjusting the CPEP estimates or accepting alternative data from specialties during the refinement period if the new data do not significantly affect specialties’ cost pools. Another CPEP-related issue concerns how HCFA calculated expenses for several hundred redundant codes—codes reviewed by two or more CPEPs. In its revised methodology, HCFA simply averaged the original CPEP estimates that had been developed for these codes. HCFA did not use this approach in its original proposal because averaging different results would have distorted the relative ranks of codes within a CPEP. For example, an intermediate procedure might end up having more RVUs than a complicated procedure. HCFA’s final rule notes that HCFA will review this issue during the 3-year phase-in period. During that time, HCFA could evaluate using the original or adjusted CPEP estimates for the specialty that most frequently provides a procedure—the dominant specialty.
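To make the contrast for redundant codes concrete, the short sketch below compares the simple average HCFA used with the dominant-specialty alternative. The labor-unit estimates and volume shares are invented for illustration.

```python
# Two ways to set the estimate for a code reviewed by more than one CPEP.

cpep_estimates = {"cardiology_panel": 5.0, "internal_medicine_panel": 3.0}
volume_share = {"cardiology_panel": 0.8, "internal_medicine_panel": 0.2}

# Approach in the final rule: a simple average of the panels' estimates.
simple_average = sum(cpep_estimates.values()) / len(cpep_estimates)     # 4.0

# Alternative noted above: keep the dominant specialty's estimate, which
# preserves the relative ranks of codes within that panel.
dominant_panel = max(volume_share, key=volume_share.get)
dominant_estimate = cpep_estimates[dominant_panel]                      # 5.0

print(simple_average, dominant_estimate)
```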
In addition to the generally recognized limitations of the SMS data discussed in this report, there is a problem related to outliers—cases that seem unreasonable or that far exceed the norm. After review and analysis, some of these values may need to be adjusted during the refinement period. For example, AMA already excluded three cases in the SMS data in which physicians reported working in direct patient care 24 hours per day, 7 days a week. There are still extreme cases, however, such as physicians working an average of 16 hours or more per day every day of the week. Other outliers can be seen in table III.1, which shows some extremely high practice expenses per hour compared with the mean and median practice expenses per hour for a specialty. In one case, a physician reported practice expenses per hour of $964—14 times the mean for the specialty and equivalent to paying each nonphysician staff member an average of $148,000 annually. An AMA representative suggested that the respondent may have provided total expenses for the practice rather than his or her portion of them. It is important for HCFA to review and, where necessary, adjust the SMS data, since a few atypical cases can have a measurable effect on the practice expense per hour calculations, especially for specialties with small sample sizes.

As a result of the outliers, the mean practice expenses per hour for these and other specialties are considerably higher than the median values. In situations such as this, in which the SMS data contain large extremes, the median is considered a better measure of the typical value of the population because the influence of the outliers is reduced (see the illustration following this discussion). A HCFA official said that HCFA used the mean because it accounts for all the expenses physicians reported on the SMS survey, including the high and low responses. HCFA's final rule identifies this as an issue to be reviewed during the 3-year phase-in period. In this review, HCFA needs to develop alternatives, analyze the effect of any changes, and decide how to proceed.

As noted above, HCFA adjusted oncologists' SMS supply expenses because Medicare pays separately for certain drugs. A similar issue involves the expenses of staff, primarily nurses, who accompany physicians to the hospital. These staff reportedly perform such duties as assisting physicians at surgery, assessing patients following surgery, and educating patients. As we noted in our first report, HCFA appropriately disallowed nearly all such expenses from the CPEP data under its original methodology because Medicare pays for these expenses through other mechanisms; to include them would result in Medicare's paying for the same expenses twice. To the extent that this practice is occurring, the costs associated with these staff are included as practice expenses in the SMS survey data. HCFA officials said that they believe this is not a common practice; in addition, these costs are not easily identifiable in the SMS data. They also said that including these expenses in the CPEP estimates under the revised methodology affects only the specialties that perform the particular procedures. That is, the CPEP data affect not the size of a specialty's cost pool but only how the pool is allocated to the procedures the specialty performs. However, the American Academy of Family Physicians correctly notes that including these expenses in HCFA's calculations has a ripple effect across all specialties and could affect the relative values of office-based and surgical procedures.
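As referenced above, a minimal illustration of why the median resists outliers while the mean does not; the ten responses below are hypothetical, not actual SMS data:

```python
# Outlier sketch: ten hypothetical practice-expense-per-hour responses,
# nine typical and one extreme like the $964 case discussed above.
from statistics import mean, median

expenses_per_hour = [55, 60, 62, 65, 68, 70, 72, 75, 80, 964]

print(round(mean(expenses_per_hour), 1))   # 157.1 -- pulled far above typical
print(median(expenses_per_hour))           # 69.0  -- close to typical values
```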
However, it is unclear whether excluding these costs would significantly change the new RVUs. Sensitivity analyses would give HCFA a sound basis for including or excluding these expenses under its revised methodology. HCFA could estimate the expenses associated with this practice using the CPEP estimates and then decide whether it is worth the time and effort to determine how to remove these costs from the SMS data. In other words, HCFA should not spend substantial time and effort on this issue if it has little effect on the RVUs. If HCFA removes these costs from the SMS data, it should also remove them from the CPEP data.

HCFA's calculations of practice expenses per hour are based in part on the time that physicians spend in patient care activities. Some specialties make greater use of nonphysician practitioners, such as nurse assistants and optometrists, and may benefit from this step in the methodology. This is because the salaries and expenses of nonphysician practitioners are counted as practice expenses and because, by using these staff, physicians can generate more billable procedures. These two factors result in higher practice expenses per hour for their specialties. HCFA appropriately acknowledged that this is an issue for review during the refinement period.

The American Association of Neurological Surgeons-Congress of Neurological Surgeons said that the methodology disadvantages medical specialties whose physicians work longer hours in patient care activities than physicians in other specialties. The SMS survey asks physicians to record the number of hours they spent in patient care activities, and HCFA uses the average for a specialty in its calculations. As the number of hours spent in patient care activities increases under HCFA's new methodology, practice expenses per hour decrease (assuming that total expenses remain constant), resulting in a smaller practice expense cost pool for the specialty. For example, a specialty reporting $300,000 in average annual expenses would show $150 per hour at 2,000 patient care hours per year but only $120 per hour at 2,500 hours. Rather than base the calculations on the average number of hours that physicians in a specialty work, this physicians' group believes that HCFA should use a constant 40 hours per week for all specialties. The group argues that most practice expenses are generated when the office is open and that this would be a better measure for HCFA to use. Using a constant number of hours would increase the practice expense per hour estimates for physicians working more hours. However, this approach would be inconsistent with HCFA's overall methodology, which assumes that the physician hours underlying Medicare claims data are consistent with those reported on the SMS survey.

Physicians' groups also commented on the physician time data that HCFA uses to determine the total number of hours physicians spend treating Medicare patients. First, some physicians' groups question HCFA's adjustments to the physician time data. These data come from two sources: (1) a Harvard University study that developed physician time estimates for codes in existence when the work RVUs were originally created and (2) RUC estimates developed for new codes created after the Harvard study and for older codes that required adjustment. HCFA found that the RUC's time estimates were systematically greater, by an average of about 25 percent, than those developed from the Harvard study for the same codes. HCFA therefore increased the Harvard time estimates by this amount on average to ensure consistency between the two data sources.
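A minimal sketch of that scaling, using hypothetical code times rather than the actual Harvard or RUC estimates:

```python
# Time-data sketch: Harvard-study minutes scaled up by the roughly
# 25-percent average gap HCFA found relative to RUC estimates.
HARVARD_TO_RUC = 1.25   # HCFA's reported average difference

harvard_minutes = {"code A": 20.0, "code B": 48.0}   # hypothetical codes
adjusted = {code: m * HARVARD_TO_RUC for code, m in harvard_minutes.items()}

print(adjusted)   # {'code A': 25.0, 'code B': 60.0}
```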
According to the RUC, however, this adjustment may not be appropriate. RUC time estimates may be higher because procedures are performed differently now than they were at the time of the Harvard study. RUC representatives said that they would like more information on HCFA's adjustments to ensure that they are appropriate. HCFA is also concerned about the accuracy of the physician time data for high-volume codes that have relatively little physician time associated with them. For example, if a high-volume procedure typically takes 4 minutes to perform but has 5 minutes of physician time assigned to it in the work RVUs, the procedure's share of the specialty's practice expense pool is inflated by 25 percent. HCFA has appropriately expressed a willingness to review comments during the refinement period on potential inaccuracies in these data and to make adjustments where appropriate.

Several physicians' groups criticized HCFA's use of Medicare claims data, rather than national claims data for all insurers, to establish and allocate the practice expense cost pools for specialties. HCFA officials acknowledged that it would be preferable to use data more representative of physicians' entire practices. The American Academy of Family Physicians is concerned that specialties that typically do not treat Medicare patients, such as pediatricians and obstetricians, will be disadvantaged because most of their procedures are not provided to Medicare patients and therefore are not included in the Medicare claims data. Specialties with smaller volumes of Medicare claims, however, may benefit from this aspect of HCFA's method. Only more complete data would allow HCFA to determine the effect. However, such data are not available, and none of the medical societies identified specific sources of data that HCFA could use.

Several physicians' groups suggested that HCFA refine the Medicare claims data, citing inaccuracies. For example, in 1996 Medicare paid almost 32,000 claims for lumbar discectomies (CPT code 63030), a procedure typically performed by neurosurgeons or orthopedic surgeons. However, the data include 835 claims paid to physicians' assistants and 102 claims paid to general practitioners for this procedure. According to the American Association of Neurological Surgeons-Congress of Neurological Surgeons, nonsurgical specialties should not be performing lumbar discectomies. Given the millions of claims Medicare pays annually, a small percentage of errors in these data is not unexpected. Further, there is no reason to believe that these errors are concentrated in particular specialties, so they would likely have minimal effect on the final RVUs. However, if medical specialties demonstrate significant problems with these data, HCFA said that it will review them during the phase-in period and make necessary adjustments.
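One way such a review could screen claims for implausible specialty-procedure pairings is sketched below, using the lumbar discectomy example; the rule table and claim records are hypothetical, and this is not HCFA's actual process:

```python
# Claims-data sketch: flag claims whose billing specialty is implausible
# for the procedure, using the lumbar discectomy example (CPT 63030).
PLAUSIBLE = {"63030": {"neurosurgery", "orthopedic surgery"}}

claims = [
    ("63030", "neurosurgery"),        # expected pairing
    ("63030", "general practice"),    # implausible -> flag for review
    ("99213", "family practice"),     # code with no rule -> never flagged
]

flagged = [(code, spec) for code, spec in claims
           if code in PLAUSIBLE and spec not in PLAUSIBLE[code]]

print(flagged)   # [('63030', 'general practice')]
```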
Pursuant to a congressional request, GAO reviewed the Health Care Financing Administration's (HCFA) ongoing efforts to develop resource-based practice expense relative value units (RVUs), focusing on: (1) whether the new methodology is an acceptable approach for revising Medicare's fee schedule; (2) questions about the data, assumptions, and adjustments underlying the new methodology that need to be addressed during the 3-year phase-in period; and (3) the need for future updates to the practice expense RVUs to reflect changes in health care delivery and for ongoing assessments of the fee schedule's effect on Medicare beneficiaries' access to physicians' care. GAO noted that: (1) HCFA's new methodology represents an acceptable approach for calculating RVUs; (2) HCFA relied on the best data available for creating the new values: (a) a nationally representative survey of physicians' practice costs; and (b) data developed by panels of experts that identify the specific resources associated with individual procedures; (3) HCFA's original and new proposals use these data in similar ways to create the new RVUs; (4) a critical difference is that the new methodology more directly recognizes the variation in practice expenses among physicians' specialties in computing the RVUs; (5) additionally, this methodology responds to several concerns GAO had with the original one; (6) while HCFA's new methodology is acceptable overall, certain questions about the data and underlying methodology need to be addressed before the new RVUs are completely phased in; (7) for example, the national practice expense survey database contains limited data for some specialties and may lead to imprecise estimates of their practice expenses; (8) for other specialties not included in the survey database, HCFA had to use proxy information, the appropriateness of which needs to be verified; (9) also, HCFA made certain assumptions and adjustments without confirming their reasonableness; (10) for example, HCFA adjusted the supply cost estimates for oncologists to avoid paying them twice for chemotherapy drugs but HCFA has not yet collected data to determine the appropriate size of the adjustment; (11) to address these issues, HCFA needs a strategy for refining the practice expense RVUs during the 3-year phase-in period that focuses on the data and methodology weaknesses that have the greatest effect on the RVUs; (12) however, HCFA has done little in the way of sensitivity analysis to effectively target its refinement efforts; (13) additionally, HCFA has not developed permanent processes for future updates and revisions to the practice expense RVUs as new procedures are developed or methods of performing existing procedures shift; and (14) finally, HCFA needs to continue monitoring beneficiaries' access to physicians' care to ensure that access is not compromised by past and ongoing changes to Medicare's payments to physicians.
TBI can be classified as mild, moderate, or severe based on specific criteria, such as the length of time an individual is unconscious following the injury. For example, an individual would meet the criteria for mild TBI if they suffered a loss of consciousness for 30 minutes or less. Similarly, an individual would meet the criteria for moderate TBI if they lost consciousness for more than 30 minutes and for severe TBI if they lost consciousness for more than 24 hours. (See table 1.)

Early detection of injury is critical in the management of TBI patients. The diagnosis of moderate and severe TBI usually occurs in a timely manner due to the visible nature of the head injury, as well as the duration of symptoms, such as memory loss. Identification of mild TBI, also known as a concussion, can be challenging because there may be no visible head injury, and symptoms may be minimal and brief. In addition, in the combat theater, a mild TBI may not be identified if it occurs at the same time as other combat injuries that are more visible or life-threatening, such as orthopedic injuries or open wounds. Furthermore, some of the symptoms of mild TBI (which accounts for the majority of these injuries in the military) are similar to those associated with other conditions, such as PTSD.

Individuals sustaining mild TBIs often report physical, cognitive, and emotional or behavioral symptoms referred to collectively as postconcussion symptoms. The most commonly reported postconcussion symptoms are headache, dizziness, decreased concentration, memory problems, irritability, fatigue, visual disturbances, sensitivity to noise, judgment problems, depression, and anxiety. Although the majority of individuals with mild TBI have symptoms that resolve within 1 month, some symptoms may persist for months to years following injury, potentially becoming permanent and causing disability. When these symptoms are persistent, they are often referred to as postconcussion syndrome or persistent postconcussion symptoms.

PTSD can develop following exposure to life-threatening events, natural disasters, terrorist incidents, serious accidents, or violent personal assaults and may have a delayed onset, which is described as a clinically significant presentation of symptoms at least 6 months after exposure to trauma. Individuals diagnosed with PTSD may experience problems sleeping, maintaining relationships, and returning to their previous civilian lives. They may also suffer from other ailments, such as depression and substance abuse. PTSD is one of the most prevalent mental disorders arising from combat.

(An earlier definition of the oxygen used for clinical purposes was "100 percent oxygen" instead of "near 100 percent oxygen"; see Hyperbaric Oxygen Therapy Indications, 12th Edition, 2008. The latest edition remained consistent that pressurization for clinical purposes should be 1.4 ATA or higher.)

FDA approves drugs and approves/clears devices for specific indications (diseases or medical conditions). However, it does not generally prevent health care practitioners from using or prescribing approved or cleared drugs/devices for indications for which the drugs/devices have not been approved or cleared. FDA has approved/cleared both the drug (oxygen) and the device (hyperbaric chamber) used in HBO therapy for certain medical uses, such as treating decompression sickness suffered by divers. As described by FDA officials, in order for HBO therapy to be approved or cleared for treating TBI or PTSD, sponsors would be required to submit a new marketing application or applications to support the new indications.
Such submissions could be made in one of two ways: 1) the sponsor could submit one marketing application to FDA's Center for Drug Evaluation and Research for both the drug and the device; or 2) the sponsor could submit a new drug application to FDA's Center for Drug Evaluation and Research to add the new oxygen indication for the drug and a device premarket submission to FDA's Center for Devices and Radiological Health to add the new indication for existing hyperbaric oxygen chambers.

Typically, before submitting a marketing application, the sponsor meets with FDA to discuss the most appropriate investigational studies and the details of the regulatory process. If existing literature and data are available to address the scientific and technical questions, it might be possible to submit this information only, and new investigational studies might not be needed. If new investigational studies are needed, an investigational new drug (IND) application would be used. The IND would include information on both the drug and the device. The IND must also include a "proposed indication(s) for use" section that explains what the drug (or drug-device combination product) does and the clinical condition and population for which it is intended. The IND sponsor has overall responsibility for the conduct of clinical studies that would support use for the proposed indication. Typically, the investigational process is iterative and begins with early studies that focus on safety and dosing. Later studies generally focus on safety and effectiveness of the product.

When the applicant completes the investigational studies and submits the necessary marketing application for review, FDA first determines whether the submission is complete and contains the information necessary for review. For example, the submission would provide information regarding the product description, proposed indications and potential clinical benefits, as well as the data needed to support approval or clearance, labeling, dose of the drug, and device instructions for use. FDA reviews the marketing application to determine such things as whether 1) the drug-device combination is safe and effective for its proposed use, which is answered, at least in part, by whether the benefits of the treatment outweigh its risks; 2) the proposed labeling meets the applicable regulatory requirements; 3) the methods used to manufacture the drug are adequate to preserve the drug's identity, strength, quality, and purity; and 4) the methods used to maintain device functionality and performance are acceptable. If two marketing applications are used, as in the second option noted above, FDA would generally issue the marketing authorizations concurrently when they are approved.

Most of the 32 peer-reviewed, published articles that we identified examined the use of HBO therapy for treating TBI: 29 focused solely on TBI, 2 focused on both TBI and PTSD, and 1 focused solely on PTSD. The 32 articles we identified included 7 case reports, 10 literature reviews, and 15 articles on interventional studies or clinical trials. Case reports are collections of reports on the treatment of individual patients. Six of the seven case reports we reviewed found that the patients with TBI (mild, moderate, severe, or not specified) or PTSD improved after treatment. The remaining case report noted safety issues to consider when treating TBI patients with cranial fractures. Literature reviews use a search to identify studies on a specific clinical topic.
Of the 10 literature reviews we identified, 8 concluded based on the articles they had identified that further research was needed to determine whether HBO therapy is an effective treatment for TBI; 1 reported that the therapy had positive effects when used to treat severe TBI, and another reported that it can be delivered with relative safety. Of the 15 articles on interventional studies or clinical trials, 3 focused on the safety of HBO therapy for treating TBI, and all 3 concluded that the therapy is safe. The remaining 12 articles evaluated the effectiveness of HBO therapy in treating TBI. The 8 articles on mild TBI had differing conclusions on the effectiveness of HBO therapy, while the other 4 articles (two on severe TBI and two that did not specify severity) reported that this therapy was an effective treatment for these conditions. (For more detailed information about the articles and their conclusions, see appendix III for case reports, appendix IV for literature reviews, and appendix V for interventional studies or clinical trials.)

The eight articles on interventional studies or clinical trials that focused on treating mild TBI reached different conclusions—six concluded that the therapy was not effective and two concluded that it was. The six articles that concluded HBO therapy was not effective were based on three studies funded by DOD; each of the three studies was affiliated with a branch of military service—Army, Navy, or Air Force. The remaining two articles were based on two studies conducted by researchers in Israel and the United States.

The differences in the articles' conclusions about the effectiveness of HBO therapy for treating mild TBI are based, in part, on methodological differences, as well as differences in researchers' interpretations of the studies' results. All of the DOD-funded studies were randomized, double-blinded, and included a sham control group in which participants received a procedure similar to HBO therapy but lacking certain components of the intervention. For a sham control group in HBO therapy studies, some atmospheric pressure within the hyperbaric chamber is required for participants to perceive they are receiving treatment. However, there is no standard sham control group design for HBO therapy studies, and the approach varied in each of the DOD-funded studies. The articles based on these studies concluded that HBO therapy was not effective in treating mild TBI and related symptoms because participants in the sham control and treatment groups had similar outcomes. Although both groups of participants showed improvement, the authors concluded that the improvement was likely attributable to other factors, such as a placebo effect, and not to HBO therapy. DOD officials and researchers involved with the studies told us that they believe some improvements were due to factors such as being away from home and everyday stress.

Other researchers not affiliated with the DOD-funded studies have reported that hyperbaric treatments at 1.2 ATA (without increased oxygen levels), which was used as the sham treatment in one DOD-funded study, substantially increase the amount of dissolved oxygen in the blood and simultaneously induce other physiological changes. As a result, these other researchers stated that the sham control group's treatment does not represent a true sham or placebo, and that the participants' improvement shows that even a small increase in pressure is an effective treatment. In a published editorial, the researchers who conducted the DOD-funded study that used a sham control group with hyperbaric treatments at 1.2 ATA responded to these concerns by reporting that they recognize that the sham treatment may have caused physiologic effects from slight increases in oxygen, nitrogen, and direct pressure, as well as a variety of other effects. They added that the study was not designed to separate these effects but rather to test for a benefit of HBO therapy, and that it found the therapy was not effective because the initial improvements identified after the treatments were not sustained.
Nonetheless, DOD officials told us that there have been no studies completed on the long-term effects of the treatment, and such studies would help confirm whether the sham control groups' improvements should be attributed to placebo effects or other factors.

The remaining two (of eight) articles on mild TBI, which concluded that HBO therapy was effective, were based on the studies conducted by researchers in Israel and the United States. The study conducted in the United States was not blinded and did not use a control group. DOD and VA researchers and other subject matter experts told us that these studies were not designed with the same methodological rigor as the DOD-funded studies on mild TBI because they were not blinded, randomized clinical trials, and they did not use sham control groups—qualities that help ensure the validity of a study's findings. The group of researchers in Israel noted in their article that they did not use a sham control group because it was difficult to design a treatment for the control group that would not be considered therapeutic. The researchers for the other study, conducted in the United States, noted in their article that a sham control group was not used because this was preliminary work, and further work would be needed to confirm the study's findings.

In addition to the 10 published literature review articles we identified, both VA and DOD conducted their own literature reviews on the effectiveness of HBO therapy to treat TBI (all severities); VA's literature review also included PTSD. VA conducted its review in developing the department's policy on the use of HBO therapy, and its report concluded that high quality, well-designed research is needed to determine the efficacy and effectiveness of HBO therapy for both of these conditions. In 2014, VA updated its report to include articles published between January 1, 2010, and January 17, 2014. In its updated report, VA concluded that none of the well-designed studies it reviewed, including the DOD-funded studies, demonstrated that HBO therapy was an effective treatment. Further, the report noted that the 2013 Israeli study—which found the treatment to be effective for improving cognitive performance and self-perceived quality of life—did not have adequate controls, such as a sham control group. Criticisms of that study also included that the outcome measures were susceptible to improving based upon the participants' expectations as to whether they should improve or not; this criticism could explain many of the reported improvements in cognitive performance.

DOD's literature review also covered the use of HBO therapy to treat stroke; the review concluded that there was no new scientific evidence indicating that HBO therapy is an effective treatment across the severities of TBI. With respect to mild TBI, it noted that the improvements in outcomes shown within the groups receiving sham treatments raised questions about the design of those shams. Specifically, the report stated that a true sham treatment may have to be done with normal atmospheric pressure and that more research is needed to be confident that a sham is not a therapeutic treatment. The report concluded that further research is needed, such as studies comparing other types of interventions to HBO therapy.

As noted in our report, PTSD was included in only three of the articles that we reviewed; as a result, our description of research on HBO therapy for treating PTSD is limited. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VIII.
To identify and describe published research on the use of hyperbaric oxygen (HBO) therapy in the treatment of traumatic brain injury (TBI) or post-traumatic stress disorder (PTSD), we conducted a literature search for relevant articles published during the most recent 10-year period, from January 1, 2005, through April 6, 2015. Our librarian searched more than 30 databases for research published in relevant peer-reviewed and industry journals, including Academic One File, ArticleFirst, BIOSIS Previews, CINAHL, Embase, MEDLINE, NTIS: National Technical Information Service, PILOTS: Published International Literature on Traumatic Stress, PsycINFO, and WorldCat. Key search terms included various combinations of "hyperbaric oxygen," "hyperbaric oxygen therapy," "hyperbaric oxygen treatment," "traumatic brain injury," and "post-traumatic stress disorder." From all database sources, 230 abstracts were identified. We excluded abstracts when HBO therapy was not at least one treatment used in the research, when the abstract was from a conference and not available in a full article, or when the abstract was from a book. (The articles below are organized first by severity of TBI and then by study location.) As part of our work, we examined the methodologies of each of these studies and determined that they were sufficiently reliable for the purposes of our report.

Case Reports (7 Articles)

Hardy, P.G., K.M. Johnston, L. DeBeaumont, D.L. Montgomery, J.M. Lecomte, J.P. Soucy, D. Bourbonnais, and M. Lassonde. "Pilot Case Study of the Therapeutic Potential of Hyperbaric Oxygen Therapy on Chronic Brain Injury." Journal of Neurological Sciences, vol. 253 (2007).

Harch, P., E.F. Fogarty, P.K. Staab, and K. Van Meter. "Low Pressure Hyperbaric Oxygen Therapy and SPECT Brain Imaging in the Treatment of Blast-Induced Chronic Traumatic Brain Injury (Post-Concussion Syndrome) and Post Traumatic Stress Disorder: A Case Report." Cases Journal, vol. 2 (2009).

Wright, Col. J.K., E. Zant, K. Groom, R.E. Schlegel, and K. Gilliland. "Case Report Treatment of Mild Traumatic Brain Injury with Hyperbaric Oxygen." Undersea & Hyperbaric Medicine, vol. 36, no. 6 (2009).

Eovaldi, B., and C. Zanetti. "Hyperbaric Oxygen Ameliorates Worsening Signs and Symptoms of Post-Traumatic Stress Disorder." Neuropsychiatric Disease and Treatment, vol. 6 (2010).

Lv, L.Q., L.J. Hou, M.K. Yu, X.H. Ding, X.Q. Qi, and Y.C. Lu. "Hyperbaric Oxygen Therapy in the Management of Paroxysmal Sympathetic Hyperactivity after Severe Traumatic Brain Injury: A Report of 6 Cases." Archives of Physical Medicine and Rehabilitation, vol. 92 (2011).

Stoller, K. "Hyperbaric Oxygen Therapy (1.5 ATA) in Treating Sports Related TBI/CTE: Two Case Reports." Medical Gas Research, vol. 1 (2011).

Lee, L.C., F.K. Lieu, Y.H. Chen, T.H. Hung, and S.F. Chen. "Tension Pneumocephalus as a Complication of Hyperbaric Oxygen Therapy in a Patient with Chronic Traumatic Brain Injury." American Journal of Physical Medicine & Rehabilitation, vol. 91, no. 6 (2012).

Literature Reviews (10 Articles)

Adamides, A.A., C.D. Winter, P.M. Lewis, D.J. Cooper, T. Kossmann, and J.V. Rosenfeld. "Current Controversies in the Management of Patients with Severe Traumatic Brain Injury." ANZ Journal of Surgery, vol. 76 (2006).

Bennett, M.H., B.E. Trytko, and B. Jonker. "A Systematic Review of the Use of Hyperbaric Oxygen Therapy in the Treatment of Acute Traumatic Brain Injury." Diving and Hyperbaric Medicine, vol. 36, no. 1 (2006).

Rockswold, S.B., G.L. Rockswold, and A. Defillo. "Hyperbaric Oxygen in Traumatic Brain Injury." Neurological Research, vol. 29 (2007).

Kumaria, A., and C.M. Tolias.
"Normobaric Hyperoxia Therapy for Traumatic Brain Injury and Stroke: A Review." British Journal of Neurosurgery, vol. 23, no. 6 (2009).

Huang, L., and A. Obenaus. "Hyperbaric Oxygen Therapy for Traumatic Brain Injury." Medical Gas Research, vol. 1, no. 21 (2011).

Bennett, M.H., B. Trytko, and B. Jonker. "Hyperbaric Oxygen Therapy for the Adjunctive Treatment of Traumatic Brain Injury (Review)." Cochrane Database of Systematic Reviews, vol. 12 (2012).

Beynon, C., K.L. Kiening, B. Orakcioglu, A.W. Unterberg, and O.W. Sakowitz. "Brain Tissue Oxygen Monitoring and Hyperoxic Treatment in Patients with Traumatic Brain Injury." Journal of Neurotrauma, vol. 29 (2012).

McCrary, Col. B.F., L. Weaver, LCDR K. Marrs, Col. R.S. Miller, C. Dicks, K. Deru, N. Close, and Col. M. DeJong. "Hyperbaric Oxygen (HBO2) for Post-Concussive Syndrome/Chronic TBI Product Summary." Undersea & Hyperbaric Medicine: Journal of the Undersea and Hyperbaric Medical Society, vol. 40, no. 5 (2013).

Cossu, G. "Therapeutic Options to Enhance Coma Arousal after Traumatic Brain Injury: State of the Art of Current Treatments to Improve Coma Recovery." British Journal of Neurosurgery, vol. 28, no. 2 (2014).

Wang, Y., D. Chen, and G. Chen. "Hyperbaric Oxygen Therapy Applied Research in Traumatic Brain Injury: From Mechanisms to Clinical Investigation." Medical Gas Research, vol. 4, no. 18 (2014).

Interventional Studies or Clinical Trials (15 Articles)

Studies on Mild TBI (8 Articles)

DOD Funded Study (Air Force): Wolf, G., D. Cifu, L. Baugh, W. Carne, and L. Profenna. "The Effect of Hyperbaric Oxygen on Symptoms after Mild Traumatic Brain Injury." Journal of Neurotrauma, vol. 29 (2012).

DOD Funded Study (Navy): Cifu, D.X., B.B. Hart, S.L. West, W. Walker, and W. Carne. "The Effect of Hyperbaric Oxygen on Persistent Postconcussion Symptoms." Journal of Head Trauma Rehabilitation, vol. 29, no. 1 (2014).

Cifu, D.X., K.W. Hoke, P.A. Wetzel, J.R. Wares, G. Gitchel, and W. Carne. "Effects of Hyperbaric Oxygen on Eye Tracking Abnormalities in Males after Mild Traumatic Brain Injury." Journal of Rehabilitation Research and Development, vol. 51, no. 7 (2014).

Cifu, D.X., W.C. Walker, S.L. West, B.B. Hart, L.M. Franke, A. Sima, C.W. Graham, and W. Carne. "Hyperbaric Oxygen for Blast-Related Postconcussion Syndrome: Three-Month Outcomes." Annals of Neurology, vol. 75 (2014).

Walker, W.C., L.M. Franke, D.X. Cifu, and B.B. Hart. "Randomized, Sham-Controlled, Feasibility Trial of Hyperbaric Oxygen for Service Members with Postconcussion Syndrome: Cognitive and Psychomotor Outcomes 1 Week Postintervention." Neurorehabilitation and Neural Repair, vol. 28, no. 5 (2014).

DOD Funded Study (Army): Miller, R.S., L.K. Weaver, N. Bahraini, S. Churchill, R.C. Price, V. Skiba, J. Caviness, S. Mooney, B. Hetzell, J. Liu, K. Deru, R. Ricciardi, S. Fracisco, N.C. Close, G.W. Surrett, C. Bartos, M. Ryan, and L.A. Brenner. "Effects of Hyperbaric Oxygen on Symptoms and Quality of Life among Service Members with Persistent Postconcussion Symptoms: A Randomized Clinical Trial." JAMA Internal Medicine, vol. 175, no. 1 (2015).

Boussi-Gross, R., H. Golan, G. Fishlev, Y. Bechor, O. Volkov, J. Bergan, M. Friedman, D. Hoofien, N. Shlamkovitch, E. Ben-Jacob, and S. Efrati. "Hyperbaric Oxygen Therapy Can Improve Post Concussion Syndrome Years after Mild Traumatic Brain Injury-Randomized Prospective Trial." PLOS ONE, vol. 8, no. 11 (2013).

Harch, P.G., S.R. Andrews, E.F. Fogarty, D. Amen, J.C. Pezzullo, J. Lucarini, C. Aubrey, D.V. Taylor, P.K. Staab, and K.W. Van Meter.
"A Phase I Study of Low-Pressure Hyperbaric Oxygen Therapy for Blast-Induced Post-Concussion Syndrome and Post-Traumatic Stress Disorder." Journal of Neurotrauma, vol. 29 (2012).

Studies on Severe TBI (2 Articles)

Rockswold, S.B., G.L. Rockswold, D.A. Zaun, X. Zhang, C.E. Cerra, T.A. Bergman, and J. Liu. "A Prospective, Randomized Clinical Trial to Compare the Effect of Hyperbaric to Normobaric Hyperoxia on Cerebral Metabolism, Intracranial Pressure, and Oxygen Toxicity in Severe Traumatic Brain Injury." Journal of Neurosurgery, vol. 112 (2010).

Rockswold, S.B., G.L. Rockswold, D.A. Zaun, and J. Liu. "A Prospective, Randomized Phase II Clinical Trial to Evaluate the Effect of Combined Hyperbaric and Normobaric Hyperoxia on Cerebral Metabolism, Intracranial Pressure, Oxygen Toxicity, and Clinical Outcome in Severe Traumatic Brain Injury." Journal of Neurosurgery, vol. 118 (2013).

Studies on Non-Specified TBI (2 Articles)

Xia-yan, S., T. Zhong-quan, S. Da, and H. Xiao-ju. "Evaluation of Hyperbaric Oxygen Treatment of Neuropsychiatric Disorders Following Traumatic Brain Injury." Chinese Medical Journal, vol. 119, no. 23 (2006).

Sahni, T., M. Jain, R. Prasad, S.K. Sogani, and V.P. Singh. "Use of Hyperbaric Oxygen in Traumatic Brain Injury: Retrospective Analysis of Data of 20 Patients Treated at a Tertiary Care Centre." British Journal of Neurosurgery, vol. 26, no. 2 (2012).

Studies on Safety of Treating TBI with Hyperbaric Oxygen Therapy (3 Articles)

Gossett, W.A., G.L. Rockswold, S.B. Rockswold, C.D. Adkinson, T.A. Bergman, and R.R. Quickel. "The Safe Treatment, Monitoring and Management of Severe Traumatic Brain Injury Patients in a Monoplace Chamber." Undersea & Hyperbaric Medicine, vol. 37, no. 1 (2010).

Wolf, E.G., J. Prye, R. Michaelson, G. Brower, L. Profenna, and O. Boneta. "Hyperbaric Side Effects in a Traumatic Brain Injury Randomized Clinical Trial." Undersea & Hyperbaric Medicine, vol. 39, no. 6 (2012).

Churchill, S., L.K. Weaver, K. Deru, A.A. Russo, D. Handrahan, W.W. Orrison, J.F. Foley, and H.A. Elwell. "A Prospective Trial of Hyperbaric Oxygen for Chronic Sequelae after Brain Injury." Undersea & Hyperbaric Medicine, vol. 40, no. 2 (2013).

Appendix II: Ongoing Interventional and Observational Studies on Hyperbaric Oxygen Therapy to Treat Traumatic Brain Injury (TBI)

We obtained information about eight ongoing clinical trials on the use of hyperbaric oxygen therapy to treat TBI. Information on these trials was obtained through ClinicalTrials.gov, an international registry and results database of publicly and privately supported clinical studies of human participants, which is maintained by the National Institutes of Health. We identified six interventional clinical trials on hyperbaric oxygen therapy (see table 4). The remaining two ongoing studies are observational (see table 5). Interventional studies are studies in which participants are assigned to receive one or more interventions (or no intervention) so that researchers can evaluate the effects of the interventions on biomedical or health-related outcomes. The assignments are determined by the study protocol. Participants may receive diagnostic, therapeutic, or other types of interventions. An observational study is a clinical study in which participants identified as belonging to study groups are assessed for biomedical or health outcomes. Participants may receive diagnostic, therapeutic, or other types of interventions, but the researcher does not assign participants to specific interventions (as in an interventional study).
Interventional clinical trials (see table 4):

- Hyperbaric Oxygen Therapy and SPECT Brain Imaging in Traumatic Brain Injury (https://clinicaltrials.gov/show/NCT00594503); Paul G. Harch, M.D. (Louisiana State University Health Sciences Center in New Orleans)
- Brain Injury and Mechanisms of Action of HBO2 for Persistent Post-Concussive Symptoms after Mild Traumatic Brain Injury (BIMA) Protocol (https://clinicaltrials.gov/show/NCT01611194)
- Phase 1-2 Study of Hyperbaric Treatment of Traumatic Brain Injury (https://clinicaltrials.gov/show/NCT01847755); Barry Miskin, M.D. (Jupiter Medical Center, Florida)
- Hyperbaric Oxygen Therapy Treatment of Chronic Mild Traumatic Brain Injury/Persistent Post-Concussion Syndrome (https://clinicaltrials.gov/show/NCT02089594)
- A Double-Blind Randomized Trial of Hyperbaric Oxygen Versus Sham in Civilian Post-Concussive Syndrome (https://clinicaltrials.gov/show/NCT01986205); Lindell Weaver (Intermountain Health Care, Inc., Utah)
- Hyperbaric Oxygen Brain Injury Treatment (HBOIT) Trial (https://clinicaltrials.gov/show/NCT02407028); Gaylan Rockswold (Minneapolis Research Foundation); December 2015 to December 2017; to determine the optimal hyperbaric oxygen treatment paradigm to be instituted in terms of atmospheric pressure, frequency of treatment, and whether normobaric hyperoxia following hyperbaric oxygen treatments enhances the treatment effect

Observational studies (see table 5):

- Development of Normative Datasets for Assessments Planned for Use in Patients with Mild Traumatic Brain Injury (NORMAL) (https://clinicaltrials.gov/show/NCT01925963); Lindell Weaver (Intermountain Health Care, Inc., Utah)
- Brain Angiogenesis (formation of new blood vessels) Induced by Hyperbaric Oxygen Therapy Can Be Visualized by Perfusion MRI in Brain Injury Patients (https://clinicaltrials.gov/show/NCT02452619)

We identified and reviewed seven articles on case reports. Six of the seven articles focused on the effectiveness of hyperbaric oxygen therapy in treating traumatic brain injury (TBI) or post-traumatic stress disorder (PTSD). The remaining article focused on safety issues.

We identified and reviewed 10 articles based on literature reviews about the use of hyperbaric oxygen therapy in treating traumatic brain injuries. Eight of these articles noted that further research in the area was needed to determine if this treatment was effective. One article reported that hyperbaric oxygen therapy had positive effects when used to treat severe traumatic brain injury (TBI); another reported that the therapy can be delivered with relative safety for severe TBI.

We identified and reviewed 15 articles on interventional studies or clinical trials. Of these, 12 articles focused on the effectiveness of hyperbaric oxygen therapy in treating traumatic brain injury (TBI), including 8 articles on mild TBI (see table 8) and 4 articles on severe or non-specified TBI (see table 9). The remaining 3 articles relate to the safety of this treatment (see table 10).

Debra A. Draper, Director, (202) 512-7114 or draperd@gao.gov. In addition to the contact named above, Bonnie Anderson, Assistant Director; Jennie Apter; Danielle Bernstein; Leia Dickerson; Natalie Herzog; Sylvia Diaz Jones; and Emily Wilson made key contributions to this report.
TBI and PTSD are signature wounds for servicemembers returning from the conflicts in Iraq and Afghanistan. Within the military, the majority of TBI cases have been classified as mild. Studies have found that one-third or more of servicemembers with mild TBI also have PTSD. As an alternative to traditional treatments, some researchers have studied the use of HBO2 therapy, which delivers higher levels of oxygen to the body inside of pressurized hyperbaric chambers to promote healing. The Joint Explanatory Statement accompanying the Consolidated and Further Continuing Appropriations Act, 2015, included a provision for GAO to review the use of HBO2 therapy to treat TBI and PTSD. This report identifies and describes published research on the use of HBO2 therapy for these conditions. GAO conducted a literature review for relevant articles published in peer-reviewed journals during the most recent 10-year period, from January 1, 2005 through April 6, 2015. GAO interviewed DOD, VA, and researchers affiliated with published articles, as well as other stakeholders, including officials with the Undersea and Hyperbaric Medical Society. GAO also interviewed officials from the Food and Drug Administration about the process to approve HBO2 therapy for the treatment of TBI and PTSD. GAO provided a draft of this report to DOD, VA, and the Department of Health and Human Services. Each of the departments provided technical comments, which GAO incorporated, as appropriate. GAO identified 32 peer-reviewed, published articles on research about the use of hyperbaric oxygen (HBO2) therapy to treat traumatic brain injury (TBI) and post-traumatic stress disorder (PTSD), most of which were focused solely on TBI (29 articles). The 32 articles consisted of 7 case reports (reports on the treatment of individuals), 10 literature reviews (reviews of studies), and 15 articles on interventional studies or clinical trials, which provide the strongest clinical evidence about a treatment. Three of the 15 articles on interventional studies or clinical trials focused on the safety of HBO2 therapy for treating TBI and concluded that it is safe. The other 12 articles described the effectiveness of HBO2 therapy in treating TBI. Four of these articles (two on severe TBI and two that did not specify severity) reported that HBO2 therapy was effective. The remaining eight articles focused on mild TBI—six concluded that it was not effective and two concluded that it was. The six articles that concluded HBO2 therapy was not effective in treating mild TBI were based on three studies funded by the Department of Defense (DOD) with collaboration from the Department of Veterans Affairs (VA) and others. Each of the DOD-funded studies 1) was randomized—participants were randomly assigned to clinical trial groups, 2) was double-blinded—neither researchers nor participants knew who was assigned to which group, and 3) included a sham control group—participants received a procedure that was similar to HBO2 therapy but lacked certain components of the intervention. However, there is no standard design for sham control groups in HBO2 therapy, and in each of the DOD-funded studies the approach varied. The authors of the six articles based on these studies concluded that HBO2 therapy was not effective in treating mild TBI because participants in the sham control and treatment groups had similar outcomes. Although both groups showed improvement, the researchers concluded that this was likely due to other factors, such as a placebo effect. 
Researchers not affiliated with the DOD-funded studies have raised concerns about whether the sham control groups received a placebo or a therapeutic treatment. In a published editorial, researchers affiliated with one of the DOD-funded studies acknowledged the challenges associated with designing a sham control group and stated that additional research would be needed to determine whether these participants actually received a therapeutic benefit. DOD officials told us that studying the long-term effects of the treatment also would help confirm whether the sham control groups' improvements should be attributed to a placebo effect. The two articles that concluded that HBO2 therapy was effective in treating mild TBI were based on studies that were designed differently than the DOD-funded studies. DOD and VA researchers told GAO that the studies related to these articles did not have the same methodological rigor as the DOD-funded studies because they did not have design features such as sham control groups, which help ensure the validity of a study's findings. The researchers for one of these two studies noted in their article that they did not use a sham control group because it was difficult to ensure that participants would receive a non-therapeutic treatment. Researchers for the other study noted in their article that a sham control group was not used because this was preliminary work, and further work would be needed to confirm the findings.
As we testified in July, time is running out for agencies and the pace needs to be accelerated if widespread systems problems are to be avoided as the Year 2000 approaches. We stressed in our testimony that the Office of Management and Budget (OMB) and key federal agencies need to move with more urgency. Among the other related issues we noted was that increased attention was required on validation and testing of Year 2000 solutions, data interfaces and exchanges, and contingency planning. OMB’s most current Year 2000 progress report on the federal government’s efforts, released last week, again demonstrates that although federal agencies are generally making progress toward achieving Year 2000 compliance, the overall pace of that progress is too slow. Based on individual agency reports, 75 percent of the agencies’ approximately 8,500 mission-critical systems remain to be repaired or replaced, and the total cost estimate has risen to $3.8 billion, up $1 billion from the previous quarterly report. According to OMB, reports of several of the agencies were disappointing; consequently, it placed agencies in one of three categories, depending upon evidence of progress. In the first category are four agencies that OMB found had “insufficient evidence of progress.” For these agencies, OMB established a “rebuttable presumption going into the Fiscal Year 1999 budget formulation process this Fall that we [OMB] will not fund requests for information technology investments unless they are directly related to fixing the year 2000 problem.” OMB’s second category contains 12 other agencies for which it cited “evidence of progress but also concerns.” These agencies were put on notice that continued funding for information technology investments would be contingent on continued progress. Finally, for the eight remaining agencies that according to OMB, appear to be making progress—and this includes VA—funding requests will be handled in the usual manner, although progress at all agencies will be reevaluated on the basis of their next quarterly reports, due November 15. We are encouraged by OMB’s statements and believe they reflect an increased urgency to address the Year 2000 issue. Further, we note that in its report, OMB states that it plans to address other issues that we raised in our July testimony. OMB emphasized that proper validation of changes was critical to success. It stated that it planned to meet with agencies over the coming months to discuss the adequacy of scheduled timetables for completing validation. OMB said it would discuss with agencies the preparedness of communications interfaces with systems external to the federal government, including those of state and local governments and the private sector. OMB asked agencies for a summary of the contingency plan for any mission-critical system that was reported behind schedule in two consecutive quarterly reports so that it could summarize such plans in future reports to the Congress. We look forward to implementation of these key activities as we continue monitoring OMB’s leadership of the federal government’s Year 2000 effort. VA is very vulnerable to the impact of the new millennium because of the large number of veterans and their dependents that it serves; this is why it is so important that VA’s systems be made compliant in time to avoid disruption to the benefits and services on which millions of Americans depend. 
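As background on the failure mode behind these efforts, here is a minimal sketch (hypothetical code, not drawn from any VA system) of how two-digit year fields break date arithmetic at the century boundary, along with the common "windowing" repair:

```python
# Year 2000 sketch: two-digit year fields make January 2000 look like
# January 1900, corrupting age, interest, and scheduling calculations.

def legacy_year(yy):
    """Legacy interpretation: every two-digit year belongs to the 1900s."""
    return 1900 + yy                      # 00 -> 1900, not 2000

def windowed_year(yy, pivot=50):
    """Windowing repair: years below the pivot roll into the 2000s."""
    return (2000 if yy < pivot else 1900) + yy

# A benefit record spanning the boundary: issued in '99, due in '00.
print(legacy_year(0) - legacy_year(99))      # -99 years, not 1
print(windowed_year(0) - windowed_year(99))  #   1 year, as intended
```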
Our past and current work at VA indicates that the Department recognizes the urgency of its task, and it has made progress. But much remains to be done if it is to avoid the widespread computer failures that unmodified systems could bring. If left uncorrected, the types of possible problems that could occur include but are not limited to late or inaccurate benefits payments, lack of patient scheduling for hospital treatments, and misinterpretation of patient data. The number of areas vulnerable to problems is vast. The Department’s June 1997 Year 2000 plan (VA Year 2000 Solutions) outlines VA’s strategy, activities, and major milestones. According to this plan and in line with OMB guidance, VA’s primary approach is to make its 11 existing mission-critical systems compliant; one, in fact, already is. Table 1 lists these systems, along with the numbers of applications they serve and the responsible VA component or office. Responsible for overseeing the Year 2000 problem at VA is its chief information officer (CIO); he is assisted by the CIOs of both VBA and VHA, by senior information technology managers in the National Cemetery System, and by staff offices at VA headquarters. VA has also designated a Year 2000 project manager, responsible for general oversight and monitoring. According to VA’s August 14, 1997, quarterly report to OMB, the Department has made progress in addressing the Year 2000 problem. As noted in the report, 1 of its 11 mission-critical systems—the one serving the National Cemetery System—is already fully compliant. Of the 10 remaining mission-critical systems and their applications, 85 percent have been assessed and 51 percent have been renovated. In addition, VA has updated its total Year 2000 cost estimate from $144 million (May 1997) to $162 million; VA’s stated reason for the increase is the need for upgrades to its commercial off-the-shelf software and hardware and more contractual support. Further, VA’s current estimate shows that it expects systems assessment to be completed by the end of next January, renovation of systems by November 1998, validation by January 1999, and implementation by October 1999—2 months earlier than VA reported in May. As we testified before the Subcommittee in June, correcting the Year 2000 problem is critical to VBA’s mission of providing benefits and services to veterans and their dependents. VBA has responded to this challenge by initiating a number of actions, including developing an agencywide plan and a Year 2000 strategy, and creating a program management organization. However, several substantial risks remain. If VBA is to avert serious disruption to its ability to disseminate benefits, it will need to strengthen its management and oversight of Year 2000-related activities. Our May 30, 1997, report contained 10 specific recommendations to the Secretary of Veterans Affairs on actions that VBA needed to take to address the Year 2000 problem. VA concurs with all 10, and is in the process of implementing them. For example, according to VBA: To strengthen its Year 2000 program management office, it has assigned oversight and coordination responsibilities for all Year 2000 activities to this office alone. It has completed inventories of data interfaces and third-party products (hardware, software, mainframes, minicomputers, operating systems, and utilities). 
VBA has also determined that most of its third-party products are Year 2000 compliant—98 percent of its personal computers, local area networks, minicomputers, and commercial software; and all of its imaging equipment and associated software. It has renovated half of the 157 applications that make up its six mission-critical systems. It plans to renovate the remaining applications by November 1998. While we are encouraged by these positive actions, we understand from discussions with VBA officials that key work schedules have been compressed, creating added pressure. For example, renovation of VBA's largest and most critical applications—those necessary to the functioning of its Compensation and Pension Service—may not be completed by VBA's target date of December 1998. Changes to these applications have had to be delayed in order to effect this year's legislatively mandated changes and cost-of-living increases. Time is similarly short for work on the loan guaranty system, for which key phases remain to be completed. For example, the new construction and valuation application is scheduled to start in early fiscal year 1998, but it has a fail date of December 1998. This leaves VBA only slightly more than 1 year to design, develop, test, and implement this application. A further challenge for VBA is that it has not modified its schedule to take into account recent problems and delays in its attempts to replace an education payment system for selected reservists known as chapter 1606. Such schedules are important to ensuring that all mission-critical applications are fixed; they therefore need to be modified or updated to reflect realistic estimations of the difficulty of the work involved. In addition, although VBA has completed an inventory of 590 internal and external interfaces, as of July 31, 1997, only 26 percent of the interfaces had been assessed for compliance. VBA's Year 2000 project manager indicated that VBA is encountering problems determining whether its external interfaces are Year 2000 compliant because external sources have not provided the necessary information. VBA also has not updated its January 1997 risk assessment to reflect the recent change in its Year 2000 strategy. Specifically, in response to concerns raised regarding its initial approach, VBA redirected its Year 2000 strategy by focusing on converting its existing benefits payment systems rather than replacing the noncompliant systems. Since risk assessment is an important prerequisite for effectively prioritizing projects and mitigating potential problems, updating the previous risk assessment to take this change into account is essential. An internal VA oversight committee, established to monitor and evaluate the progress of VBA's Year 2000 activities, identified concerns similar to ours. Specifically, according to a member of this committee, little time remains for VBA to make the necessary modifications to its compensation and pension and loan guaranty systems, and much work remains in assessing the external interfaces for compliance. The Year 2000 challenge for VHA is enormous. As the largest centrally directed civilian health care system in the United States, VHA manages health care delivery to veterans within 22 regional areas geographically dispersed throughout the country; these areas are known as Veterans Integrated Service Networks (VISNs), and they encompass 173 VA medical centers, 376 outpatient clinics, 133 nursing homes, and 39 domiciliaries—a total of 721 facilities.
These sites utilize a wide range of electronic information systems, biomedical equipment, facilities systems, and other computer-based system products. Accordingly, it is essential that each of these 22 regional health care networks thoroughly assess and plan for ensuring Year 2000 compliance so that service delivery is not interrupted. Within VHA, the CIO has overall responsibility for planning and managing Year 2000 compliance. The CIO created a VHA Year 2000 project office, empowered to develop compliance guidance. In April 1997, this office developed a VHA plan for addressing the year 2000; the plan was approved by VA's Under Secretary for Health on May 14 of this year. The CIOs of each of the 22 regional networks, medical facility directors, and managers have ultimate responsibility for preparing and executing their individual Year 2000 plans, including all required assessment, renovation, validation/testing, and implementation activities. According to VA's August 14, 1997, quarterly report to OMB, VHA is in the initial stages of assessing the compliance of its two mission-critical systems—the Veterans Health Information Systems and Technology Architecture (VISTA)—formerly known as the Decentralized Hospital Computer Program (DHCP)—and the VHA corporate systems. VA also reported that of the two systems' applications, 17 percent have been assessed and 16 percent renovated. VHA plans to complete this assessment and renovation by the end of January 1998 and July 1998, respectively. According to VA's Year 2000 readiness review, VHA's strategy for the national VISTA applications is to assess all 143 applications and recode as necessary. According to VHA, 34 of its 143 applications have been assessed; 33 of these 34 were eliminated as a result of the assessment. In order to assess and renovate effectively, it is necessary to understand how local facilities are using the national VISTA applications. One potential risk is that some local facilities have customized national applications, according to VA's Year 2000 readiness review. If this is true, it is important that VHA know where applications have been changed—even in small ways—so as to ensure that they are Year 2000 compliant. Beyond customization, local facilities may purchase software add-ons to work with the national applications; here, too, these must be inventoried and Year 2000 compliance assessed. An inventory of internal and external VISTA interfaces has not yet been completed; systems developers plan to identify such interfaces when they assess each application. Should internal information be corrupted by exposure to uncorrected external interfaces through network exchanges, system crashes and/or loss of data could result. VA's Year 2000 project manager has expressed concern that this information may not be obtainable from external sources, who have yet to inform VHA whether their interfaces are Year 2000 compliant. As with interfaces, VHA must be assured that the commercial software products it uses are Year 2000 compliant. It has completed an inventory of its commercial products, such as personal computer operating systems, office automation software, and medical applications; according to the project manager, over 3,000 software products and 1,000 software vendors have been identified. VHA plans to rely on the General Services Administration to provide it with a general list of commercial products that are Year 2000 compliant.
For specialized products unique to the health care industry, VHA plans to contact manufacturers for compliance information. Physical facilities are another area of concern. According to VHA’s Year 2000 program manager, VHA has not completed an inventory of facilities-related systems and equipment such as elevators; heating, ventilating, and air conditioning equipment; lighting systems; security systems; and disaster recovery systems. Such elements are vitally important to VHA’s ability to provide high-quality health care services. VHA is working with the General Services Administration and manufacturers on this issue. Since it is often critical that medical services not be interrupted, VHA is required to have contingency plans in place in case hospital systems fail. These plans are reviewed and assessed regularly by the Joint Commission on Accreditation of Healthcare Organizations. However, such contingency plans are meant to ensure continued operation in the event of disaster; such approval does not necessarily ensure that all backup systems are Year 2000 compliant. Health care facilities depend on the reliable operation of a variety of biomedical devices—equipment that can record, process, analyze, display, or transmit medical data. Examples include computerized nuclear magnetic resonance imaging (MRI) systems, cardiac monitoring systems, cardiac defibrillators, and various tools for laboratory analysis. Such devices may depend on a computer for calibration or day-to-day operation. This computer could be either a personal computer that connects to the device from a distance or a microprocessor chip embedded within the device. In either case, the software that controls the operation of the computer may be susceptible to the Year 2000 problem. The impact could range from incorrect formatting of a printout to incorrect operation of the device, having the potential to affect patient care or safety. The risks for a specific medical device depend on the role of the device in the patient’s care and the design of the device. Although medical treatment facilities have the expertise to understand how medical devices are used, they rely on device manufacturers to analyze designs and disclose Year 2000 compliance status. As a health care provider and user of medical devices, VHA is a key stakeholder in determining compliance of such tools. Another key player is the Food and Drug Administration (FDA), in its role of protecting the public from unsafe and/or ineffective medical devices. In attempting to ascertain the potential impact of the century change on its biomedical devices, VHA on two separate occasions sent letters to manufacturers. Its first letter was sent over a period of a few days beginning June 23 of this year to equipment manufacturers identified by selected experts within VHA. In the letter, VHA inquired as to steps the manufacturer planned to take to resolve the Year 2000 issue. Out of 118 letters, VHA received 32 responses. These responses were reviewed by VHA’s medical device integrated product team, comprising internal experts from a variety of fields. On the basis of the team’s analysis, VHA sent more detailed letters asking specific questions, including whether the manufacturer provided any devices to VA that incorporate a real-time clock; if such devices were provided, whether they are Year 2000 compliant; and, for those that are not compliant, asking for model numbers, device names, and the specific impact the century change would likely have on the device. 
These letters were sent to about 1,600 manufacturers on September 9, 1997, with a request for responses by October 3. According to VHA, 50 responses had been received as of September 15. Product team members plan to review responses to ensure that they are categorized correctly as compliant, noncompliant, or pending; VHA will maintain a database of the manufacturers and their responses. This database will be made available to VA medical centers through the VHA intranet, although key personnel such as biomedical engineers may not have easy access to the intranet at some medical centers. The information will also be communicated to VA medical centers through monthly conference calls among engineers and communications with medical center directors. We feel that it is imperative that such results be widely disseminated; if the VHA intranet is insufficient for this task, other means should be found. FDA also recently began communicating with manufacturers. According to officials, FDA sent a letter in early July of this year to about 13,000 such manufacturers, reminding them of their responsibility to ensure that their products will not be affected by the century change. In the letter, FDA reminded manufacturers that, according to section 518 of the Federal Food, Drug, and Cosmetic Act, they are required to notify users or purchasers when FDA determines a device presents an unreasonable risk of substantial harm to public health. Although one response was received, the acting director of FDA’s Division of Electronics and Computer Science explained that it was not the agency’s intention to solicit a specific response because FDA expects manufacturers to report any problems found through normal reporting channels. FDA plans to disseminate information on any Year 2000 problems reported by manufacturers to the public through its reporting systems, such as the Medical Products Reporting Program (“MedWatch”). According to the director of FDA’s Cardiovascular Division, the agency’s strategy for helping to determine whether medical devices are Year 2000 compliant is to rely on the knowledge and experience of its resident experts. These experts, with backgrounds in electrical engineering, software engineering, and/or biomedical engineering, have reviewed the design of selected medical devices to determine whether the devices would be affected by the century change. In the case of pacemakers, for example, FDA experts have concluded that no adverse effect will result. This conclusion was based on the fact that the internal operations of pacemakers do not involve dates. The experts further said that although pacemaker settings are often changed with the assistance of a computer, which often uses dates and may be noncompliant, a trained physician is always involved in controlling the settings. A federal entity—the Year 2000 Subgroup on Biomedical Equipment—is working to coordinate the effort to obtain Year 2000 compliance status information from medical device manufacturers. This group plans to follow up on nonrespondents to questionnaires sent out by VHA, FDA, and other federal health care providers to manufacturers requesting this information. In closing, Mr. Chairman, I want to stress that while our detailed review of the VHA area is just now underway, it is clear that for VA as a whole to have all of its mission-critical systems compliant by January 1, 2000, will entail a huge, well-coordinated effort. This concludes my statement. 
I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time.
Pursuant to a congressional request, GAO discussed federal progress in addressing the Year 2000 problem, focusing on: (1) action taken by the Department of Veterans Affairs (VA) as a whole; (2) steps taken by the Veterans Benefits Administration (VBA) in response to recommendations contained in a GAO report; and (3) results of its review of the Veterans Health Administration's (VHA) Year 2000 activities. GAO noted that: (1) according to VA's August 14, 1997, quarterly report to the Office of Management and Budget, the Department has made progress in addressing the Year 2000 problem; (2) as noted in the report, one of its 11 mission-critical systems--the one serving the National Cemetery System--is already fully compliant; (3) of the ten remaining mission critical systems and their applications, 85 percent have been assessed and 51 percent have been renovated; (4) in addition, VA has updated its total Year 2000 cost estimate from $144 million (May 1997) to $162 million; (5) VA's stated reason for the increase is the need for upgrades to its commercial off-the-shelf software and hardware, and more contractual support; (6) VA's current estimate shows that it expects systems assessment to be completed by the end of next January, renovation of systems by November 1998, validation by January 1999, and implementation by October 1999--2 months earlier than VA reported in May; (7) VBA has responded to the Year 2000 challenge by initiating a number of actions, including developing an agencywide plan and a Year 2000 strategy and creating a program management organization; (8) however, several substantial risks remain; (9) if VBA is to avert serious disruption of its ability to disseminate benefits, it will need to strengthen its management and oversight of Year 2000-related activities; (10) VHA is in the initial stages of assessing the compliance of its two mission-critical systems; (11) it is essential that each of VHA's 22 regional health care networks thoroughly assesses and plans for ensuring Year 2000 compliance so that service delivery is not interrupted; (12) in order to effectively assess and renovate, it is necessary to understand how local customizations, software add-ons, external interfaces, and physical facilities may affect Year 2000 compliance; (13) VHA is assessing Year 2000 impact on medical devices; and (14) while GAO's detailed review of the VHA area is just now under way, it is clear that for VA as a whole to have all of its mission-critical systems compliant by January 1, 2000, will entail a huge, well-coordinated effort.
Although default rates (loans 90 days or more past due) fell from an all-time high of 5.09 percent at the end of the fourth quarter of 2009 to 3.94 percent at the end of the fourth quarter of 2010 (a nearly 23 percent drop over the course of a year), the percentage of loans in foreclosure rose to equal the highest level in recent history at 4.63 percent (fig. 1). The increase in foreclosure inventory during the latter part of 2010 may be due to issues surrounding foreclosure processing and procedures that resulted in various foreclosure moratorium initiatives. In addition, the percentage of loans that newly entered the foreclosure process in the fourth quarter of 2010 remained high at 1.27 percent, compared to 0.42 percent in the first quarter of 2005. As we reported in December 2008, Treasury has established an Office of Homeownership Preservation within the Office of Financial Stability (OFS), which administers TARP, to address the issues of preserving homeownership and protecting home values. On February 18, 2009, Treasury announced the broad outline of the MHA program. The largest component of MHA was the HAMP first-lien modification program, which was intended to help eligible homeowners stay in their homes and avoid potential foreclosure. Treasury intended that up to $75 billion would be committed to MHA ($50 billion under TARP and $25 billion from Fannie Mae and Freddie Mac) to prevent avoidable foreclosures for up to 3 to 4 million borrowers who were struggling to pay their mortgages. According to Treasury officials, up to $50 billion in TARP funds were to be used to encourage the modification of mortgages that financial institutions owned and held in their portfolios (whole loans) and mortgages held in private-label securitization trusts. Fannie Mae and Freddie Mac together were expected to provide up to an additional $25 billion from their own balance sheets to encourage servicers and borrowers to modify or refinance loans that those two government-sponsored enterprises (GSEs) guaranteed. Only financial institutions that voluntarily signed a Commitment to Purchase Financial Instrument and Servicer Participation Agreement (SPA) with respect to their non-GSE loans are eligible to receive TARP financial incentives under the MHA program. HAMP first-lien modifications are available to qualified borrowers who occupied their properties as their primary residence, who had taken out their loans on or before January 1, 2009, and whose first-lien mortgage payment was more than 31 percent of their gross monthly income (calculated using the front-end debt-to-income ratio (DTI)). Only single-family properties (one to four units) with mortgages no greater than $729,750 for a one-unit property were eligible. The HAMP first-lien modification program has four main features: 1. Cost sharing. Mortgage holders/investors are required to take the first loss in reducing the borrower's monthly payments to no more than 38 percent of the borrower's income. For non-GSE loans, Treasury then uses TARP funds to match further reductions on a dollar-for-dollar basis, down to the target of 31 percent of the borrower's gross monthly income. The modified monthly payment is fixed for 5 years or until the loan is paid off, whichever is earlier, as long as the borrower remains in good standing with the program.
After 5 years, investors no longer receive payments for cost sharing, and the borrower’s interest rate may increase by 1 percent a year to a cap that equals the Freddie Mac rate for 30-year fixed rate loans as of the date that the modification agreement was prepared. The borrower’s payment would increase to accommodate the increase in the interest rate, but the interest rate and monthly payments would then be fixed for the remainder of the loan. 2. Standardized net present value (NPV) model. The NPV model compares expected cash flows from a modified loan to the same loan with no modification, using certain assumptions. If the expected investor cash flow with a modification is greater than the expected cash flow without a modification, the loan servicer is required to modify the loan. According to Treasury, the NPV model increases mortgage investors’ confidence that modifications under HAMP are in their best financial interests and helps ensure that borrowers are treated consistently under the program by providing an externally derived objective standard for all loan servicers to follow. 3. Standardized waterfall. Servicers must follow a sequential modification process to reduce payments to as close to 31 percent of gross monthly income as possible. Servicers must first capitalize accrued interest and certain expenses paid to third parties and add this amount to the loan balance (principal) amount. Next, the interest rate must be reduced in increments of one-eighth of 1 percent until the 31 percent DTI target is reached, but servicers may not reduce interest rates below 2 percent. If the interest rate reduction does not result in a DTI ratio of 31 percent, servicers must then extend the maturity and/or amortization period of the loan in 1-month increments up to 40 years. Finally, if the target DTI ratio is still not reached, the servicer must forbear, or defer, principal until the payment is reduced to the 31 percent target. Servicers may also forgive mortgage principal at any step of the process to achieve the target monthly payment ratio of 31 percent, provided that the investor allows principal reduction. 4. Incentive payment structure. Treasury uses TARP funds to provide both one-time and ongoing incentives (“pay-for-success”) for up to 5 years to non-GSE loan servicers, mortgage investors, and borrowers. These incentives are designed to increase the likelihood that the program will produce successful modifications over the long term and help cover the servicers’ and investors’ costs for making the modifications. Borrowers must also demonstrate their ability to pay the modified amount by successfully completing a trial period of at least 90 days before a loan is permanently modified and any government payments are made under HAMP. Treasury has entered into agreements with Fannie Mae and Freddie Mac to act as its financial agents for MHA. With respect to Freddie Mac, these responsibilities are carried out by a separate division of that entity. Fannie Mae serves as the MHA program administrator and is responsible for developing and administering program operations including registering servicers and executing participation agreements with and collecting data from them, as well as providing ongoing servicer training and support. Within Freddie Mac, the MHA-Compliance (MHA-C) team is the MHA compliance agent and is responsible for assessing servicers’ compliance with non-GSE program guidelines, including conducting on-site and remote servicer loan file reviews and audits. 
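To make the cost-sharing and waterfall features described above concrete, the following sketch applies the four modification steps to a hypothetical loan. It is a minimal illustration, not Treasury's servicing logic: the amortization formula, the $1,000 forbearance increment, the function names, and all input values are our own simplifying assumptions, and actual modifications involve eligibility, escrow, and NPV requirements not shown here.

```python
# Illustrative sketch of the HAMP standardized waterfall and cost
# sharing (simplified). All names and inputs are hypothetical.

def monthly_payment(principal, annual_rate, months):
    """Standard fixed-rate amortization payment."""
    r = annual_rate / 12.0
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

def hamp_waterfall(balance, arrears, annual_rate, months, gross_income):
    target = 0.31 * gross_income  # 31 percent front-end DTI target
    # Step 1: capitalize accrued interest and third-party expenses.
    balance += arrears
    # Step 2: reduce the rate in one-eighth percent increments,
    # but never below the 2 percent floor.
    while annual_rate > 0.02 and monthly_payment(balance, annual_rate, months) > target:
        annual_rate = max(0.02, annual_rate - 0.00125)
    # Step 3: extend the term in 1-month increments, up to 40 years.
    while months < 480 and monthly_payment(balance, annual_rate, months) > target:
        months += 1
    # Step 4: forbear (defer, interest-free) principal until the
    # payment reaches the target; $1,000 steps keep the sketch simple.
    forbearance = 0.0
    while monthly_payment(balance - forbearance, annual_rate, months) > target:
        forbearance += 1_000.0
    payment = monthly_payment(balance - forbearance, annual_rate, months)
    return payment, annual_rate, months, forbearance

income = 3_000
payment, rate, months, forbearance = hamp_waterfall(
    balance=300_000, arrears=10_000, annual_rate=0.065,
    months=360, gross_income=income)
# Cost sharing on non-GSE loans: the investor takes the first loss
# down to 38 percent of income; Treasury matches further reductions
# dollar-for-dollar, so it bears half of the reduction below that level.
treasury_share = (0.38 * income - payment) / 2
print(f"payment ${payment:,.0f}/mo at {rate:.3%} over {months} months; "
      f"forbearance ${forbearance:,.0f}; Treasury share ${treasury_share:,.0f}/mo")
```

In this hypothetical case the 2 percent rate floor, the full term extension, and some principal forbearance are all needed to reach the 31 percent target, and Treasury's matching share applies only to the portion of the reduction below 38 percent of gross income.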
Initially, only servicers who signed a SPA prior to December 31, 2009, were eligible to participate in MHA. Subsequently, the Secretary of the Treasury exercised the authority granted under the Emergency Economic Stabilization Act of 2008 to extend TARP's obligation authority to October 3, 2010, which allowed servicers to continue to sign SPAs to participate in MHA until that time. As of December 31, 2010, there were a total of 143 active servicers. Through January 2011, $29.9 billion in TARP funds had been committed to these servicers for modification of non-GSE loans. Based on the MHA Servicer Performance Report through January 2011, nearly 1.8 million HAMP trial modifications had been offered to borrowers of GSE and non-GSE loans as of the end of January 2011, and nearly 1.5 million of these had begun HAMP trial modifications. Of the trial modifications begun, approximately 145,000 were in active trial modifications, roughly 539,000 were in active permanent modifications, roughly 740,000 trial modifications had been canceled, and roughly 68,000 permanent modifications had been canceled. Recently, the number of new trial and permanent modifications started each month has declined (fig. 2). As of December 31, 2010, $1 billion in TARP funds had been disbursed for TARP-funded housing programs, of which $840 million was disbursed for HAMP-related activity. Treasury has recently implemented programs to reduce or eliminate payments on second-lien mortgages, provide incentives for the use of short sales or deeds-in-lieu as alternatives to foreclosure, and provide incentives for the forgiveness of principal for borrowers whose homes are worth significantly less than their mortgage balances. However, as of December 2010, reported activity under these three programs had been limited. 2MP was announced in March 2009, and had disbursed $2.9 million out of nearly $133 million allocated to the program by the end of December 2010. In part, the limited activity appears to be the result of problems that servicers have experienced using the database that Treasury required to identify second-lien mortgages eligible for modification. Treasury has taken some steps to address these challenges, but could take further action to ensure that borrowers are aware of their potential eligibility for the program. HAFA was announced in March 2009 and had disbursed $9.5 million out of $4.1 billion allocated to the program by the end of December 2010. Restrictive program requirements (for example, that borrowers be evaluated for a HAMP first-lien modification before being evaluated for HAFA) appear to have limited program activity to date. Treasury has taken steps to revise program guidelines, but the extent to which these actions will increase program activity remains to be seen. PRA was announced in March 2010 and Treasury had not reported activity as of December 2010 for this $2 billion program. Mortgage investors and others have cited concerns that the voluntary nature of the program and transparency issues, including concerns about the extent of reporting on PRA activity, may limit the extent to which servicers implement PRA. Treasury has not yet implemented our June 2010 recommendation that it report activity under PRA, including the extent to which servicers determined that principal reduction was beneficial to investors but did not offer it, to ensure transparency in the implementation of this program feature across servicers.
Further, Treasury has not incorporated key lessons learned from implementation challenges it faced with the first-lien program. Similar to the first-lien modification program, Treasury has not established effective performance measures for these three programs, including goals for the number of borrowers it expects to help. As a result, determining the progress and success of these programs in preserving homeownership and protecting home values will be difficult. Under 2MP, Treasury provides incentives for second-lien holders to modify or extinguish a second-lien mortgage when a HAMP modification has been initiated on the first-lien mortgage for the same property. Treasury requires servicers who agree to participate in the 2MP program to offer to modify the borrower’s second lien according to a defined protocol when the borrower’s first lien is modified under HAMP. That protocol provides for a lump-sum payment from Treasury in exchange for full extinguishment of the second lien or a reduced lump-sum payment for a partial extinguishment and modification of the borrower’s remaining second lien. The modification steps for 2MP are similar to those for HAMP first-lien modifications, with the interest rate generally reduced to 1 percent and the loan term generally extended to match the term of the HAMP-modified first lien. In addition, if the HAMP modification on the first lien included principal forgiveness, the 2MP modification must forgive principal in the same proportion. Servicers were required to sign specific agreements to participate in 2MP. As of November 2010, 17 servicers were participating in the program, covering nearly two-thirds of the second-lien mortgage market. According to Treasury, 2MP is needed to create a comprehensive solution for borrowers struggling to make their mortgage payments, but Treasury officials we interviewed told us that the pace of 2MP modifications had been slow. Of the six servicers we contacted, five had signed 2MP participation agreements and represented the majority of potential second liens covered by servicers participating in the program. Only one of these five servicers had begun 2MP modifications as of the date we collected information from these servicers—over 18 months after the program was first announced by Treasury. This servicer reported that it had started 1,334 second-lien modifications. As of January 2011, Treasury had not yet begun reporting activity under 2MP. According to servicers and Treasury officials, the primary reason for the slow implementation of 2MP has been challenges in obtaining accurate matches of first and second liens from the data vendor required by Treasury. Treasury’s 2MP guidelines specify that in order for a second lien to be modified under 2MP, the corresponding first lien must first have been modified under the HAMP first-lien modification program. Fannie Mae, as the MHA program administrator, has contracted with a mortgage loan data vendor—Lender Processing Services (LPS)—to develop a database that would inform second-lien servicers when the corresponding first lien had been modified under HAMP. LPS was also the data vendor used by Fannie Mae to process the loan level data reported by servicers for the HAMP first-lien program. Under 2MP, participating servicers agree to provide LPS with information regarding all eligible second liens they serviced. 
LPS, in turn, provides participating 2MP servicers with data on second liens that have had the borrowers' corresponding first-lien mortgages modified under the HAMP program. However, the five participating 2MP servicers we spoke with all expressed concerns about the completeness or accuracy of LPS' data. In particular, they noted that differences in the spelling of addresses—for example, in abbreviations or spacing—could prevent LPS from finding matches between first and second liens. Additionally, one servicer reported that first-lien data could be incorrectly reported in LPS—for example, in one case, a borrower was incorrectly reported as not in good standing and, subsequently, was reported as canceled from HAMP. This mistake prevented the borrower's first and second liens from being matched, even though the borrower was in good standing and eligible for 2MP. Treasury has also acknowledged that an inability to identify first- and second-lien matches poses a potential risk to the successful implementation of 2MP. Initial 2MP guidelines stated that servicers could not offer a second-lien modification without a confirmation of a match from LPS, even if they serviced both first and second liens on the same property and, thus, would know if the first lien had been modified under HAMP. In November 2010, Treasury provided updated program guidance that revised the match requirement if servicers serviced both the first and second lien on a property. According to these updated guidelines, servicers can offer a 2MP modification when they identify a first- and second-lien match within their own portfolio or if they have evidence of the existence of a corresponding first lien, even if the LPS database has not identified it. While this change may enable more 2MP modifications, Treasury did not release this guidance until after participating servicers had already begun implementing 2MP, more than a year after the program's guidelines were first announced in August 2009. If they do not service both liens, second-lien servicers must rely on LPS for matching data or obtain sufficient documentation of the HAMP first-lien modification to identify the match. If the matching data provided by LPS is not accurate, it is possible that eligible borrowers will not receive second-lien modifications. Treasury noted that there are no standard data definitions in the servicing industry, making it difficult to match these data across servicers. To address some of the concerns about inaccurate and incomplete matches, Treasury officials told us they worked with LPS to change the matching protocols. Now LPS provides 2MP servicers with a list of confirmed address matches and a separate list of probable matches based only on loan number and zip code. Treasury told us that it would issue additional guidance for handling probable matches, but added that servicers would be responsible for confirming probable matches with LPS. Treasury does not require first-lien servicers to check credit reports to determine if borrowers whose first liens they modified also had second liens, and if so, the identity of the second-lien servicer. One servicer noted that credit reports did not always have complete and reliable information. In addition, Treasury does not require first-lien servicers to inform borrowers about their potential eligibility for the second-lien program.
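The matching difficulties described above amount to a record-linkage problem. The sketch below shows, in simplified form, how normalizing address formats can recover a match that exact string comparison would miss, and how a fallback match on loan number and zip code yields the kind of "probable" matches the revised protocol contemplates. The record fields, abbreviation table, and sample data are hypothetical; matching production servicing data would require far more extensive standardization.

```python
import re

# Illustrative address normalization for first/second-lien matching.
# The abbreviation table and record fields are hypothetical.

ABBREV = {"STREET": "ST", "AVENUE": "AVE", "ROAD": "RD",
          "DRIVE": "DR", "NORTH": "N", "SOUTH": "S"}

def normalize(address):
    """Upper-case, strip punctuation, collapse spacing, abbreviate."""
    tokens = re.sub(r"[^\w\s]", "", address.upper()).split()
    return " ".join(ABBREV.get(t, t) for t in tokens)

def match(first_liens, second_liens):
    """Confirmed matches on normalized address plus zip; probable
    matches on loan number plus zip only, per the revised protocol."""
    by_addr = {(normalize(r["address"]), r["zip"]): r for r in first_liens}
    by_loan = {(r["loan_number"], r["zip"]): r for r in first_liens}
    confirmed, probable = [], []
    for s in second_liens:
        key = (normalize(s["address"]), s["zip"])
        loan_key = (s["first_loan_number"], s["zip"])
        if key in by_addr:
            confirmed.append((by_addr[key], s))
        elif loan_key in by_loan:
            probable.append((by_loan[loan_key], s))
    return confirmed, probable

firsts = [{"loan_number": "F-1001", "zip": "22101",
           "address": "123 North Main Street"}]
seconds = [{"first_loan_number": "F-1001", "zip": "22101",
            "address": "123 N. MAIN ST"}]  # same property, new format
print(match(firsts, seconds))
```

In the example, "123 North Main Street" and "123 N. MAIN ST" normalize to the same string, so the pair is confirmed rather than merely probable; without normalization, exact comparison would have missed it entirely.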
Given these gaps in matching and borrower notification, borrowers may be unaware that their second lien could be modified and are unlikely to inquire with their second-lien servicers about a second-lien modification. Any gaps in the awareness of 2MP could contribute to delays in modifying eligible second-lien mortgages or missed opportunities altogether. Additionally, any delays or omissions increase the likelihood that the borrower with an eligible second lien may not be able to maintain the required monthly reduced payments on the modified first- and unmodified second-lien mortgages and ultimately redefault on their HAMP first-lien modification. Under HAFA, Treasury provides incentives for short sales and deeds-in-lieu of foreclosure as alternatives to foreclosure for borrowers who are unable or unwilling to complete the HAMP first-lien modification process. Borrowers are eligible for relocation assistance of $3,000 and servicers receive a $1,500 incentive for completing a short sale or deed-in-lieu of foreclosure. In addition, investors are paid up to $2,000 for allowing short-sale proceeds to be distributed to subordinate lien holders. Servicers who participate in the HAMP first-lien modification program are required to evaluate certain borrowers for HAFA—those whom they cannot approve for HAMP because, for example, they do not pass the NPV test or have investors that prohibit modifications; those who do not accept a HAMP trial modification; and those who default on a HAMP modification. All six of the large MHA servicers we spoke with identified extensive program requirements as reasons for the slow implementation of the program, including the requirement in the initial guidance that borrowers first be evaluated for a HAMP first-lien modification. Restrictive short-sale requirements and a requirement that mortgage insurers waive certain rights may also have contributed to the limited activity under HAFA. As a result, they said they did not expect HAFA to increase their overall number of short sales and deeds-in-lieu. Some of the program requirements identified by servicers as a reason for the slow implementation of the program were recently addressed by Treasury's December 28, 2010, revisions to its HAFA guidelines. Borrowers had to first be evaluated for HAMP. According to Treasury's initial guidelines, borrowers were to be evaluated for a HAMP first-lien modification before being considered for HAFA, even borrowers who specifically requested a short sale or deed-in-lieu rather than a modification. As such, borrowers interested in HAFA had to submit all income and other documentation required for a HAMP first-lien modification. According to servicers we interviewed, this requirement was more stringent than most proprietary short-sale requirements, and borrowers may have had difficulty providing all of the documentation required. For example, one servicer told us that it evaluated borrowers for proprietary short sales on the basis of the value of the property and the borrower's hardship and that income documentation was not required. Additionally, a HAMP evaluation may add extra time to the short-sale process. In cases where a borrower had already identified a potential buyer before executing a short-sale agreement with the servicer, the additional time required for a HAMP first-lien evaluation may have dissuaded the buyer from purchasing the property.
In response to this concern, Treasury released updated HAFA guidance on December 28, 2010, to no longer require servicers to document and verify a borrower's financial information to be eligible for HAFA. The updated guidance requires servicers to notify borrowers who request a short sale before they have been evaluated for HAMP about the availability of HAMP, but no longer requires the servicer to complete a HAMP evaluation before considering the borrower for HAFA, especially in circumstances where the borrower already has a purchaser for the property. As a result, borrowers who specifically request a short sale or deed-in-lieu can be considered for HAFA at the start of the HAMP evaluation process, rather than having to wait until the completion of the HAMP evaluation process. Restrictive short-sale requirements. According to servicers we spoke with, some HAFA short-sale requirements, such as occupancy requirements, may have been too restrictive. Specifically, one servicer cited as too restrictive the requirement in the initial guidelines that a property not be vacant for more than 90 days prior to the date of the short-sale agreement, and that if it is vacant, it is because the borrower relocated at least 100 miles away to accept new employment. To address this concern, Treasury issued updated guidance in December 2010, which extended the allowed vacancy period from 90 days to 12 months and eliminated the requirement that the borrower moved to accept employment, but added a requirement that the borrower had not purchased other residential property within the prior 12 months. Owner-occupancy restrictions may also limit the number of HAFA short sales and deeds-in-lieu. One servicer noted that many of the short sales it completed outside of HAFA were for nonowner-occupied properties, which may include second homes or commercial properties. However, HAFA offers alternatives to foreclosure only for eligible loans under HAMP, which is intended for a property serving as a borrower's principal residence. Waiving of rights by mortgage insurers to collect additional sums. According to Treasury guidelines, "a mortgage loan does not qualify for HAFA unless the mortgage insurer waives any right to collect additional sums (cash contribution or a promissory note) from the borrower." Some servicers noted that this requirement had prevented some HAFA short sales from being completed due to difficulties in obtaining approval for HAFA short sales from mortgage insurers. Lenders frequently require mortgage insurance for loans that exceed 80 percent of the appraised value of the property at the time of origination. Under a short-sale scenario, the mortgage insurance company could be responsible for paying the mortgage holder or investor for all or part of the losses incurred under the short sale depending upon the coverage agreement and proceeds from the sale. Mortgage insurance representatives we spoke with indicated that while they supported HAFA participation, they felt that mortgage insurers should not have to waive their rights to collect additional sums if borrowers had some ability to pay them. These representatives told us that they had not seen many requests for approvals of HAFA foreclosure alternatives, so they did not believe this requirement was a key impediment for HAFA. However, they agreed that because servicers did not know whether mortgage insurers would agree to waive their rights, the requirement could make it more difficult to solicit borrowers for HAFA.
To minimize the impact of this requirement, one mortgage insurance representative noted that his company commits to responding to servicers within 48 hours with a decision about whether the mortgage insurance company agrees to forego a contribution from the borrower. We plan to continue to monitor the progress of the HAFA program, including the impact of Treasury's December 2010 revisions to its HAFA guidelines as well as the other program requirements identified by servicers as contributing to the slow implementation of the program, as part of our ongoing oversight of the performance of TARP. PRA provides financial incentives to investors who agree to forgive principal for borrowers whose homes are worth significantly less than the remaining amounts owed under their first-lien mortgage loans. Treasury's PRA guidelines require servicers to consider principal forgiveness for any HAMP-eligible borrowers with a mark-to-market loan-to-value (MLTV) ratio greater than 115 percent, using both the standard waterfall and an alternative waterfall that incorporates principal reduction. While servicers must consider borrowers for principal forgiveness, they are not required to offer it, even if the NPV result for the loan is higher when principal is forgiven. If they choose to offer forgiveness, servicers must reduce the balance borrowers owe on their mortgages in increments over 3 years, but only if the borrowers remain current on their payments. Servicers must provide Treasury with written policies detailing when principal forgiveness will be offered. According to Treasury, a survey of the 20 largest servicers indicates that 13 servicers are planning to offer principal reduction to some extent. Of the six servicers we spoke with, three said that they planned to offer principal reduction under the program in all cases in which the NPV was higher with PRA, unless investor restrictions prevented it. As of October 2010, one of these three servicers had begun HAMP trial modifications with PRA, another had begun implementation of PRA but had not yet made trial modification offers with PRA, and the third servicer had not yet completed implementation of the program. The three remaining servicers we spoke with said they would limit the conditions under which they would offer principal forgiveness under the program. One servicer offered PRA only for adjustable-rate mortgage loans, subprime loans, and 2-year hybrid loans, and another had developed a "second look" process for reviewing loans that had a higher NPV result with principal forgiveness. This servicer reevaluated these loans using its internal estimates of default rates and did not forgive principal unless its own estimates indicated a higher NPV with forgiveness. As a result, only 15 to 25 percent of those who otherwise would have received principal forgiveness will receive it after this "second look" process, according to this servicer. The third servicer said it would not offer PRA for loans that had mortgage insurance, noting that mortgage insurers typically took the first loss on a loan and the PRA would alter that equation with the investor absorbing the full amount of loss associated with the principal reduction. Four of the six servicers we contacted told us that investor restrictions against principal forgiveness would not limit their ability to offer principal reduction. However, one servicer noted that about half the loans it serviced had investor restrictions against principal forgiveness. Another servicer noted that a material number of its servicing agreements with investors prohibited principal forgiveness.
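The dual evaluation the PRA guidelines require can be illustrated with a stylized net present value comparison. All inputs in the sketch below (discount rate, redefault probabilities, and foreclosure recovery value) are invented for illustration, and the functions are our own simplifications; Treasury's actual NPV model is considerably more detailed. As noted above, a higher NPV with forgiveness does not obligate a servicer to offer it.

```python
# Stylized PRA "dual NPV" comparison. All numbers are invented for
# illustration; Treasury's actual NPV model is far more detailed.

def pv_annuity(payment, monthly_rate, months):
    """Present value of a level monthly payment stream."""
    return payment * (1 - (1 + monthly_rate) ** -months) / monthly_rate

def expected_npv(payment, months, p_redefault, foreclosure_recovery,
                 discount=0.005):
    """Expected value to the investor: modified cash flows if the
    borrower performs, foreclosure recovery if the loan redefaults."""
    performs = pv_annuity(payment, discount, months)
    return (1 - p_redefault) * performs + p_redefault * foreclosure_recovery

# Hypothetical outcomes of the standard and alternative waterfalls.
standard = expected_npv(payment=1_240, months=480,
                        p_redefault=0.45, foreclosure_recovery=110_000)
with_pra = expected_npv(payment=1_150, months=480,
                        p_redefault=0.30, foreclosure_recovery=110_000)

# Servicers must run both evaluations but apply their own written
# policy; investor restrictions can rule forgiveness out entirely.
investor_allows_forgiveness = True
offer_pra = with_pra > standard and investor_allows_forgiveness
print(f"NPV standard ${standard:,.0f}, NPV with PRA ${with_pra:,.0f}, "
      f"offer principal reduction: {offer_pra}")
```

The example shows why principal reduction can favor the investor despite a smaller payment stream: a sufficiently lower redefault probability more than offsets the reduced cash flows.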
Mortgage investors we spoke with expressed concern about PRA's design and transparency. In particular, they expressed concern that because the HAMP NPV model did not use an LTV that reflected both the first and second liens (combined LTV), the model might not reflect an accurate NPV result. That is, the NPV model might understate the likelihood of redefault if it did not use the combined LTV. As a result, investors face the prospect of forgiving principal without knowing the true redefault risk. Further, although the purpose of PRA is to address negative equity, not taking the combined LTV into account would underestimate the population of underwater borrowers since it would not account for any associated second liens. In addition, under PRA, servicers must forgive principal on the second lien in the same proportion as the principal forgiven on the first lien. However, mortgage investors expressed concern about limited transparency into whether servicers were forgiving principal on the second lien. Additionally, the Special Inspector General for TARP (SIGTARP) recommended in July 2010 that Treasury reevaluate the voluntary nature of the program and consider changes to ensure the consistent treatment of similarly situated borrowers. According to Treasury, servicers began reporting PRA activity in January 2011 for trial and permanent modifications through December 31, 2010, but it is still unclear what level of program detail Treasury will publicly report. We recommended in June 2010 that Treasury report activity under PRA, including the extent to which servicers determined that principal reduction was beneficial to mortgage investors but did not offer it, to ensure transparency in the implementation of this program. Treasury officials told us they would report PRA activity at the servicer level once the data were available. We plan to continue to monitor Treasury's reporting of PRA and other TARP-funded housing programs. In our June 2010 report, we pointed out that it was important that Treasury incorporate lessons learned from the challenges experienced with the HAMP first-lien modification program into the design and implementation of the newer MHA-funded programs. In particular, we noted that it would be important for Treasury to expeditiously develop and implement these new programs (including 2MP, HAFA, and PRA) while also developing sufficient program planning and implementation capacity, including providing program policies and guidance, hiring needed staff, and ensuring that servicers are able to meet program requirements. Treasury officials said they solicited input from servicers and investors when designing 2MP, PRA, and HAFA, and have begun to perform readiness reviews for these servicers. However, servicers have cited challenges with changing guidance under these programs. We also noted that Treasury needed to implement appropriate risk assessments and meaningful performance measures in accordance with standards for effective program management. However, Treasury has not completed program-specific risk assessments, nor has it developed performance measures to hold itself and servicers accountable for these TARP-funded housing programs or finalized specific actions it could take in the event servicers fail to meet program requirements. Program planning and implementation capacity. Treasury has provided servicers with some guidance on the new programs, but some servicers said that ongoing changes to the guidelines have presented challenges.
In June 2010, we noted that effective program planning included having complete policies, guidelines, and procedures in place prior to program implementation. Treasury published initial guidance for 2MP, HAFA, and PRA prior to the dates these programs were effective, and some servicers indicated that implementation of these newer programs was smoother than it was with the first-lien modification program (see fig. 3). However, other servicers indicated that initial program guidance was unclear and that additional guidance was issued late in the implementation process. For example, while Treasury first announced the 2MP program in March 2009, it did not publish specific 2MP guidelines until August 2009 and then issued revisions to the guidelines in March 2010, the first month of official implementation, with revisions in June 2010 and again in November 2010. According to the servicers we contacted, ongoing program revisions presented challenges such as needing to retrain staff and, in some cases, delays in program implementation. Treasury officials noted that issuing additional guidance improves the program and is often necessary as circumstances change. Servicers also reported that while initial guidance for PRA was issued before the effective date of the program, Treasury did not issue guidance specific to the NPV 4.0 model until October 1, 2010, the date PRA became effective. As a result, servicers told us that there was insufficient time to update internal servicing systems in time to implement PRA as of its effective date. Treasury has also not completed a needed workforce assessment to determine whether it has enough staff to successfully implement the new programs. In July 2009, we recommended that Treasury place a high priority on fully staffing vacancies in its Homeownership Preservation Office (HPO), the office within Treasury responsible for MHA governance, and fill all necessary positions. According to Treasury officials, each director within HPO conducts ongoing informal assessments of staffing needs, and Treasury has recently added two positions in marketing and communications, as well as two additional staff to address policies regarding the borrower complaint process. In addition, two additional staff positions to support the borrower complaint resolution process have recently been approved by the staffing board. HPO has also named a Deputy Chief. In addition, Treasury officials told us that Fannie Mae and Freddie Mac, Treasury's financial agents for MHA, had doubled the number of staff devoted to these functions as the complexity of MHA has increased. However, as of December 2010, Treasury had not conducted a formal workforce assessment of HPO, despite the addition of the new MHA programs, 2MP, HAFA, and PRA. As we noted in July 2009, given the importance of HPO's role in monitoring the financial agents, servicers, and other entities involved in the $45.6 billion TARP-funded housing programs, having enough staff with appropriate skills is essential to governing the program effectively. Servicers have not demonstrated full capacity to effectively carry out these programs. Treasury has previously stated that the implementation of the HAMP first-lien program was hindered by the lack of capacity of servicers to implement all of the requirements of the program. According to Treasury, Fannie Mae has conducted program-specific readiness reviews for the top 20 large servicers for HAFA and PRA, including all 17 servicers participating in 2MP.
These reviews assess servicers' operational readiness, including key controls to support new programs, technology readiness, training readiness, staffing resources, and program processes and documentation. According to Treasury officials, 5 servicers have completed readiness reviews for 2MP, and 5 additional servicers were scheduled to be surveyed in January 2011; 19 servicers have completed these reviews for HAFA; and 18 servicers have completed these reviews for PRA. According to Treasury's summary of these reviews, a large majority of servicers completing these readiness reviews did not provide all documentation required to demonstrate that the key tasks needed to support these programs were in place at the time of the review. Of those that had completed reviews, 4 had provided all required documents for HAFA and 3 had provided all required documents for PRA. None of the servicers provided all required documents for 2MP. Treasury notes that it relies on Fannie Mae to monitor program readiness and that MHA-C reviews all programs as part of its on-site reviews. Nonetheless, it is unclear what actions Treasury has taken to ensure that the servicers who did not submit the required documentation have the capacity to effectively implement the programs, making less certain the ability of these servicers to fully participate in offering troubled homeowners second-lien modifications, principal reduction, and foreclosure alternatives. Meaningful performance measures and remedies. As we also reported in June 2010, Treasury must establish specific and relevant performance measures that will enable it to evaluate the program's success against stated goals in order to hold itself and servicers accountable for these TARP-funded programs. While Treasury has established program estimates of the expected funding levels for 2MP, HAFA, and PRA programs, it has not fully developed specific and quantifiable servicer-based performance measures or benchmarks to determine the success of 2MP, HAFA, and PRA, including goals for the number of homeowners these programs are expected to help. Treasury officials told us that they were using the amounts of TARP funds allocated to MHA servicers to determine estimated participation rates, but this estimate is adjusted on a quarterly basis and, according to Treasury, is not the best measure for holding servicers accountable. Treasury officials stated that when data became available they would assess certain aspects of program performance—for example, they noted that Treasury planned to assess the redefault rates of modifications that received PRA or 2MP, compared with those that did not. However, Treasury has not set benchmarks, or goals, for these performance measures, as we recommended in June 2010. In addition, Treasury has not stated how it will use these assessments to hold servicers accountable for their performance or what remedial actions it will take in cases where individual servicers are not performing as expected in these programs. We continue to believe that Treasury should take steps to establish benchmarks that can be used to hold servicers accountable for their performance. Appropriate risk assessment. We previously reported that agencies must identify the risks that could impede the success of new programs and determine appropriate methods of mitigating these risks. In particular, we highlighted the need for Treasury to develop appropriate controls to mitigate those risks before the programs' implementation dates.
Although Treasury has not systematically assessed risks at the program level, Treasury officials told us they had identified several risks associated with 2MP, HAFA, and PRA and specified ways to mitigate these risks, and added they were planning to begin new risk assessments in January 2011 that would be completed by June 2011. According to Treasury officials, this new round of risk assessments will include 2MP, HAFA, and PRA, but the programs will not be evaluated individually. In addition, Treasury has not yet fully addressed all program-specific risks. As we have seen, Treasury has acknowledged the risk that the matching database for 2MP may not identify all first liens modified under HAMP. While Treasury began addressing this issue in updated guidance released in November 2010, it cannot yet determine whether all borrowers eligible for 2MP are being identified and considered for second-lien modifications. Treasury has also acknowledged several potential risks with all types of short-sale transactions, including HAFA transactions. According to Treasury officials, these risks include those arising from sales to allied parties, side agreements, and rapid resales. For example, Treasury officials noted a short-sale purchaser could be inappropriately related to the servicer, allowing the short sale to be engineered to generate extra compensation for one or both parties. Treasury states that HAFA includes requirements to mitigate these risks, such as requiring arm's-length transactions. According to Treasury officials, MHA-C, the group within Freddie Mac that acts as Treasury's financial agent for MHA compliance activity, is also in the process of developing compliance procedures to address these risks. Further, Treasury has identified several potential risks with PRA, including servicer noncompliance with PRA requirements, moral hazard (the risk that borrowers would default on their mortgages to receive principal reduction when they otherwise would not have), and low program participation. According to Treasury officials, these risks will be mitigated through regular compliance reviews, servicer reporting of NPV results both with and without PRA, and other program requirements. For example, to guard against moral hazard, Treasury requires that borrowers be experiencing hardship and that servicers forgive the principal over 3 years only if the borrower remains current on the modified payments. However, low program participation may continue to be a risk for PRA, despite the initial participation plans of several of the large servicers. While Treasury officials told us they plan to monitor the reasonableness of the extent of principal forgiveness on a servicer-specific basis, we continue to believe that due to the voluntary nature of the program, Treasury will need to ensure full and accurate servicer-specific reporting of program activity for future assessments of the extent to which servicers are offering PRA when the NPV is higher with principal forgiveness, as we recommended in June 2010. We plan to continue to monitor and report on Treasury's risk assessment and control activities for MHA programs as part of our ongoing oversight of Treasury's use of TARP funds to preserve homeownership and protect property values.
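The incremental-forgiveness safeguard against moral hazard described above can be sketched directly. In the illustration below, the principal reduction vests over 3 years and only while the borrower remains current; the equal one-third installments and the names are our own illustrative assumptions, not stated program terms.

```python
# Stylized PRA vesting schedule: principal is forgiven in increments
# over 3 years only while the borrower stays current. The equal
# one-third installments are an illustrative assumption.

def vested_forgiveness(total_reduction, full_years_current):
    """Cumulative principal forgiven after a number of years current."""
    installments = min(max(full_years_current, 0), 3)
    return total_reduction * installments / 3

# A borrower who redefaults during year 3 keeps the first two
# installments but forfeits the rest.
for year in range(4):
    print(f"after year {year}: ${vested_forgiveness(30_000, year):,.0f} forgiven")
```

Deferring the reduction in this way gives the borrower a continuing incentive to stay current rather than defaulting simply to obtain forgiveness.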
Our analysis of Treasury's HAMP data through September 30, 2010, indicated that borrowers who entered into trial modifications or received permanent modifications continued to have elevated levels of debt, as evidenced by the median back-end DTI for these two groups (55 and 57 percent, respectively). Borrowers who received a trial modification based on stated (unverified) income—a practice that Treasury no longer permits—were the most likely to have their trial modifications canceled, and borrowers who were the most delinquent on their mortgage payments at the time of applying for a loan modification were the most likely to redefault on their modifications. While the data Treasury collected from the servicers provided these and other insights into the characteristics of borrowers helped under the program, some data were missing and some information was inaccurate, preventing certain types of analyses of HAMP borrowers. For example, race and ethnicity information was not available for a significant portion of borrowers. In addition, Treasury's data on borrowers' LTV ratios at the time of modification ranged from 0 to 999, with 1 percent of non-GSE borrowers in active permanent modifications reporting ratios over 400 percent, implying that some borrowers who received HAMP modifications did not have a mortgage, and others had loan amounts more than 4 times the value of their homes. Treasury said that it and Fannie Mae were continuing to refine and strengthen data quality checks and that the data would improve over time. According to Treasury's HAMP data, 88,903 non-GSE borrowers were in active HAMP trial modifications and 205,449 borrowers were in permanent modifications as of the end of September 2010. These borrowers generally cited a reduction in income as their primary reason for hardship when applying for HAMP modifications. Over half of borrowers cited a "curtailment of income," such as a change to a lower-paying job, as the primary reason they were experiencing financial hardship (56 percent and 53 percent of those in active trial and permanent modifications, respectively). However, only 5 percent of borrowers in each of these groups cited unemployment as their primary reason for hardship. Borrowers in trial and permanent modifications through September 2010 also had high levels of debt prior to modification—median front-end DTI ratios of 45 and 46 percent, and back-end DTI ratios of 72 and 76 percent, respectively. Even after modification, these borrowers continued to have high debt levels (median back-end DTI ratios of 55 and 57 percent for those in trial and permanent modifications, respectively). Treasury has defined a high back-end DTI to be 55 percent, and has required borrowers with total postmodification debt at this level to obtain counseling. In addition, borrowers in trial and permanent modifications tended to be "underwater," with median mark-to-market LTV ratios of 123 percent and 128 percent, respectively. Borrowers who were unsuccessful in HAMP modifications, either because they were canceled from a trial modification or because they redefaulted from permanent modifications, shared several of these characteristics, including having high levels of debt and being "underwater" on their mortgages. However, some characteristics appeared to increase the likelihood that a borrower would be canceled from a trial modification.
Holding other potential factors constant, the following factors increased the likelihood that a borrower would be canceled from a trial modification: Use of Stated Income. Borrowers who received a trial modification based on stated income were 52 percent more likely to be canceled from trial modifications than those who started a trial modification based on documented income. In some cases, borrowers who received trial modifications based on stated income were not able to or failed to provide proof of their income or other information for conversion to permanent modification. In other cases, borrowers may have submitted the required documentation but the servicer lost the documents. Over one-third of the 396 housing counselors who responded to our survey identified servicers losing documentation as the most common challenge that borrowers have faced in providing the required documentation for a permanent modification. In December 2010, the Congressional Oversight Panel also reported that Treasury has failed to hold loan servicers accountable when they have repeatedly lost borrowers' paperwork. Length of Trial Period. Borrowers who were in trial modification periods for fewer than 4 months were about 58 percent more likely to have their trial modifications canceled than borrowers in longer trial periods. This finding may indicate that borrowers who default on their trial modifications will do so earlier in the process rather than later. Delinquency Level at Time of Modification. Borrowers who were 60 or 90 days or more delinquent at the time of their trial modifications were 6 and 9 percent more likely to have trial modifications canceled, respectively, compared with borrowers who were not yet delinquent at the time of their trial modifications. Treasury has acknowledged the importance of reaching borrowers before they are seriously delinquent by requiring servicers to evaluate borrowers still current on their mortgages for imminent default, but as we noted in June 2010, this group of borrowers may be defined differently by different servicers. In addition, most borrowers who received HAMP were delinquent on their mortgages at the time of modification—as of September 30, 2010, 83 percent of those who had begun trial or permanent modifications were at least 60 days delinquent on their mortgages. According to our analysis, there were also several factors that lowered the likelihood of trial cancellations, although the effect was generally smaller than the factors that increased the likelihood of being canceled. High MLTV Ratio. Borrowers who had high MLTV ratios (above 120 percent) were less likely to be canceled from a trial modification compared with those with MLTV ratios at or below 80 percent. That is, loans with an MLTV between 120 and 140 percent were 7 percent less likely to be canceled, while loans with an MLTV of more than 140 percent were 8 percent less likely to be canceled. Amount of Principal or Payment Reduction. While only about 2 percent of borrowers had received principal forgiveness as of September 30, 2010, borrowers who received principal forgiveness of at least 1 percent of their total loan balance were less likely to be canceled from trial modifications, compared with those who did not receive principal forgiveness. In addition, larger monthly payment reductions lowered the likelihood that a trial modification would be canceled.
For example, our analysis showed that borrowers who received a principal and interest payment reduction of at least 10 percent were less likely to be canceled from their trial modifications than borrowers who received a payment reduction of less than 10 percent or who had an increase in payments. Figure 4 illustrates the extent to which certain factors increase or decrease the likelihood of borrowers being canceled from HAMP trial modifications. See appendix II for further details on our analysis of factors affecting the likelihood of trial modification cancellation. In addition, our initial observations of over 15,000 non-GSE borrowers who had redefaulted from permanent HAMP modifications through September 2010 indicated that these borrowers differed from those in active permanent modifications in several respects. Specifically, non-GSE borrowers who redefaulted on their HAMP permanent modifications tended to have the following characteristics:

higher levels of delinquency at the time of trial modification evaluation (median delinquency of 8 months compared to 5 months for those still in active permanent modifications);

lower credit scores, although borrowers current on their HAMP-modified payments also had low median credit scores (525 and 552, respectively);

a lower median percentage of payment reduction (24 percent compared with 33 percent for those who were still current in their permanent modifications); and

lower levels of debt before modification (median front-end DTI ratio of 41 percent prior to modification compared to 46 percent for those still current in their permanent modifications)—these borrowers likely did not receive as much of a payment reduction from the modification because they had lower levels of debt to begin with.

These results were largely consistent with information that the Federal Deposit Insurance Corporation (FDIC) released on the performance of its IndyMac loan modifications. For example, FDIC found that borrowers’ delinquency status prior to loan modification correlated directly with redefault rates after modification, with a 1-year redefault rate of roughly 25 percent for borrowers who were 2 months delinquent at the time of modification compared to a nearly 50 percent redefault rate for those who were more than 6 months delinquent at the time of modification. FDIC also reported that the redefault rates for its IndyMac modifications declined markedly with larger reductions in monthly payments. Treasury’s data on HAMP provide important information and insights on characteristics of borrowers who are in trial and permanent modification, who have been canceled from trial modifications, and who have redefaulted from permanent modifications. However, Treasury’s database contained information that was inaccurate or inconsistent, and Treasury does not collect information on all borrowers who are denied HAMP modifications. For example, Treasury’s data on borrowers’ LTV ratios at the time of modification ranged from 0 to 999, with 1 percent of non-GSE borrowers in active permanent modifications reporting ratios over 400 percent, implying that some borrowers who received HAMP modifications did not have a mortgage and others had loan amounts more than 4 times the value of their homes. Some data elements also included internal inconsistencies.
For example, a borrower’s back-end DTI (the ratio of total monthly debt to gross monthly income) includes the front-end DTI (the ratio of monthly housing debt to gross monthly income) and, therefore, should always be at least equal to the front-end DTI. However, according to Treasury’s database, 29 percent of those in trial modifications and 40 percent of those who had trial modifications canceled had back-end DTIs that were less than their front-end DTIs. The quality of these data improved for those who received permanent modifications, with only 3 percent of these borrowers showing back-end DTIs that were less than their front-end DTIs. Treasury acknowledged that its HAMP database contained some inconsistencies, despite edit checks conducted by Fannie Mae as the HAMP administrator. According to Treasury, the inconsistencies continue because of servicers’ data-entry errors, data formatting mistakes such as entering percentages as decimals rather than whole numbers, and data mapping problems. Treasury said it was continuing to work with Fannie Mae to refine and strengthen data quality checks and that the data had improved and would continue to improve over time. For example, Treasury noted that since September 2010, it has worked to improve the quality of borrower and loan attributes such as back-end DTI and modification terms. Treasury officials said that the error rate on these data elements had dropped from 16 percent and 12 percent for trial and permanent modifications, respectively, to 2 percent and 10 percent. Treasury’s HAMP database also was missing a significant amount of information on borrowers’ race and ethnicity, which has so far prevented an assessment of whether HAMP is being implemented fairly across servicers. For example, as of September 30, 2010, race and ethnicity information was not available for 65 percent of non-GSE borrowers in active trial modifications. A significant portion of borrowers declined to report this information—that is, for 45 percent of non-GSE borrowers in active trial modifications the category was marked as “not provided by borrower.” However, for another 20 percent, some data are simply missing, with no category marked. Some of this information may be missing because servicers were not required to report borrowers’ race and ethnicity until after December 1, 2009. As a result, Treasury lacks the complete information needed to determine whether the first-lien modification program has been implemented fairly across all borrowers. In addition, Treasury acknowledged data-mapping problems with race and ethnicity data that resulted in some data being included in the system of record but inadvertently excluded from the database. Combined, these factors resulted in a large proportion of borrowers without race and ethnicity information as of September 30, 2010. According to Treasury officials, Fannie Mae was making improvements to the data mapping, which should allow Treasury to better evaluate whether HAMP is being implemented fairly across all borrowers. Treasury officials told us they anticipated that the more complete data would be ready to use in early 2011. On January 31, 2011, Treasury announced the availability of loan-level HAMP data to the public for the first time. The data files were as of November 30, 2010, and included information on borrowers’ race and ethnicity. According to Treasury, these data indicated that roughly 31 percent of borrowers who started trial modifications after December 1, 2009, did not report race and ethnicity data.
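The internal consistency problems described above lend themselves to simple automated screens. The following is a minimal sketch of such a screen, assuming a Python/pandas environment and hypothetical column names (front_end_dti, back_end_dti, ltv_at_modification); it is illustrative only and does not represent the actual edit checks run by Fannie Mae or Treasury:

```python
import pandas as pd

# Hypothetical loan-level extract; file and column names are illustrative only.
df = pd.read_csv("hamp_modifications.csv")

# By definition, back-end DTI includes front-end DTI, so it should
# never be smaller.
inconsistent = df["back_end_dti"] < df["front_end_dti"]
print(f"{inconsistent.mean():.1%} of records have back-end DTI < front-end DTI")

# Percentages entered as decimals (e.g., 0.45 instead of 45) show up as
# ratios strictly between 0 and 1; LTVs above a sanity ceiling (e.g., 400)
# suggest data-entry or mapping errors like the 0-to-999 range noted above.
suspect_decimal = df["back_end_dti"].between(0, 1, inclusive="neither")
suspect_ltv = df["ltv_at_modification"] > 400
print(f"{suspect_decimal.sum()} suspected decimal entries, "
      f"{suspect_ltv.sum()} LTVs above 400 percent")
```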
Treasury also reported approximately 6 percent of data as not applicable or not reported by the servicer. In addition, roughly 57 percent of those who were denied or did not accept trial modifications did not report or were missing this information. Finally, Treasury’s HAMP database did not contain information on all borrowers who were denied HAMP, as some borrowers were denied before income information was collected for a net present value test. Treasury currently requires servicers to report identifying information, such as borrowers’ names and Social Security numbers, as well as the reason for denial for all borrowers denied modification, but other data elements—including income information, level of delinquency, LTV, and GSE or non-GSE status—are not required to be collected if borrowers are denied because they do not meet basic eligibility requirements, such as the property being owner-occupied. According to data we received from Treasury, through September 30, 2010, some information was lacking on 85 percent of borrowers who were denied HAMP trial modifications, including monthly gross income amounts and the number of months in delinquency. Treasury noted that these data are incomplete because servicers cannot obtain the information or because obtaining it would not be a good use of servicer resources. While we recognize that servicers may be unable to collect information from borrowers who were previously denied trial modifications, going forward it will be important for Treasury to collect sufficient information from servicers to assess program gaps. According to Treasury, it has asked servicers to report on borrowers who were denied HAMP when low volumes of these data were received. Because there have been more HAMP trial modification cancellations than conversions to permanent modifications, we evaluated Treasury’s reporting of the disposition paths, or outcomes, of borrowers who were denied or canceled from HAMP trial modifications and obtained additional information from six large MHA servicers to understand the extent to which these borrowers have been able to avoid foreclosure to date. While it appears that the majority of these borrowers had been able to avoid foreclosure as of the time of our data collection and Treasury’s survey, if borrowers are being evaluated for a loss mitigation option such as a proprietary modification and the servicer has also started foreclosure proceedings, Treasury’s data reporting template will result in the loan being reported only as a proprietary modification or the other applicable loss mitigation category, understating the number of borrowers who have had foreclosure proceedings started. In addition, Treasury’s reporting of outcomes for these borrowers does not differentiate between borrowers who received proprietary modifications and those who were still being evaluated for these modifications, some of whom will not ultimately receive them. For example, for six large servicers, Treasury reported that 43 percent of borrowers who had their trial modifications canceled received proprietary modifications. However, the reported 43 percent includes both borrowers who had received proprietary modifications and those who were being evaluated for proprietary modifications. Data we collected from the same servicers indicate that only 18 percent of borrowers with canceled trial modifications received permanent proprietary modifications, while another 23 percent had pending but not yet approved permanent modifications.
Without a complete picture of the outcomes of borrowers who were denied or canceled from HAMP, Treasury cannot accurately evaluate the outcomes for these borrowers and determine whether further action may be needed to assist this group of borrowers. According to HAMP guidelines, servicers must consider all potentially HAMP-eligible borrowers for other loss mitigation options, such as proprietary modifications, payment plans, and short sales, prior to a foreclosure sale. To report the current outcomes of borrowers who applied for but did not receive a HAMP trial modification or had a HAMP trial modification canceled, Treasury surveys the eight largest HAMP servicers each month and publishes these data in the monthly servicer performance reports. However, Treasury’s requirements for reporting these data produce results that do not fully reflect all outcomes for borrowers who were denied or canceled from HAMP and overstate the proportion of some outcomes. First, in order to prevent double counting of transactions, the survey does not allow servicers to place a borrower in more than one outcome category. Additionally, servicers must follow the order in which Treasury lists the outcomes on the survey. These rules do not allow for the accurate reporting of borrowers being considered for multiple potential outcomes. For example, a servicer could be evaluating a borrower who had been denied a HAMP modification for a proprietary modification at the same time that the servicer started foreclosure proceedings. But the Treasury survey would capture only the proprietary modification, because that category is the first in the list of possible outcomes. Because servicers are allowed to evaluate borrowers for loss mitigation options while simultaneously starting foreclosure, Treasury’s requirement that borrowers be included in only one category, starting with proprietary modifications, likely overstates the proportion of borrowers with proprietary modifications while also understating the number of borrowers who have started foreclosure. Furthermore, a comparison of Treasury’s data to data we received from six large MHA servicers on the outcomes of borrowers denied a HAMP trial modification showed that Treasury’s requirement that servicers place borrowers according to a specific order of outcomes may result in an understatement of the number of borrowers becoming current. For example, according to the data we received, almost 40 percent of borrowers who were denied a HAMP trial modification became current without any additional assistance from the servicer as of August 31, 2010. In comparison, Treasury reported that only 24 percent of borrowers became current after applying for but not receiving a HAMP trial modification through these same servicers. While differences may exist between the populations of these data, a servicer we spoke with noted that one reason the percentage of current borrowers in the Treasury survey was lower than the percentage reported in our data was Treasury’s requirement that servicers report outcomes in a certain order, with “borrower current” listed last. As a result, borrowers are reviewed for all other outcomes before being reflected in this category. Placing borrowers in only one category according to a specific order may not reflect all of the outcomes experienced by these borrowers and may understate outcomes further down the list, such as starting foreclosure or becoming current.
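To make the effect of this ordered, single-category reporting concrete, the sketch below mimics the first-match bucketing that Treasury’s survey effectively imposes. The category names and their order are illustrative stand-ins, not Treasury’s actual survey labels:

```python
# Illustrative stand-ins for Treasury's ordered outcome categories,
# with proprietary modifications first and "borrower current" last.
SURVEY_ORDER = [
    "proprietary modification",
    "payment plan",
    "short sale",
    "foreclosure started",
    "borrower current",
]

def survey_bucket(outcomes):
    """Return the single category reported: the first category in the
    survey order that applies to the borrower, else 'action pending'."""
    for category in SURVEY_ORDER:
        if category in outcomes:
            return category
    return "action pending"

# A borrower in foreclosure who is simultaneously being evaluated for a
# proprietary modification is reported only under the earlier category,
# so the foreclosure start never appears in the published figures.
print(survey_bucket({"foreclosure started", "proprietary modification"}))
# -> proprietary modification
```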
Second, while Treasury’s survey includes an “action pending” category, all six of the servicers we spoke with told us that Treasury had instructed them to include borrowers who were being evaluated for an outcome in their respective outcome categories, such as proprietary modification, rather than the “action pending” category. Treasury recently instructed servicers to use the action pending category only if a borrower had recently been denied a HAMP trial modification, had a HAMP trial modification canceled, or had fallen out of another disposition path such as a proprietary modification, and the servicer had not yet determined the next step for the borrower. Because the proprietary modification category includes borrowers who are still being evaluated for modifications as well as those who have received them, the number of borrowers who actually received a proprietary modification cannot be determined from Treasury’s data. For example, for the outcomes of borrowers who had a canceled HAMP trial modification, we asked six large MHA servicers to separate borrowers who were being evaluated for permanent proprietary modifications from those who had actually received them. For these same six servicers, while Treasury reported that 43 percent of borrowers who were canceled from a HAMP trial modification through August 2010 were in the process of obtaining a proprietary modification, the data we received indicated that 18 percent of these borrowers had received permanent proprietary modifications and 23 percent were in the process of being approved for one. By combining borrowers who received permanent proprietary modifications with borrowers who were still in the process of getting one, Treasury may not fully understand the extent to which servicers are providing permanent assistance to borrowers who are denied or canceled from HAMP trial modifications. While Treasury has taken steps to collect data on the outcomes of borrowers who do not receive a HAMP trial or permanent modification—data that could be used to assess the extent to which these borrowers are being helped by other loss mitigation programs—the way in which Treasury has asked servicers to report these data overstates the proportion of certain outcomes and understates others, such as starting foreclosure proceedings. In addition, Treasury’s reporting does not differentiate between those who have received a proprietary modification and those who are being evaluated for one. If the information presented in the monthly servicer performance reports does not fully reflect the outcomes of these borrowers, Treasury and the public will not have a complete picture of their outcomes. Further, Treasury cannot determine the extent to which servicers provided alternative loss mitigation programs to borrowers denied or canceled from HAMP or evaluate the need for further action to assist this group of borrowers. We requested data from six servicers on the outcomes of borrowers who (1) were denied a HAMP trial modification, (2) had a canceled HAMP trial modification, or (3) redefaulted from a HAMP permanent modification. According to the data we received, of the about 1.9 million GSE and non-GSE borrowers who were evaluated for a HAMP modification by these servicers as of August 31, 2010, 38 percent (713,038) had been denied a HAMP trial modification; 27 percent (505,606) had seen their HAMP trial modifications canceled; and 1 percent (20,561) had redefaulted from a HAMP permanent modification.
We requested that the servicers report all of the outcomes borrowers had received and that they separate borrowers who were being evaluated for an outcome from those who had received it. According to the data we received, borrowers experienced different outcomes depending on whether they were denied a HAMP trial modification, received but were canceled from a trial modification, or redefaulted from a permanent modification. According to these servicers’ data through August 31, 2010, borrowers who were denied HAMP trial modifications were more likely to become current on their mortgages without any additional help from the servicer (39 percent) than to have any other outcome (see fig. 5). According to one servicer, borrowers who were denied a HAMP trial modification were often current when they applied for a HAMP modification and, once denied, were likely to remain current. In addition, 9 percent of these borrowers paid off their loans. Twenty-eight percent of borrowers who had been denied trial modifications received or were in the process of receiving a permanent proprietary modification or a payment plan. Servicers initiated foreclosure proceedings on 17 percent of these borrowers at some point after the denial, while only 3 percent of borrowers completed foreclosure. Several servicers explained that loss mitigation efforts can often work in tandem, so a borrower could be referred for foreclosure and evaluated for another outcome at the same time, and borrowers who were referred for foreclosure may not necessarily complete it. For borrowers who were canceled from a HAMP trial modification, servicers often initiated actions that could result in the borrower retaining the home. Specifically, 41 percent of these borrowers received or were in the process of receiving a permanent proprietary modification, and 16 percent received or were in the process of receiving a payment plan (see fig. 6). However, servicers started foreclosure proceedings on 27 percent of borrowers at some point after the HAMP trial modification was canceled, although, similar to borrowers who were denied a HAMP trial modification during this time period, a small percentage completed foreclosure (4 percent). Compared with borrowers who were denied, borrowers who had a HAMP trial modification canceled were less likely to become current on their mortgages (15 percent) or to pay off their loans (4 percent). There were wide ranges in the outcomes among the servicers we contacted for borrowers who were canceled from HAMP trial modifications (see table 1). For example, of those borrowers who had a canceled HAMP trial modification, one servicer reported that 26 percent had obtained a proprietary modification through August 31, 2010, compared with 14 percent for another servicer. In addition, for borrowers who had a canceled HAMP trial modification, one servicer reported a foreclosure completion rate of almost 7 percent, while another servicer reported a foreclosure completion rate of roughly 1 percent. Servicers reported a wide range of outcomes, which depend on factors such as the composition of loan portfolios and proprietary loss mitigation programs, including modifications, payment plans, and short sales. These programs can differ in design and may have, among other things, different eligibility requirements for borrowers.
Finally, of the borrowers who redefaulted from a HAMP permanent modification, almost half were reflected in categories other than proprietary modification, payment plan, becoming current, foreclosure alternative, foreclosure, or loan payoff (see fig. 7). Twenty-eight percent of borrowers who redefaulted from permanent modifications were referred for foreclosure at some point after redefaulting, but, as with borrowers denied or canceled from a HAMP trial modification, the percentage of borrowers who completed foreclosure remained low relative to other outcomes (less than 1 percent). Borrowers who redefaulted were also less likely than those who were denied or canceled to receive or be in the process of receiving a permanent proprietary modification or payment plan after redefaulting, with 27 percent of borrowers receiving or in the process of receiving one of these outcomes. In addition, less than 1 percent of borrowers who redefaulted had become current as of August 31, 2010. As noted above, servicers reported that many borrowers who were denied, canceled, or redefaulted from HAMP had received or were being evaluated for proprietary modifications. According to HOPE NOW, servicers completed over 1.2 million proprietary modifications from January 2010 through December 2010, compared with roughly 513,000 permanent HAMP modifications (see fig. 8). In designing the HAMP program, Treasury stated that it had to balance the needs of taxpayers, investors, and borrowers and develop a program that would ensure consistent and equitable treatment of borrowers by multiple servicers. In contrast, servicers told us they had greater flexibility with respect to the types of borrowers and conditions under which they could offer proprietary modifications. First, several servicers told us their proprietary modification programs had fewer documentation requirements. According to HAMP guidelines, borrowers must submit all required documentation in order to be evaluated for and offered a HAMP modification, including a Request for Modification and Affidavit, a tax form, documentation to support income, and a Dodd-Frank Certification form. While Treasury has taken steps to streamline documentation requirements in the past, both Treasury and servicers acknowledge that borrowers’ failure to submit required documentation was one of the primary reasons for being denied or canceled from a HAMP trial modification. However, a servicer can offer a proprietary modification even if the borrower lacks all of the required documentation. For example, one servicer told us that if a borrower who was required to submit 10 documents for a proprietary modification submitted only 6, the servicer could still offer a modification if those 6 documents provided sufficient information. Second, several servicers told us they were able to offer more proprietary modifications than HAMP modifications, or to help borrowers whom HAMP cannot, because their proprietary modifications had fewer eligibility requirements, such as restrictions on occupancy type. Treasury announced early on that the HAMP program was not designed to help all borrowers, such as those with investment properties and second homes. For a borrower to be eligible for a modification under HAMP, the property must be owner occupied, and according to Treasury’s HAMP data, through September 2010, servicers denied roughly 63,000 HAMP applicants (7 percent) whom they said failed to meet this requirement.
But all six servicers who provided us with information offered proprietary modification programs without this restriction, allowing them to reach borrowers who were ineligible for HAMP. One servicer we spoke with noted that it had a large portfolio of investment properties that do not meet the eligibility requirements for a HAMP modification. In addition, while HAMP guidelines require borrowers to have a front-end DTI above 31 percent, all of the servicers we spoke with indicated their proprietary modification programs also served borrowers who had front-end DTIs below 31 percent. The servicers explained that even with low DTIs many of these borrowers were still unable to make their mortgage payments because they had high levels of back-end debt, such as credit card balances and car loans. We previously reported that HAMP requires borrowers with high total household debt levels (postmodification DTI ratios greater than 55 percent) to agree to obtain counseling, but it does not require documentation that they actually received this counseling. We continue to believe that it is important that Treasury determine whether borrowers are receiving this counseling and whether the counseling requirement is having its intended effect of limiting redefaults, as we recommended. When asked about the differences between effective proprietary modifications and HAMP modifications, roughly 63 percent of housing counselors who responded to this question on our Web-based survey ranked the ability of proprietary modifications to reach borrowers with DTIs less than 31 percent as one of the main differences. According to Treasury’s HAMP data, through September 2010, roughly 215,000 borrowers (24 percent) who were denied HAMP were denied because they had a front-end DTI of less than 31 percent. Almost all of the servicers we received information from indicated that the eligibility requirements for their proprietary modification programs allowed mortgage balances that exceeded HAMP limits. (For a one-unit property, the unpaid principal balance limit to be eligible for the HAMP program is $729,750; for a two-unit property, $934,200; for a three-unit property, $1,129,250; and for a four-unit property, $1,403,400.) For example, one servicer noted that part of its portfolio comprised super-jumbo loans, many of which fell outside the HAMP mortgage balance limits. Roughly 106,000 borrowers (12 percent) who were denied HAMP trial modifications through September 2010 were denied because of ineligible mortgages. Fifty-two percent of housing counselors also identified higher mortgage balance limits as another key difference between proprietary modifications and HAMP modifications. Third, several servicers’ proprietary modification programs could target front-end DTI ratios other than HAMP’s fixed 31 percent, allowing servicers to bring a borrower’s payment down to a more affordable level for some borrowers. In addition, for a servicer to be required to offer a borrower a HAMP modification, HAMP requires the borrower to pass the NPV test with a front-end DTI ratio of 31 percent. However, some borrowers may fail the test at this level but would be able to pass with a higher DTI ratio—for example, at 38 percent. These borrowers may not be able to receive a HAMP modification, even though a DTI ratio of 38 percent may have been more affordable than their current mortgage payment. Some borrowers who are denied a HAMP modification due to a negative NPV result but have a positive NPV result with a higher front-end DTI may be offered a proprietary modification. For example, one servicer plans to use variable front-end DTI thresholds to bring borrowers’ DTI ratios into more affordable ranges.
Under this approach, the servicer will evaluate borrowers with front-end DTI ratios greater than 31 percent at thresholds of 31 percent, 35 percent, and 38 percent, and borrowers with front-end DTI ratios of less than 31 percent could be brought down to a DTI as low as 24 percent if they pass the NPV test at that level. The servicer estimates that of 3,370 borrowers who were denied a HAMP trial modification because their front-end DTI was already below 31 percent or as a result of a negative NPV, 2,415 would pass the NPV test using the flexible front-end DTI ratio thresholds and could receive a proprietary modification. In addition, having the flexibility to bring borrowers’ front-end DTI ratios to below 31 percent allows servicers to account for borrowers’ back-end DTI ratios when offering proprietary modifications. Several of the servicers we spoke with had proprietary modification programs that considered borrowers’ overall affordability, or ability to pay, when modifying a mortgage, and the servicers calculated affordability differently. For example, one servicer addressed overall affordability by using a net spendable income calculation to determine a borrower’s monthly mortgage payment. According to the servicer, its net spendable income calculation factors in all of the borrower’s income and deducts all expenses, including credit cards and utility bills. This proprietary modification program was designed to leave the borrower with approximately 10 percent of net spendable income, with a minimum of $250 and a maximum of $1,000. Another servicer reported using family size to determine affordability. The servicer indicated that it calculated borrowers’ monthly payments based on the nature of the borrowers’ hardship, their current financial situation, and their change in circumstances, as well as a postmodification monthly net disposable income of $600 and an additional $100 per dependent. By incorporating family size, this proprietary modification program may be able to help some borrowers who would otherwise not qualify for HAMP. Because servicers had a variety of proprietary modification programs that calculated affordability in a number of ways, and because their loan portfolios differed, the changes in mortgage terms as a result of proprietary modifications varied across servicers. According to data we received from six servicers, roughly 655,000 borrowers had permanent proprietary modifications as of August 31, 2010. These borrowers had their interest rates reduced by an average of 2.35 to 3.87 percentage points, depending on the servicer. The amount of term extension also varied by servicer: servicers extended mortgage terms by an average of 87 to 178 months for borrowers who had permanent proprietary modifications. Lastly, servicers forbore varying amounts of principal, ranging from an average of $33,971 to $116,488, or 16 percent to 60 percent of the unpaid principal balance prior to modification. While the number of proprietary modifications has outpaced the number of HAMP modifications, the sustainability of both types of modifications is still unclear. HAMP redefault rates have been relatively low to date, but it is likely too soon to draw conclusions about HAMP redefaults.
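As a rough illustration of the tiered-threshold logic this servicer describes, consider the following sketch. The function names and the passes_npv stand-in are our own assumptions (the servicer’s actual NPV model is not public), and the target payment at each tier is simply the front-end DTI threshold multiplied by gross monthly income:

```python
def target_payment(gross_monthly_income, front_end_dti):
    """Monthly housing payment implied by a front-end DTI threshold."""
    return round(front_end_dti * gross_monthly_income, 2)

def evaluate_tiers(gross_monthly_income, passes_npv):
    """Test successively higher front-end DTI tiers (31, 35, then 38
    percent) and return the first target payment that passes the NPV
    test; passes_npv is a stand-in for the servicer's NPV model."""
    for dti in (0.31, 0.35, 0.38):
        payment = target_payment(gross_monthly_income, dti)
        if passes_npv(payment):
            return payment
    return None  # no tier passed; this modification is not offered

# Example: a borrower grossing $4,000 a month is tested at target
# payments of $1,240, $1,400, and $1,520 in turn.
print(evaluate_tiers(4000, lambda payment: payment >= 1400))  # -> 1400.0
```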
While data on the redefault rates of HAMP and proprietary modifications are limited, the Office of the Comptroller of the Currency (OCC) and Office of Thrift Supervision (OTS) reported that 11 percent of HAMP modifications and 22 percent of proprietary modifications that started in the fourth quarter of 2009 were 60 or more days delinquent after 6 months. In addition, one servicer reported that the redefault rates for its proprietary modifications were 26 percent at 6 months and roughly 40 percent at 12 months after the loan was modified, while another servicer reported redefault rates of 32 percent at 6 months and 51 percent at 12 months. Proprietary modifications may not reduce monthly mortgage payments as much as HAMP modifications, potentially affecting the ability of borrowers to maintain their modified payments. According to OCC and OTS, during the third quarter of 2010, proprietary modifications reduced monthly mortgage payments by an average of $332 per month, while HAMP modifications reduced them by an average of $585 per month. According to our analysis of Treasury’s HAMP data, borrowers who had a GSE or non-GSE HAMP permanent modification as of September 30, 2010, had their payments reduced by an average of $632, or 33 percent of the average payment before modification. According to the data we received from six servicers, for GSE and non-GSE loans, borrowers with a permanent proprietary modification as of August 31, 2010, had their monthly mortgage payments reduced by an average of $100 to $691 per month, or 7 to 30 percent of the average monthly payment before modification, depending on the servicer. In response to our survey, housing counselors provided several examples of borrowers who had received proprietary modifications that did not substantially reduce monthly mortgage payments and that, in some cases, increased payments. The extent to which modifications reduce monthly mortgage payments may correlate with the ability of borrowers to maintain modified payments. Specifically, OCC and OTS reported that modifications made in 2010 that reduced monthly mortgage payments by 20 percent or more resulted in a redefault rate of 12 percent 6 months after modification, compared with 28 percent for modifications that reduced payments by 10 percent or less. However, servicers have told us their proprietary modification programs can serve borrowers with front-end DTIs below 31 percent—borrowers who would be ineligible for a HAMP modification. As a result, the average percentage monthly reduction for these borrowers may not be as high as it would be for those with a HAMP modification, because their premodification front-end DTI ratios were lower than those of borrowers who received a HAMP modification. Going forward, it will be important for Treasury to monitor redefault rates and understand how they differ across servicers and modification terms. We will also be looking at the redefault rates of HAMP and non-HAMP modifications, as well as the effectiveness of other foreclosure mitigation efforts, as part of our ongoing work on the broader federal response to the foreclosure crisis. HAMP and the newer MHA programs were part of an unprecedented response to a particularly difficult time for our nation’s mortgage markets. However, 2 years after Treasury first announced that it would use $50 billion in TARP funds for various programs intended to preserve homeownership and protect home values, foreclosure rates remain at historically high levels.
While Treasury originally estimated that 3 to 4 million people would be helped by these programs, only 550,000 borrowers had received permanent HAMP first-lien modifications as of November 30, 2010, and the number of borrowers starting trial modifications has been declining rapidly since October 2009. Moreover, Treasury has experienced challenges in implementing its other TARP-funded housing initiatives. In particular, the 2MP program, which Treasury has stated is needed to create a comprehensive solution for borrowers struggling to make their mortgage payments, has had a slow start. According to six large MHA servicers, they have faced difficulties—matching errors and omissions—in using the database required for identifying second liens eligible for modification under the program. As a result, servicers told us that relatively few second liens had been modified as of August 2010, a year after program guidelines were first issued. Treasury has taken some steps to address the issues that have slowed implementation of the program, but more could be done to inform potentially eligible borrowers about 2MP. Specifically, borrowers whose second liens may be eligible for modification under 2MP may not be aware of the program or of any errors in the matching process, as servicers are not required to inform borrowers receiving HAMP first-lien modifications that they could also be eligible for 2MP. Consequently, missed matches of first and second liens could go undetected, and some borrowers who were eligible for but not helped by the program are less able to keep up the payments on their first-lien HAMP modifications. HAFA and PRA, two other key components of Treasury’s TARP-funded homeownership preservation effort, have also had slow starts. In fact, servicers we spoke with did not expect HAFA to increase the overall number of short sales performed, primarily because of extensive program requirements that lengthen the time frames associated with a short sale under the program. While Treasury has recently revised its HAFA program requirements to allow servicers to bypass the HAMP first-lien program eligibility review for borrowers interested solely in participating in HAFA and has relaxed other HAFA program requirements, the extent to which these changes will result in greater program participation remains unclear. Additionally, because of the voluntary nature of the PRA program and concerns over the lack of program transparency, including the level of public reporting that will be available at the servicer level, it remains unclear how many borrowers will receive principal reductions under PRA. Treasury has stated that it will report on PRA activity when data are available, and we continue to believe that it will be important for this reporting to include the extent to which servicers determined that principal reduction was beneficial to investors but did not offer it, as we recommended in June 2010. If HAFA and PRA do not result in increased program participation, Treasury’s efforts to combat the negative effects associated with avoidable foreclosures will be compromised, potentially limiting the ability of these efforts to preserve homeownership and protect home values. Further, Treasury could do more to apply lessons learned from its experience in implementing early HAMP programs to its more recent initiatives.
We reported in June 2010 that the implementation of other TARP-funded homeownership preservation programs could benefit from lessons learned in the initial stages of HAMP implementation. Specifically, we noted that it would be important for Treasury to expeditiously develop and implement new programs while also developing sufficient program planning and implementation capacity, meaningful performance measures and remedies, and appropriate risk assessments in accordance with standards for effective program management. Already, 2MP, HAFA, and PRA have undergone several revisions, and servicers cited changing guidelines and short implementation periods as significant challenges in fully implementing the programs. In July 2009, we recommended that Treasury place a high priority on fully staffing vacancies in the Homeownership Preservation Office (HPO) and evaluating staffing levels and competencies. As of January 2011, Treasury had filled key positions in HPO but had not conducted a formal assessment of its staffing levels despite the implementation of the newer programs. We continue to believe that it is essential that Treasury ensure that it has enough staff with the appropriate skills to govern TARP-funded housing programs effectively. While Treasury has conducted reviews of the readiness of servicers participating in 2MP, HAFA, and PRA to successfully implement the programs, a large majority of servicers did not provide all of the documentation required to demonstrate that the key tasks needed to support these programs were in place. It is imperative that Treasury take swift action to ensure that servicers have the ability to implement these programs since, as we have seen with the slow progress of the HAMP first-lien modification program, the success of these TARP-funded initiatives will be driven largely by the capacity and willingness of servicers to implement them in an expeditious and effective manner. In addition, Treasury has not developed program-specific performance measures against which to measure these programs’ success and has not specified the remedies it will take if servicers do not meet performance standards. Without specific program measures and remedies, Treasury will not be able to effectively assess the outcomes of these programs and hold servicers accountable for performance goals. We continue to believe that it is important for Treasury to develop such performance measures and clear goals for them, as we have recommended. Treasury requires servicers to submit data on borrowers who have been evaluated for HAMP, and these data provide important information and insights on the characteristics of borrowers who are in trial and permanent HAMP modifications, who have been canceled from trial modifications, and who have redefaulted from permanent modifications. However, Treasury’s HAMP database also contains inaccurate or missing information on certain key variables, including LTV ratios and borrowers’ race and ethnicity. Treasury has stated that it is working to improve the quality of its data, and it will be important that the agency do so expeditiously. Complete and accurate information is important for Treasury to fully understand the characteristics of borrowers whom HAMP has been unable to help and to determine program compliance. Moreover, this information is important for identifying what additional steps or adjustments could be made to existing TARP-funded programs to better achieve the mandated goals of preserving homeownership and protecting property values.
Finally, while Treasury has begun publicly reporting the outcomes for borrowers who have been denied or canceled from HAMP trial modifications, its reporting practices make it difficult to determine the extent to which these borrowers are helped by non-HAMP (proprietary) loan modifications. For example, data we collected from six large MHA servicers showed that only 18 percent of borrowers canceled from a HAMP trial modification had received a proprietary modification and an additional 23 percent had a proprietary modification pending. However, Treasury reported that 43 percent of these borrowers were in the process of receiving a proprietary modification with those same six servicers. Furthermore, Treasury’s system for reporting outcomes requires servicers to place borrowers in only one category even when borrowers are being evaluated for several possible outcomes, with proprietary modifications reported first. As a result, the proportion of borrowers with proprietary modifications is likely overstated relative to other possible outcomes, such as foreclosure starts. Without accurate reporting of borrower outcomes, Treasury cannot know the actual extent to which borrowers who are denied, canceled, or redefaulted from HAMP are helped by other programs or evaluate the need for further action to assist this group of homeowners. As part of its efforts to continue improving the transparency and accountability of MHA, we recommend that the Secretary of the Treasury take actions to

require servicers to advise borrowers to notify their second-lien servicers once a first lien has been modified under HAMP, to reduce the risk that borrowers with modified first liens are not captured in the LPS matching database and, therefore, are not offered second-lien modifications;

ensure that servicers demonstrate they have the operational capacity and infrastructure in place to successfully implement the requirements of the 2MP, HAFA, and PRA programs; and

consider methods for better capturing outcomes for borrowers who are denied, canceled, or redefaulted from HAMP, including more accurately reflecting which actions are completed or pending and allowing for the reporting of multiple concurrent outcomes, in order to determine whether borrowers are receiving effective assistance outside of HAMP and whether additional actions may be needed to assist them.

We provided a draft of this report to Treasury for its review and comment, and we received written comments from the Acting Assistant Secretary for Financial Stability that are reprinted in appendix III. We also received technical comments from Treasury that we incorporated into the report as appropriate. In its written comments, Treasury stated that it appreciated our efforts in assessing the housing programs initiated under its TARP program and acknowledged the draft report’s description of the operational capacity and infrastructure challenges faced by servicers in implementing Treasury’s housing programs. In addition, Treasury noted that our research on proprietary modifications made by servicers outside of MHA was useful. However, Treasury stated that it believed that the draft report raised certain criticisms regarding the design and implementation of MHA that were unwarranted.
First, Treasury stated that the draft report criticized Treasury for the number of changes made to its housing programs following their implementation and for its alleged failure to incorporate the lessons learned from the first-lien HAMP program into the rollout and design of other MHA programs, such as HAFA. Treasury stated that the report should acknowledge the circumstances under which the programs were first implemented. In response, we added language recognizing that HAMP and the newer MHA programs were part of an unprecedented response to a particularly difficult time for our nation’s mortgage markets. However, servicers we spoke with noted that ongoing changes to guidelines have presented challenges, such as needing to update internal servicing systems and retrain staff, which in some cases delayed program implementation. In addition, as noted in the draft report, Treasury has repeated, in its implementation of its newer MHA programs, some of the practices that were the focus of previous recommendations we had made for the first-lien program. For example, in our July 2009 report, we found that Treasury had not developed a means of systematically assessing servicers’ capacity to meet program requirements during program admission, and we recommended further action in this area to increase the likelihood of success of the program. In our review of the newer MHA programs, we also found that Treasury had not fully ensured that servicers had the capacity to successfully implement these programs. We continue to believe that such action is needed to better ensure the success of these newer MHA programs. Second, Treasury raised concerns about the draft report’s comparison of HAMP modifications to proprietary modifications. Treasury noted that it did not believe it was constructive to assess HAMP’s performance based on the goals of proprietary programs that are not government supported. We have added language to provide additional context for the report’s discussion of proprietary modifications. The purpose of this report was not to assess the performance of HAMP modifications based on the goals of proprietary modifications. Instead, the draft report provided a description of proprietary modifications and some of the ways in which they differ from HAMP modifications. It does not suggest that the objectives of HAMP modifications and proprietary modifications are or should be the same, particularly given Treasury’s responsibility to safeguard taxpayer dollars under HAMP. As noted by Treasury in its comment letter, there is little available information about these proprietary modifications, and the more that is known about their terms and outcomes, the easier it will be for policymakers and regulators to craft appropriate changes to MHA and other housing programs aimed at preventing avoidable foreclosures. Third, Treasury noted that the draft report criticized the completeness and quality of the data collected by Treasury related to HAMP modifications and that it disagreed with the conclusion that missing or inaccurate information limits Treasury’s ability to identify program gaps. Treasury noted that it relies on data provided by borrowers to the servicers and that the data have improved significantly over the past 6 months, especially as the program moved to verified income. Treasury stated that the data on permanent modifications are robust, allowing Treasury to determine gaps in programs and how to make improvements.
In the draft report, we acknowledged that Treasury is working with Fannie Mae to improve the data and that, particularly with respect to borrower race and ethnicity information, the data have improved over time. However, it is equally important that Treasury obtain complete and accurate information on those who are denied or canceled from a HAMP trial modification. Without such information, Treasury cannot determine whether servicers are implementing the program fairly or whether additional actions may be necessary to address the needs of borrowers who are denied or canceled from HAMP trial modifications. Going forward, it will be important for Treasury to continue to improve the quality of its HAMP data, as this information is important for identifying what additional steps or adjustments could be made to existing TARP-funded housing programs to better achieve the mandated goals of preserving homeownership and protecting property values. We are sending copies of this report to interested congressional committees and members of the Congressional Oversight Panel, Financial Stability Oversight Board, Special Inspector General for TARP, Treasury, the federal banking regulators, and others. This report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To examine the status of the Department of the Treasury’s (Treasury) second-lien modification, principal reduction, and foreclosure alternatives programs and the design and implementation challenges Treasury and servicers have faced with these programs to date, we spoke with and obtained information from six large Making Home Affordable (MHA) servicers, including the four largest servicers participating in the Second-Lien Modification Program (2MP) at the start of our review. These six servicers were American Home Mortgage Servicing, Inc.; Bank of America; CitiMortgage; JP Morgan Chase Bank; OneWest Bank; and Wells Fargo Bank. We identified these six servicers as large MHA servicers based on the amount of Troubled Asset Relief Program (TARP) funds they were allocated for loan modification programs; collectively, they represented 72 percent of the TARP funds allocated to participating servicers. For each of these six servicers, we reviewed their 2MP, Home Affordable Foreclosure Alternatives (HAFA), and Principal Reduction Alternative (PRA) guidance, policies, procedures, process flows, training materials, and risk assessments, as applicable, and interviewed management staff. We also reviewed 2MP, HAFA, and PRA documentation issued by Treasury, including the supplemental directives related to 2MP, HAFA, and PRA, readiness assessments of servicers, and reporting process flows. We also spoke with officials at Treasury to understand the challenges faced in implementing these programs and the steps taken by Treasury to assess the capacity needs and risks of these programs, as well as steps taken to measure the programs’ success. We spoke with trade associations representing investors, mortgage insurers, and servicers, and with an organization representing homeowners and community advocates.
Finally, we reviewed the Standards for Internal Control in the Federal Government to determine the key elements needed to ensure program stability and adequate program management. To examine the characteristics of homeowners whom the Home Affordable Modification Program (HAMP) has been able to help, we obtained and analyzed Treasury’s HAMP data in its system of record, Investor Reporting/2 (IR/2), through September 30, 2010. We reviewed Treasury guidelines on reporting requirements for HAMP, including the information servicers are required to report for borrowers who were denied trial modifications, and spoke with Treasury and Fannie Mae officials to understand potential inconsistencies and gaps in the data. We determined that the data were sufficiently reliable for our purposes. We also used the data to perform an econometric analysis of factors that contribute to borrowers’ likelihood of seeing their trial modifications canceled (see appendix II for more details). We received and incorporated feedback on our model from Treasury and others. To obtain housing counselors’ views of borrowers’ experiences with HAMP, we conducted a Web-based survey of housing counselors who received funding from NeighborWorks America, a national nonprofit organization created by Congress to provide foreclosure prevention and other community revitalization assistance to the more than 230 community-based organizations in its network. We received complete responses from 396 counselors. This report does not contain all of the results from the survey; the survey and a more complete tabulation of the results will be part of an upcoming report. Finally, to examine the outcomes for borrowers who were denied or fell out of HAMP trial or permanent modifications, we reviewed HAMP program documentation on policies for evaluating these borrowers for other loss mitigation programs. We reviewed the outcomes of borrowers who applied for but did not receive a HAMP trial modification or who had a canceled HAMP trial modification, as reported by Treasury in the monthly MHA servicer performance reports. We obtained documentation from Treasury and interviewed servicers to understand how Treasury collects data on the outcomes of these borrowers. In addition, we obtained data from the six large MHA servicers noted earlier in this appendix. Specifically, we obtained and analyzed data on the outcomes of all borrowers who were denied a HAMP trial modification, had a canceled HAMP trial modification, or redefaulted on a HAMP permanent modification; the number of proprietary modifications completed; and the characteristics of the terms of these proprietary modifications. The servicers provided data covering the period between when they began participating in the HAMP program and August 31, 2010, or the date on which they submitted their August 2010 reporting to Treasury (e.g., September 6, 2010). According to the data we received, the number of trial modifications offered by these six servicers represented 72 percent of the total number of trial modifications offered by all servicers as reported by Treasury through September 2, 2010. We determined that these data were reliable for the purposes of our report. To understand why servicers may offer more proprietary modifications than HAMP modifications, we reviewed data on the number of completed proprietary modifications published by HOPE NOW, an alliance among counselors, mortgage companies, investors, and other mortgage market participants.
In addition, we reviewed documentation on the terms and eligibility requirements of the proprietary modification programs offered by the six servicers participating in our review, and we interviewed these servicers about the features of their proprietary modification programs. Also, through our Web-based survey of housing counselors, we received responses on the differences between effective proprietary modifications and HAMP modifications, as well as examples of effective and ineffective proprietary modifications. Finally, to understand the sustainability of HAMP and proprietary modifications, we reviewed data published by the Office of the Comptroller of the Currency and the Office of Thrift Supervision on the redefault rates and monthly payment reductions of HAMP modifications, as well as data we collected from servicers on the redefault rates, terms, and monthly payment reductions of their GSE and non-GSE proprietary modifications. We conducted this performance audit from July 2010 through March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on the audit objectives. To describe the characteristics of the borrowers and mortgages that have been canceled from trial modifications, we used an econometric analysis rather than presenting descriptive statistics because it allowed us to control for the impacts of potential confounding factors, including differences across servicers as well as default-risk differences among borrowers that are not observable (unobserved borrower heterogeneity). Servicers that participate in the Home Affordable Modification Program (HAMP) are required to provide periodic loan-level data to Fannie Mae in its capacity as the administrator for the HAMP program. Specifically, servicers are required to report data at the start of the trial modification period and during the trial period, for loan setup of the approved modification, and monthly after the modification is set up on Fannie Mae’s system. The data used in our econometric analysis are for HAMP loans as of September 30, 2010. The data have one record per loan, with information on the loan status—whether the loan was denied for trial modification, has entered the trial plan, and its outcome (i.e., converted to permanent modification, still active in the trial plan, or fallen out). We excluded loans that entered the trial plan from July 2010 through September 2010 (the end of the current data set) because not enough time had likely elapsed for loans in this pool to have defaulted or been canceled. Through September 30, 2010, many servicers had signed up for HAMP. We categorized 17 of them as “large” based on Treasury-reported data on “estimated eligible loans 60 days or more delinquent”; these servicers have over 90 percent of the loans, and the remaining servicers were grouped into the “other” category. For the universe of 1,361,832 loans that had entered the trial period plan as of September 30, 2010, the average cancellation rate was 50 percent. The sample we used for the regression analysis, based on data availability, consists of 727,095 loans (53 percent of the universe), with an average cancellation rate of 50 percent.
The sample data exclude servicers whose share of loans or fallout rates differed substantially from those of the universe; the sample covers 13 “large” servicers as well as the “other” group.

Following the literature, we used a reduced-form probability model to help determine the effects of the characteristics of the borrower and mortgage on HAMP trial loan cancellations. Accordingly, based on economic reasoning, data availability, the HAMP guidelines, and previous studies on loan performance, we used probabilistic models to estimate the likelihood that a loan modified under the trial period plan does not convert to a permanent modification. The dependent variable for cancellations is binary: it equals 1 if a loan that entered the trial-period plan did not convert to a permanent modification and 0 otherwise. The explanatory variables that we included in the model are conditioned by the available data (see table 3 for the list of the variables). We estimated cancellation rates using binomial logistic probability (logit) models, an approach commonly used in economics to examine choices and evaluate various events or outcomes; a notational sketch of this specification follows the key findings below. The models included fixed effects for the servicers, which allowed us to account for both the observed and unobserved characteristics of the servicers. We also included state-level fixed effects to control for factors that vary across the states but are the same within a state, such as the type of foreclosure laws and other state policies on mortgages.

The basic regression results from using the probability model described above and the data in table 3 are presented in table 4. Most of the variables were statistically significant at the 5 percent level or better and typically consistent with our expectations as to the direction of their impacts. We discuss below the key findings, based on statistically (and economically) significant changes in the likelihood of cancellation, using the estimated marginal effects of the explanatory variables.

Stated income. Loans that entered the trial plan using stated income documentation were 52 percent more likely to be canceled than loans using verified income. This effect was consistent with expectations, since these borrowers are likely unable to provide verified documentation when requested.

Trial length. Trial periods that lasted 4 months or less were about 58 percent more likely to be canceled than those that stayed in the trial plan for a longer term. A longer stay in the trial plan implies the borrower’s payments are current and the loan, therefore, is less likely to be canceled. This result is generally consistent with hazard models of mortgage performance, which indicate that loans that are likely to default do so earlier rather than later.

Delinquency status. Borrowers who were 60 days or 90 days or more past due on their mortgages before the trial-period plan, compared with borrowers who were current, were 6 and 9 percent more likely to have their loans canceled, respectively; thus, the longer the delinquency, the more likely the cancellation. This effect is consistent with expectations.

Payment reduction. The reduction in payment generally results from interest rate reduction and extension of the loan term. Loans that received reductions in payments (of principal and interest) of more than 10 percent were 5 percent less likely to be canceled than loans with reductions of 10 percent or less (which include no reductions and increases in payments). This result is expected, since the payment reductions increase the affordability of the mortgage, a key objective of HAMP.
MLTV ratios. Loans with an MLTV between 120 and 140 percent were 7 percent less likely to be canceled, and loans with an MLTV of more than 140 percent were 8 percent less likely to be canceled, compared with those with an MLTV of 80 percent or less. This effect is contrary to expectation. The reason for this outcome is that while borrowers with high MLTVs were more likely to have their trial modifications canceled for not making their payments, they were disproportionately less likely to have their trial modifications canceled because of insufficient documentation, compared with those with MLTVs at or below 80 percent.

Principal reductions. Loans that received principal reductions in the form of principal forgiveness of between 1 and 50 percent of their total loan balance were 6 percent less likely to be canceled than those that did not receive principal forgiveness. We note that only about 2 percent of the loans received principal forgiveness.

Servicer effects. We estimated the changes in the likelihood of cancellation for the servicers using the marginal effects in table 4. To examine the extent of variation in the likelihood of cancellation across servicers, we defined three distinct borrower profiles and calculated the likelihood of cancellation for each profile for each servicer. The “typical” borrower profile used mean values for the borrower population; the “current” borrower profile used mean values for all characteristics except that the borrower was assumed to be current (less than 30 days delinquent); and the “delinquent” profile used mean values for all characteristics except that the borrower was assumed to be delinquent by 90 days or more. Because delinquency status predicts a higher likelihood of cancellation for borrowers who are seriously delinquent (90 days or more delinquent) than for those who are current (less than 30 days delinquent), the likelihood of cancellation increases with increased delinquency for each servicer. The results presented in figure 9 show significant variation across the servicers for cancellations of trial modifications. In particular, for the large servicers, the likelihood of cancellation for the “typical” borrower increased for about one-half of them (ranging from 1 to 24 percent) but decreased for the other half (ranging from -2 to -39 percent). Although the major reasons for the cancellations varied across the servicers, they were primarily incomplete documentation, trial plan default, and ineligible mortgages.

State-level effects. For the state-level effects, we estimated the change in the likelihood of trial cancellations across the states using the marginal effects in table 4, similar to the analysis of the servicer effects. The results presented in figure 10 show that the changes in the likelihood of cancellation are higher in most of the states, including high mortgage foreclosure states such as Arizona, California, Florida, Michigan, and Nevada, which together have over 40 percent of the trial loans.
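To make the model specification concrete, the following is a minimal notational sketch; the notation is ours (not drawn from the report’s tables), the explanatory variables are those listed in table 3, and the profile calculations mirror the servicer-effects analysis above.

% Minimal notational sketch of the trial-cancellation logit described
% above; the notation is ours, with variable definitions as in table 3.
\[
  \Pr(\text{cancel}_i = 1 \mid x_i)
    = \Lambda\bigl(x_i'\beta + \alpha_{s(i)} + \gamma_{st(i)}\bigr),
  \qquad
  \Lambda(z) = \frac{e^{z}}{1 + e^{z}},
\]
% where x_i holds the borrower and loan characteristics, and alpha and
% gamma are the servicer and state fixed effects, respectively. The
% marginal effect of a binary characteristic x_k is the difference in
% predicted probabilities with x_k switched on and off, holding the
% other characteristics at their sample means:
\[
  \mathit{ME}_k
    = \Lambda\bigl(\bar{x}_{(x_k=1)}'\hat{\beta}\bigr)
    - \Lambda\bigl(\bar{x}_{(x_k=0)}'\hat{\beta}\bigr).
\]
% A borrower profile's likelihood of cancellation (for example, the
% "typical" profile) is simply Lambda evaluated at that profile's values.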
Several checks were conducted to ensure that our results are reliable, including checks of the sample used, the model specification, and the estimation techniques. In all cases, our key results were generally unchanged. Specifically, we excluded the servicer effects and state-level effects, and we included the start time of the trial to account for the housing and economic conditions at the time of the modification. We also estimated robust standard errors to ensure that the tests of significance were reliable. Furthermore, although we could not use the fixed-effects technique to control for unobserved heterogeneity across the borrowers because we do not have repeat observations on the borrowers, we attempted to incorporate unobserved heterogeneity among the borrowers using mixed multinomial logit estimation. This approach is intended to help capture the differential in risk preferences and idiosyncratic differences among the borrowers that are not captured by the explanatory variables in the models we estimated. However, we could not estimate the mass point locations with any precision. Finally, we noted that since loans enter and exit HAMP over time, these results may not necessarily pertain in the future.

1. We acknowledge that Treasury’s HAMP program is part of an unprecedented response to a particularly difficult time in our nation’s mortgage markets. As noted in the report, we also acknowledge that Treasury took steps to consult with servicers on the design and implementation of 2MP, HAFA, and PRA. However, in implementing its newer MHA programs, Treasury has repeated some of the practices that were the focus of previous recommendations we made for the first-lien program. For example, in July 2009, we recommended that Treasury develop a means of systematically assessing servicers’ capacity to meet program requirements during program admission. While Treasury has begun assessing servicers’ capacity to implement 2MP, HAFA, and PRA, it has not ensured that servicers have sufficiently demonstrated that they have the capacity to successfully and expeditiously implement these programs. In addition, we recommended in June 2010 that Treasury finalize and implement benchmarks for performance measures under the first-lien modification program, as well as develop such measures for the newly announced programs. Treasury has not developed these benchmarks, either for the first-lien program or for the subsequent programs, making it difficult to hold servicers accountable for performance and to assess the extent to which they have been successful. The pages referenced in this comment are now pages 20 to 24.

2. Treasury indicated that the draft report criticized it for a lack of a database that includes second liens matched to HAMP-modified first liens. The draft report does not criticize Treasury for the lack of a database. Rather, the report notes that Treasury worked with LPS to develop a database and has taken steps to improve the quality of the data. We also note that servicers reported difficulties with the matching of first and second liens, including concerns about the accuracy and completeness of the data, which contributed to the slow initial implementation of 2MP. As a result of these challenges, servicers had modified relatively few second liens a year after program guidelines were issued. The pages referenced in this comment are now pages 13 and 24.

3. Treasury noted that the draft report suggested that extensive program requirements and unclear guidance were obstacles to the program’s success. The section of the draft report that discussed concerns about extensive program requirements was associated with the implementation challenges of the HAFA program only. The draft report noted that Treasury itself has acknowledged these obstacles and has since revised many of the HAFA program requirements that were identified as contributing to the slow implementation of the program.
With regard to Treasury’s comments about program guidance, we clarified the language in the report to focus on the programs’ changing guidance. Servicers told us that ongoing program revisions presented challenges, such as the need to retrain staff, and in some cases delayed program implementation. The pages referenced in this comment are now pages 15 and 48.

4. Treasury noted that the draft report faulted Treasury for failing to set numerical goals, especially with regard to the new programs. Treasury stated the programs were launched under challenging and unprecedented circumstances, making it extremely difficult to predict how many homeowners would respond to servicer solicitations, provide the requisite documentation, or accept the modification when offered. As we, the Congressional Oversight Panel, and SIGTARP have previously noted, establishing key performance metrics and reporting on individual servicers’ performance with respect to those metrics are critical to transparency and accountability. As such, we continue to believe that it is important that Treasury implement our June 2010 recommendation that it develop measures and benchmarks for its newer MHA programs. Without pre-established performance measures and goals, Treasury will not be able to effectively assess the outcomes of its MHA programs or hold servicers accountable for their performance. The pages referenced in this comment are now pages 12, 23, and 49.

5. Treasury stated that the draft report left the impression that Treasury chose not to collect data on proprietary modifications. In fact, the report notes that Treasury does collect data on the post-HAMP disposition paths. We believe the information that Treasury collects through its eight largest HAMP servicers provides important and useful information to policymakers on the disposition of borrowers denied or canceled from HAMP trial and permanent modifications. However, as noted in the draft report, we believe the way in which the information is collected makes it difficult to understand the outcomes of these borrowers. Without accurate reporting of borrower outcomes, Treasury cannot know the actual extent to which borrowers who are denied, canceled, or redefaulted from HAMP are helped by other programs or evaluate the need for further action to assist this group of homeowners. The pages referenced in this comment are now pages 41 to 45.

6. Treasury commented that the draft report suggested a criticism of HAMP modifications because it notes that proprietary modifications are more flexible and easier to secure than a HAMP modification. Treasury notes that it manages a national, publicly financed program and must balance the interests of taxpayers, investors, and borrowers in designing and implementing the program. We agree. Our observation is not intended to suggest that HAMP adopt the flexibility of proprietary modifications; we are simply describing what is known about proprietary modifications. Moreover, the report notes that the long-term sustainability of both types of modifications is unclear, particularly for proprietary modifications, because these modifications may not reduce the monthly payments of borrowers as much as HAMP modifications have. The pages referenced in this comment are now pages 41 to 45.

7. Treasury stated that it believed that the overall conclusion reached in the draft report, that the gaps in data limit Treasury’s ability to identify program gaps, is inaccurate and misleading.
Treasury noted that some of the examples of missing or inaccurate data are outliers. It also noted that the data on permanent modifications are robust and that the data provided by servicers have improved significantly over the past 6 months. We have added text in the report to acknowledge that Treasury’s data, particularly on the race and ethnicity of borrowers, have improved over time, and that the reporting in the most recent public file represents an improvement over the data we received as of September 30, 2010. We also note that Treasury has worked with Fannie Mae to make improvements to the data. While we acknowledge that progress has been made in the quality and accuracy of the data reported by servicers, we believe that it is critical that Treasury continue to work toward improving the data so that it and policymakers can understand the characteristics both of borrowers who have been helped by HAMP and of those who could not be helped by HAMP. This information will be essential to identifying what additional steps or adjustments could be made to existing TARP-funded programs or other government programs to prevent avoidable foreclosures and to better achieve the goals of preserving homeownership and protecting property values. The pages referenced in this comment are now pages 25 to 40.

8. Treasury stated that it instructed servicers to report borrowers in a single disposition path to avoid double counting of borrowers and, thereby, provide a clear view of the current path the population is following through the progression of potential loss mitigation outcomes. However, this method of data collection can distort the current disposition status of borrowers because borrowers are often “dual-tracked” (e.g., being evaluated for a proprietary modification while also starting the foreclosure process). Reflecting the full range of possible outcomes these borrowers face would improve Treasury’s understanding of the extent to which borrowers are helped by other programs and assist any evaluation of the need for further action to assist this group of homeowners.

9. Treasury stated that it disagreed with the draft report’s conclusion that its programs had not been fully implemented. We revised the language in the report to state more clearly that the implementation of Treasury’s MHA programs had gotten off to a slow start and reiterated that actions needed to be taken by Treasury to better ensure the success of its programs. The page referenced in this comment is now page 47.

In addition to the contacts named above, Lynda Downing, Harry Medina, and John Karikari (Lead Assistant Directors); Tania Calhoun; Emily Chalmers; William Chatlos; Grace Cho; Rachel DeMarcus; Marc Molino; Mary Osorno; Jared Sippel; Winnie Tsen; Jim Vitarello; and Heneng Yu made important contributions to this report.
Two years after the Department of the Treasury (Treasury) first made available up to $50 billion for the Making Home Affordable (MHA) program, foreclosure rates remain at historically high levels. Treasury recently introduced several new programs intended to further help homeowners. This report examines (1) the status of three of these new programs, (2) characteristics of homeowners with first-lien modifications from the Home Affordable Modification Program (HAMP), and (3) the outcomes for borrowers who were denied or fell out of first-lien modifications. To address these questions, GAO analyzed data from Treasury and six large MHA servicers.

The implementation of Treasury’s programs to reduce or eliminate second-lien mortgages, encourage the use of short sales or deeds-in-lieu, and stimulate the forgiveness of principal has been slow, and limited activity has been reported to date. This slow pace is attributed in part to several implementation challenges. For example, servicers told GAO that the start of the second-lien modification program had been slow due to problems with the database Treasury required them to use to identify potentially eligible loans. Additionally, borrowers may not be aware of their potential eligibility for the program. While Treasury recently revised its guidelines to allow servicers to bypass the database for certain loans, servicers could do more to alert HAMP first-lien modification borrowers about the new second-lien program. Implementation of the foreclosure alternatives program has also been slow due to program restrictions, such as the requirement that borrowers be evaluated for a first-lien modification even if they have already identified a potential buyer for a short sale. Although Treasury has recently taken action to address some of these concerns, the potential effects of its changes remain unclear. In addition, Treasury has not fully incorporated into its new programs key lessons from its first-lien modification program. For example, it has not obtained all required documentation to demonstrate that servicers have the capacity to successfully implement the newer programs. As a result, servicers’ ability to effectively offer troubled homeowners second-lien modifications, foreclosure alternatives, and principal reductions is unclear. Finally, Treasury has not implemented GAO’s June 2010 recommendation that it establish goals and effective performance measures for these programs. Without performance measures and goals, Treasury will not be able to effectively assess the outcomes of these programs.

Treasury’s data provide important insights into the characteristics of borrowers participating in the HAMP first-lien modification program, but data were sometimes missing or questionable. More homeowners have been denied or canceled from HAMP trial loan modifications than have received permanent modifications. To understand which borrowers HAMP has been able to help, GAO looked at Treasury’s data on borrowers in HAMP trial and permanent modifications. These data showed that HAMP borrowers had reduced income and high debt, but the reliability and integrity of some of Treasury’s information were questionable.

GAO recommends that Treasury require servicers to advise borrowers to contact servicers about second-lien modifications and ensure that servicers demonstrate the capacity to successfully implement Treasury’s new programs.
GAO also recommends that Treasury consider methods to better capture outcomes for borrowers denied or canceled from HAMP first-lien modifications. Treasury acknowledged challenges faced by servicers in implementing the program but felt that certain criticisms of MHA were unwarranted. However, GAO continues to believe that further action is needed to better ensure the effectiveness of these programs.
The federal real property portfolio is vast and diverse, totaling more than 900,000 buildings and structures—including office buildings, warehouses, laboratories, hospitals, and family housing—and worth hundreds of billions of dollars. The six largest federal real property holding agencies—DOD; GSA; the U.S. Postal Service; and the Departments of Veterans Affairs (VA), Energy, and the Interior—occupy 87.6 percent of the total square footage in federal buildings. Overall, the federal government owns approximately 83 percent of this space and leases or otherwise manages the rest; however, these proportions vary by agency. For example, GSA, the central leasing agent for most agencies, now leases more space than it owns.

The federal real property portfolio includes many properties the federal government no longer needs. In May 2011, the White House posted an interactive map of excess federal properties on its Web site, noting that the map illustrates a sampling of over 7,000 buildings and structures currently designated as excess. These properties range from sheds to underutilized office buildings and empty warehouses. We visited an office and warehouse complex in Fort Worth, Texas, that was listed on the Web site. Ten of the properties listed on the Web site as part of the Fort Worth complex were parceled together and auctioned in May 2011, but the sale is not yet final. The structures ranged from large warehouses to a concrete slab. (See fig. 1.) Our ongoing work for this subcommittee on how federal agencies designate excess federal real property will include visits to other properties around the country that are considered excess.

After we first designated federal real property as a high-risk area in 2003, the President issued Executive Order 13327 in February 2004, which established new federal property guidelines for 24 executive branch departments and agencies. Among other things, the executive order called for creating the interagency FRPC to develop guidance, collect best practices, and help agencies improve the management of their real property assets.

DOD has undergone four BRAC rounds since 1988 and is currently implementing its fifth round. Generally, the purpose of prior BRAC rounds was to generate savings to apply to other priorities, reduce property deemed excess to needs, and realign DOD’s workload and workforce to achieve efficiencies in property management. As a result of the prior BRAC rounds in 1988, 1991, 1993, and 1995, DOD reported that it had reduced its domestic infrastructure and transferred hundreds of thousands of acres of unneeded property to other federal and nonfederal entities. DOD data show that the department generated an estimated $28.9 billion in net savings or cost avoidances from the prior four BRAC rounds through fiscal year 2003 and expects to save about $7 billion each year thereafter, which could be applied to other higher-priority defense needs. These savings reflect money that DOD estimated it would likely have spent to operate military bases had they remained open. However, we found that DOD’s savings estimates are imprecise because the military services have not updated them regularly despite our previously reported concerns about this issue.

The 2005 BRAC round affected hundreds of locations across the country through 24 major closures, 24 major realignments, and 765 lesser actions, which also included terminating leases and consolidating various activities.
Legislation authorizing the 2005 BRAC round maintained requirements established for the three previous BRAC rounds that GAO provide a detailed analysis of DOD’s recommendations and of the BRAC selection process. We submitted the results of our analysis in a 2005 report and testified before the BRAC Commission soon thereafter. Since that time, we have published annual reports on the progress, challenges, and costs and savings of the 2005 round, in addition to numerous reports on other aspects of implementing the 2005 BRAC round.

The administration and real-property-holding agencies have made progress in a number of areas since we designated federal real property as high risk in 2003. In 2003, we reported that despite the magnitude and complexity of real-property-related problems, there had been no governmentwide strategic focus on real property issues. Not having a strategic focus can lead to ineffective decision making. As part of the government’s efforts to strategically manage its real property, the administration established FRPC—a group composed of the OMB Controller and senior real property officers of landholding agencies—to support real property reform efforts. Through FRPC, the landholding agencies have also established asset management plans, standardized real property data reporting, and adopted various performance measures to track progress. The asset management plans are updated annually and help agencies take a more strategic approach to real property management by indicating how real property moves the agency’s mission forward; outlining the agency’s capital management plans; and describing how the agency plans to operate its facilities and dispose of unneeded real property, including listing current and future disposal plans. According to several member agencies, FRPC no longer meets regularly but remains a forum for agency coordination on real property issues and could serve a larger role in future real property management.

We also reported earlier that a lack of reliable real property data compounded real property management problems. The governmentwide data maintained at that time were unreliable, out of date, and of limited value. In addition, certain key data that would be useful for budgeting and strategic management were not being maintained, such as data on space utilization, facility condition, historical significance, security, and age. We found that some of the major real-property-holding agencies faced challenges developing reliable data on their real property assets. We noted that reliable governmentwide and agency-specific real property data are critical for addressing real property management challenges. For example, better data would help the government determine whether assets are being used efficiently, make investment decisions, and identify unneeded properties.

In our February 2011 high-risk update, we reported that the federal government has taken numerous steps since 2003 to improve the completeness and reliability of its real property data. FRPC, in conjunction with GSA, established the Federal Real Property Profile (FRPP) to meet a requirement in Executive Order 13327 for a single real property database that includes all real property under the control of executive branch agencies. FRPP contains asset-level information submitted annually by agencies on 25 high-level data elements, including four performance measures that enable agencies to track progress in achieving property management objectives.
In response to our 2007 recommendation to improve the reliability of FRPP data, OMB required, and agencies implemented, data validation plans that include procedures to verify that the data are accurate and complete. Furthermore, GSA’s Office of Governmentwide Policy (OGP), which administers the FRPP database, instituted a data validation process that precludes FRPP from accepting an agency’s data until the data pass all established business rules and data checks. In our most recent analysis of the reliability of FRPP data, we found none of the previous basic problems, such as missing data or inexplicably large changes between years. In addition, agencies continue to improve their real property data for their own purposes. From a governmentwide perspective, OGP has sufficient standards and processes in place for us to consider the 25 elements in FRPP as a database that is sufficiently reliable to describe the real property holdings of the federal government. Consequently, we removed the data element of real property management from our high-risk list this year.
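To illustrate the kind of business rules such a validation process applies, the following is a purely illustrative Python sketch; the element names, the 50 percent change threshold, and the logic are our assumptions for illustration, not GSA’s actual FRPP system or rules.

# Purely illustrative sketch -- not GSA's actual FRPP system or rules.
# Element names and the change threshold are assumptions; FRPP itself
# collects 25 high-level data elements.
REQUIRED_ELEMENTS = ["asset_id", "square_footage", "utilization", "condition_index"]

def validate_submission(records, prior_year, max_change=0.5):
    """Flag records that are incomplete or that change implausibly
    (here, by more than 50 percent) from the prior year's submission;
    a submission would be accepted only if no records are flagged."""
    errors = []
    for rec in records:
        missing = [e for e in REQUIRED_ELEMENTS if rec.get(e) in (None, "")]
        if missing:
            errors.append((rec.get("asset_id"), "missing elements: %s" % missing))
            continue
        old = prior_year.get(rec["asset_id"])
        if old and old.get("square_footage"):
            change = abs(rec["square_footage"] - old["square_footage"]) / old["square_footage"]
            if change > max_change:
                errors.append((rec["asset_id"],
                               "square footage changed %.0f%% year over year" % (100 * change)))
    return errors

# Example: a 200 percent year-over-year jump in square footage is flagged.
prior = {"A1": {"square_footage": 10000}}
new = [{"asset_id": "A1", "square_footage": 30000, "utilization": 0.8, "condition_index": 75}]
print(validate_submission(new, prior))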
The government now has a more strategic focus on real property issues and more reliable real property data, but problems related to unneeded property and leasing persist because the government has not addressed underlying legal and financial limitations and stakeholder influences. In our February 2011 high-risk update, we noted that the legal requirements agencies must adhere to before disposing of a property, such as requirements for screening and environmental cleanup, present a challenge to consolidating federal properties. Currently, before GSA can dispose of a property that a federal agency no longer needs, it must offer the property to other federal agencies. If other federal agencies do not need the property, GSA must then make the property available to state and local governments and certain nonprofit organizations and institutions for public benefit uses, such as homeless shelters, educational facilities, or fire or police training centers. As a result of this lengthy process, GSA’s underutilized or excess properties may remain in an agency’s possession for years and continue to accumulate maintenance and operations costs. We have also noted that the National Historic Preservation Act, as amended, requires agencies to manage historic properties under their control and jurisdiction and to consider the effects of their actions on historic preservation. The average age of properties in GSA’s portfolio is 46 years, and since properties more than 50 years old are eligible for historic designation, this issue will soon become critically important to GSA.

The costs of disposing of federal property further hamper some agencies’ efforts to address their excess and underutilized real property problems. For example, federal agencies are required by law to assess and pay for any environmental cleanup that may be needed before disposing of a property—a process that may require years of study and result in significant costs. In some cases, the cost of the environmental cleanup may exceed the cost of continuing to maintain the excess property in a shut-down status. The associated costs of complying with these legal requirements create disincentives to dispose of excess property.

Moreover, local stakeholders—including local governments, business interests, private real estate interests, private-sector construction and leasing firms, historic preservation organizations, various advocacy groups for citizens that benefit from federal programs, and the public in general—often view federal facilities as the physical face of the federal government in their communities. The interests of these multiple, and often competing, stakeholders may not always align with the most efficient use of government resources and can complicate real property decisions. For example, as we first reported in 2007, VA officials noted that stakeholders and constituencies, such as historic building advocates or local communities that want to maintain their relationship with VA, often prevent the agency from disposing of properties. In 2003, we indicated that an independent commission or governmentwide task force might be necessary to help overcome stakeholder influences in real property decision making. In 2007, we recommended that OMB, which is responsible for reviewing agencies’ progress on federal real property management, assist agencies by developing an action plan to address the key problems associated with decisions related to unneeded real property, including stakeholder influences. OMB agreed with the recommendation.

The administration’s recently proposed legislative framework, CPRA, is somewhat responsive to our recommendation in that it addresses legal and financial limitations, as well as stakeholder influences, in real property decision making. With the goal of streamlining the disposal process, CPRA provides for an independent board to determine which properties it considers would be the most appropriate for public benefit uses. This streamlined process could reduce both the time it takes for the government to dispose of property and the amount the government pays to maintain property. To provide financial assistance to the agencies, CPRA establishes an Asset Proceeds and Space Management Fund from which funds could be transferred to reimburse an agency for necessary costs associated with disposing of property. Reimbursing agencies for the costs they incur would potentially facilitate the disposal process. To address stakeholder influences, the independent board established under CPRA would, among other things, recommend federal properties for disposal or consolidation after receiving recommendations from civilian landholding agencies and would independently review the agencies’ recommendations. Grouping all disposal and consolidation decisions into one set of proposals that Congress would consider in its entirety could help to limit local stakeholder influences at any individual site.

CPRA does not explicitly address the government’s overreliance on leasing. In 2008, we found that decisions to lease selected federal properties were not always driven by cost-effectiveness considerations. For example, we estimated that the decision to lease the Federal Bureau of Investigation’s field office in Chicago, Illinois, instead of constructing a building the government would own, cost about $40 million more over 30 years. GSA officials noted that the limited availability of upfront capital was one of the reasons that prevented ownership at that time. Federal budget scorekeeping rules require the full cost of construction to be recorded up front in the budget, whereas only the annual lease payments plus cancellation costs need to be recorded for operating leases.
In April 2007 and January 2008, we recommended that OMB develop a strategy to reduce agencies’ reliance on costly leasing where ownership would result in long-term savings. We noted that such a strategy could identify the conditions under which leasing is an acceptable alternative, include an analysis of real property budget scoring issues, and provide an assessment of viable alternatives. OMB concurred with this recommendation but has not yet developed a strategy to reduce agencies’ reliance on leasing. One of CPRA’s purposes—to realign civilian real property by consolidating, colocating, and reconfiguring space to increase efficiency—could help to reduce the government’s overreliance on leasing. Our current work examines the efficiency of the federal government’s real property lease management in more detail.

DOD has undergone five BRAC rounds to realign its workload to achieve efficiencies and savings in property management, including reducing excess properties. The BRAC process, much like CPRA, was designed to address obstacles to closures or realignments, thus permitting DOD to close or realign installations and its missions to better use its facilities and generate savings. Certain key elements of DOD’s process for closing and realigning its installations may be applicable to the realignment of real property governmentwide. Some of these key elements include establishing goals, developing criteria for evaluating closures and realignments, developing a structural plan for applying selection criteria, estimating the costs and savings anticipated from implementing recommendations, establishing a structured process for obtaining and analyzing data, and involving the audit community.

DOD’s BRAC process was designed to address certain challenges to base closures or realignments, including stakeholder interests, thereby permitting the department to realign its missions to better use its facilities, generate savings, and, in some cases, dispose of property. The most recent defense base closure and realignment round followed a historical analytical framework, carrying many elements of the process forward or building upon lessons learned from the department’s four previous rounds. DOD used a logical, reasoned, and well-documented process. In addition, we have identified lessons learned from DOD’s 1988, 1991, 1993, and 1995 rounds, and we have begun an effort to assess lessons learned from the 2005 BRAC round. DOD’s 2005 BRAC process consisted of activities that followed a series of statutorily prescribed steps: Congress established clear time frames for implementation; DOD developed options for closure or realignment recommendations; the BRAC Commission independently reviewed DOD’s proposed recommendations; the President reviewed and approved the BRAC recommendations; and Congress did not disapprove the recommendations, so they became binding.

In developing its recommendations for the BRAC Commission, DOD relied on certain elements in its process that Congress may wish to consider as it evaluates the administration’s proposed legislation for disposing of or realigning civilian real property, as follows:

Establish goals for the process. The Secretary of Defense emphasized the importance of transforming the military to make it more efficient as part of the 2005 BRAC round. Other goals for the 2005 BRAC process included fostering jointness among the four military services, reducing excess infrastructure, and producing savings.
Prior rounds focused more on reducing excess infrastructure and producing savings.

Develop criteria for evaluating closures and realignments. DOD initially proposed eight selection criteria, which were made available for public comment via the Federal Register. Ultimately, Congress enacted the eight final BRAC selection criteria in law and specified that four of them, known as the “military value criteria,” were to be given priority in developing closure and realignment recommendations. The primary military value criteria include such considerations as an installation’s current and future mission capabilities and the impact on operational readiness of the total force; the availability and condition of land, facilities, and associated airspace at both existing and potential receiving locations; the ability to accommodate a surge in the force and future total force requirements at both existing and potential receiving locations; and the costs of operations and personnel implications. In addition, Congress specified that in developing its recommendations, DOD was to apply “other criteria,” such as the costs and savings associated with a recommendation; the economic impact on existing communities near the installations; the ability of the infrastructure in existing and potential communities to support forces, missions, and personnel; and environmental impact. Further, Congress required that the Secretary of Defense develop and submit to Congress a force structure plan that described the probable size of major military units—for example, divisions, ships, and air wings—needed to address probable threats to national security, based on the Secretary’s assessment of those threats for the 20-year period beginning in 2005, along with a comprehensive inventory of global military installations. In authorizing the 2005 BRAC round, Congress specified that the Secretary of Defense publish a list of recommendations for the closure and realignment of military installations inside the United States based on the statutorily required 20-year force structure plan and infrastructure inventory, and on the selection criteria.

Estimate costs and savings to implement closure and realignment recommendations. To address the cost and savings criteria, DOD developed and used the Cost of Base Realignment Actions (COBRA) model, a quantitative tool that DOD has used since the 1988 BRAC round to provide consistency in potential cost, savings, and return-on-investment estimates for closure and realignment options. We reviewed the COBRA model as part of our review of the 2005 and prior BRAC rounds and found it to be a generally reasonable estimator for comparing potential costs and savings among alternatives. As with any model, the quality of the output is a direct function of the input data. Also, DOD’s COBRA model relies to a large extent on standard factors and averages and does not produce budget-quality estimates, which are developed once BRAC decisions are made and detailed implementation plans are developed. Nonetheless, the financial information provides important input into the selection process as decision makers weigh the financial implications—along with military value criteria and other considerations—in arriving at final decisions about the suitability of various closure and realignment options. However, according to our assessment of the 2005 BRAC round, actual costs and savings differed from estimates.
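COBRA’s internal algorithms are beyond the scope of this statement, but as a stylized illustration only (the notation and the simplification are ours, not COBRA’s actual formulas), a return-on-investment comparison of closure or realignment options can be thought of as weighing one-time implementation costs against recurring annual savings:

% Stylized illustration only; the notation is ours, not COBRA's
% actual formulas.
\[
  \text{NetSavings}(T) = \sum_{t=1}^{T} S_t - C_0,
  \qquad
  \text{ROI year} = \min\{\, T : \text{NetSavings}(T) > 0 \,\},
\]
% where C_0 denotes the one-time implementation costs of an option and
% S_t the recurring savings in year t. Options that recoup their costs
% sooner compare more favorably, holding military value constant.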
Establish an organizational structure. The Office of the Secretary of Defense emphasized the need for joint cross-service groups to analyze common business-oriented functions. For the 2005 BRAC round, as for the 1993 and 1995 rounds, these joint cross-service groups performed analyses and developed closure and realignment options in addition to those developed by the military services. In contrast, our evaluation of DOD’s 1995 BRAC round indicated that few cross-service recommendations were made, in part because of the lack of high-level leadership to encourage consolidations across the services’ functions. In the 1995 BRAC round, the joint cross-service groups submitted options through the military services for approval, but few were approved. The number of approved recommendations that the joint cross-service groups developed increased significantly in the 2005 BRAC round, in part because high-level leadership ensured that the options were approved not by the military services but rather by a DOD senior-level group.

Establish a common analytical framework. To ensure that the selection criteria were consistently applied, the Office of the Secretary of Defense, the military services, and the seven joint cross-service groups first performed a capacity analysis of facilities and functions in which, according to DOD, all installations received some basic capacity questions. Before developing the candidate recommendations, DOD relied on data calls to hundreds of locations to obtain certified data for its capacity analysis, assessing such factors as maximum potential capacity, current capacity, current usage, and excess capacity. Then, the military services and joint cross-service groups performed a military value analysis for the facilities and functions based on the primary military value criteria, which included a facility’s or function’s current and future mission capabilities, physical condition, ability to accommodate future needs, and cost of operations.

Involve the audit community to better ensure data accuracy. The DOD Inspector General and military service audit agencies played key roles in identifying data limitations, pointing out needed corrections, and improving the accuracy of the data used in the process. In their oversight roles, the audit organizations, which had access to relevant information and officials as the process evolved, helped to improve the accuracy of the data used in the BRAC process and thus strengthened the quality and integrity of the data used to develop closure and realignment recommendations. For example, the auditors worked to ensure that certified information was used for BRAC analysis and reviewed other facets of the process, including the various internal control plans, the COBRA model, and other modeling and analytical tools that were used in the development of recommendations.

There are a number of important similarities between BRAC and a civilian process as proposed in the administration’s CPRA. One similarity is that both BRAC and CPRA employ an all-or-nothing approach to disposals and consolidations, meaning that once the final list is approved by the independent commission or board, it must be accepted or rejected as a whole. Another important similarity is that both the BRAC and proposed CPRA processes call for an independent board or commission to review recommendations.
A key difference between BRAC and the administration’s proposed CPRA is that while the BRAC process placed the Secretary of Defense in a central role to review and submit candidate recommendations to the independent board, CPRA does not provide for any similar central role for civilian agencies. The BRAC process required the Secretary of Defense to develop and submit recommendations to the BRAC Commission for review. In this role, the Office of the Secretary of Defense reviewed and revised the various candidate recommendations developed by the four military services and the seven separate joint cross-service groups. In contrast, the administration’s proposed CPRA does not place any official or organization in such a central role to review and submit the recommendations proposed by various federal agencies to the independent board for assessment and approval. Another key difference between BRAC and CPRA is the time period in which the independent body would be in existence. CPRA, as proposed by the administration, would establish a continuing board that would provide recommendations twice a year for 12 years, whereas the BRAC Commission convened only in those years in which it was authorized. For example, after the most recent 2005 BRAC round, the Commission terminated by law in April 2006. However, we believe a phased approach involving multiple rounds of civilian property realignments is warranted, given that it may take several BRAC-like rounds to complete the disposals and consolidations of civilian real property owned and leased by many disparate agencies, including GSA, VA, the Department of the Interior, the Department of Energy, and others.

In closing, the government has made strides toward strategically managing its real property and improving its real property planning and data over the last 10 years, but those efforts have not yet led to sufficient reductions in excess property and overreliance on leasing. DOD’s experience with BRAC could help the process move forward to dispose of unneeded civilian real property and generate savings for the taxpayer. Chairman Carper, Ranking Member Brown, and Members of the Subcommittee, this concludes our prepared statement. We will be pleased to answer any questions that you may have at this time.

For further information on this testimony, please contact David Wise at (202) 512-2834 or wised@gao.gov regarding federal real property, or Brian Lepore at (202) 512-4523 or leporeb@gao.gov regarding the BRAC process. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. In addition to the contacts named above, Keith Cunningham, Assistant Director; Laura Talbott, Assistant Director; Vijay Barnabas; Elizabeth Eisenstadt; Amy Higgins; Susan Michal-Smith; Crystal Wesco; and Michael Willems made important contributions to this statement.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government holds more than 45,000 underutilized properties that cost nearly $1.7 billion annually to operate, yet significant obstacles impede efforts to close, consolidate, or find other uses for these properties. GAO has designated federal real property management as a high-risk area, in part because of the number and cost of these properties. The Office of Management and Budget (OMB) is responsible for reviewing federal agencies’ progress in real property management. In 2007, GAO recommended that OMB assist agencies by developing an action plan to address key obstacles associated with decisions related to unneeded real property, including stakeholder influences. In May 2011, the administration proposed legislation, referred to as the Civilian Property Realignment Act (CPRA), to, among other things, establish a legislative framework for disposing of and consolidating civilian real property that could also help limit stakeholder influences in real property decision making.

This statement identifies (1) progress the government has made toward addressing obstacles to federal real property management, (2) some of the challenges that remain and how CPRA may be responsive to those challenges, and (3) key elements of the Department of Defense’s (DOD) base realignment and closure (BRAC) process that could expedite the disposal of unneeded civilian properties. To do this work, GAO relied on its prior work and reviewed CPRA and other relevant reports.

In designating federal real property management as a high-risk area, GAO reported that despite the magnitude and complexity of real-property-related problems, there was no governmentwide strategic focus on real property issues and governmentwide data were unreliable and outdated. The administration and real-property-holding agencies have subsequently improved their strategic management of real property by establishing an interagency Federal Real Property Council designed to enhance real property planning processes and implementing controls to improve the reliability of federal real property data. Even with this progress, problems related to unneeded property and leasing persist because the government has not yet addressed other challenges to effective real property management, such as legal and financial limitations and stakeholder influences. CPRA is somewhat responsive to these challenges. For example, CPRA proposes an independent board that would streamline the disposal process by selecting properties it considers appropriate for public benefit uses. This streamlined process could reduce disposal time and costs. CPRA would also establish an Asset Proceeds and Space Management Fund that could be used to reimburse agencies for necessary disposal costs. The proposed independent board would address stakeholder influences by recommending federal properties for disposal or consolidation after receiving recommendations from civilian landholding agencies and independently reviewing the agencies’ recommendations. CPRA does not explicitly address the government’s overreliance on leasing, but could help do so through board recommendations for consolidating operations where appropriate. GAO is currently examining issues related to leasing costs and excess property. Certain key elements of DOD’s BRAC process--which, like CPRA, was designed to address obstacles to closures or realignments--may be applicable to the disposal and realignment of real property governmentwide.
These elements include establishing goals, developing criteria for evaluating closures and realignments, estimating the costs and savings anticipated from implementing recommendations, and involving the audit community. A key similarity between BRAC and CPRA is that both establish an independent board to review agency recommendations. A key difference is that while the BRAC process places the Secretary of Defense in a central role to review and submit candidate recommendations to the independent board, CPRA does not provide for any similar central role for civilian agencies.
The “Support for Others” business line covers the Corps’ activities related to interagency and international support. Corps headquarters, divisions, and districts are all involved in developing the President’s budget request for the Corps. As part of the executive budget formulation process, Corps headquarters staff, with input and data from division and district offices, develop a budget request for the agency. Once the Corps completes its internal review, the Assistant Secretary of the Army for Civil Works approves the budget and submits it to OMB for review. OMB recommends to the President whether to support or change the Army’s proposals, and the decisions made during OMB’s budget review process culminate in the President’s budget request transmitted to Congress at the beginning of February. Shortly thereafter, the Corps provides budget justification materials that support the President’s request in more detail to the House and Senate Appropriations committees’ subcommittees.

The documents that typically make up the budget presentation for the Corps are the congressional budget justification, the Press Book, and the Five Year Development Plan. The budget justification for the fiscal year 2010 budget request includes details on the construction projects and investigations projects—studies to determine whether the Corps should initiate construction projects—included in the budget request, such as a narrative description, the total estimated federal cost, and the amount allocated in prior years. It also provides some information on other Corps accounts, such as the Flood Control and Coastal Emergencies account. The information included in the Press Book has varied in recent years, but the Press Book accompanying the fiscal year 2010 budget request consisted primarily of a listing of all construction, investigations, and operation and maintenance (O&M) projects included in the budget request. The listing is organized by state and specifies the amount requested for each project. Finally, the Corps has in the past included a Five Year Development Plan as part of the budget presentation, though it did not for the fiscal year 2010 or 2011 budget requests. The most recent Five Year Development Plan contained descriptions of nine civil works accounts and summaries of its business line programs, including past accomplishments and future challenges. It also included project-level details for the Construction and Investigations accounts, with projected funding requirements for the current fiscal year and the 4 subsequent fiscal years. It did not include project-level details for the O&M account. In addition to the information contained in the budget presentation, congressional staff members may request additional information as needed for decision making.

The submission of the President’s budget request to Congress marks the beginning of the congressional phase of the budget process. The budget request is often a starting point for congressional actions, and Congress typically makes changes that reflect its priorities. For example, Congress has historically appropriated more funding to the Corps for a greater number of projects than have been included in the President’s budget request.
About 84 percent of the President’s fiscal year 2010 budget request for the Corps’ civil works program was for three appropriations accounts—Construction, Investigations, and O&M—all of which are focused on specific projects or studies. The Construction account includes construction and major rehabilitation projects related to navigation, flood control, water supply, hydroelectric power, and environmental restoration. The Investigations account funds studies to determine the necessity, feasibility, and returns to the nation of potential solutions to water resource problems, as well as design, engineering, and other work. The O&M account focuses on preserving, operating, and maintaining river and harbor projects that have already been constructed. The Formerly Utilized Sites Remedial Action Program, another Corps account, is also project-based. Table 1 summarizes the fiscal year 2010 budget request and appropriations for the three accounts, and figure 2 shows a breakdown of the request by account; of the $5.125 billion total civil works budget request in fiscal year 2010, O&M accounted for $2,504 million.

Since fiscal year 2006, the Corps has received appropriations of over $5 billion annually for its civil works program through the Energy and Water Development Appropriations Act. Committee and conference reports accompanying the appropriations bills include specific allocations of funding for individual projects. The Corps also typically receives funds, particularly for construction projects, from each project’s local sponsor, which may be a state, tribal, county, or local agency or government. In addition to the funding received through annual appropriations acts, the Corps received supplemental appropriations in 6 of the past 8 fiscal years. Some supplemental appropriations have been designated for specific activities. For example, a Corps official told us that in fiscal year 2009 the agency received supplemental funding of about $5.8 billion for hurricane protection in Louisiana. In recent years, most supplemental funding provided to the Corps has been used for expenses related to the consequences of the 2005 Gulf Coast hurricanes, including Hurricane Katrina. According to the Corps official, funding has also been directed to expenses related to the consequences of hurricanes Gustav and Ike (both 2008 hurricanes), as well as the 2008 Midwest floods. The Corps also received $4.6 billion in fiscal year 2009 through the American Recovery and Reinvestment Act. Figure 3 shows the amount of funding the administration has requested for the Corps’ civil works program and the amount the Corps has received, both through annual and supplemental appropriations, from fiscal years 2003 through 2010.

The Corps’ strategic plan for its civil works program lays out its goals and objectives and its strategies for achieving them. The Corps’ current strategic plan covers fiscal years 2004 through 2009, and the Corps is planning to issue an updated version that will cover fiscal years 2010 through 2014. The goals listed in the most recent strategic plan are: (1) provide sustainable development and integrated management of the nation’s water resources; (2) repair past environmental degradation and prevent future environmental losses; (3) ensure that projects perform to meet authorized purposes and evolving conditions; (4) reduce vulnerabilities and losses to the nation and the Army from natural and man-made disasters, including terrorism; and (5) be a world-class engineering organization.
Prior to fiscal year 2006, the Corps’ budget formulation process was relatively decentralized, with divisions playing a significant role. According to Corps officials, the Corps’ previous budget formulation process for the Construction, Investigations, and O&M accounts started with district staff, who developed a request for their geographic area. Next, division staff integrated the district office projects into a single divisionwide portfolio of projects. Finally, headquarters staff consolidated each of the divisionwide portfolios into a single agencywide portfolio. Under the former process, each division was authorized an amount of funding, which division officials would allocate with two conditions: (1) all projects were required to meet administration priorities, and (2) construction and investigations projects that reached a certain stage were required to have benefits that at least equaled costs. Corps officials told us that they sought to provide continued funding to all ongoing projects that fit within administration guidelines. Beginning in fiscal year 2006, the Corps introduced what it refers to as performance-based budgeting as a way to focus funding requests on those projects with the highest anticipated return on investment, rather than on all ongoing projects as it sought to do in the past. Under the new process, Corps headquarters began playing a greater role in selecting projects, using performance criteria that emphasize agencywide priorities. Specifically, although districts and divisions continue to collect and develop project data, ranking of construction and investigations projects is now done only at the headquarters level. While division staff still rank O&M projects, a Corps official told us that headquarters staff review these rankings to make sure that they are consistent with Corps-wide guidance and result in decisions that emphasize agencywide priorities. Headquarters staff then consolidate the O&M requests across business lines and divisions into a highest-priority grouping. According to a Corps official, the use of performance-based budgeting has allowed the Corps to present OMB with various funding options based on performance criteria. While the Corps also presented OMB with different options prior to fiscal year 2006, the official told us that under that process these options reflected regional priorities. Under its current budget formulation process, the Corps uses performance metrics to evaluate projects’ estimated future outcomes, and gives priority to those with the highest expected returns for the national economy and the environment, as well as those that reduce risk to human life. The Corps’ written budget guidance, the Budget Engineer Circular (Budget EC), details the data that should be developed for each project to support budgetary decisions. For example, the Corps calculates the economic benefits of most construction and investigations projects using a benefit-cost ratio (BCR). The Corps uses projects’ BCRs to evaluate them against each other and determine whether they will be given priority in the budget request. According to Corps and OMB staff, each year OMB sets minimum BCR thresholds that some construction and investigations projects must meet to be included in the budget request. If projects do not meet the designated BCR thresholds, they may qualify in other ways, such as by restoring a nationally significant ecosystem or addressing risk to human life.
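To illustrate the screening rule just described, the following minimal sketch (in Python) applies a BCR threshold with the alternative qualifying criteria. The threshold value, project names, and qualifying-criteria flags are hypothetical assumptions for illustration only; the actual Corps and OMB process involves additional categories and professional judgment.

```python
# Minimal sketch of the BCR screening rule described above.
# The threshold, projects, and qualifying flags are hypothetical.

projects = [
    # (name, BCR, restores nationally significant ecosystem, addresses risk to human life)
    ("Project A", 3.1, False, False),
    ("Project B", 1.4, True,  False),
    ("Project C", 0.9, False, False),
    ("Project D", 1.1, False, True),
]

BCR_THRESHOLD = 2.5  # hypothetical; per Corps and OMB staff, OMB sets this annually

def qualifies(bcr, ecosystem, life_safety):
    """A project may be included if it meets the BCR threshold,
    or qualifies another way (ecosystem restoration or risk to life)."""
    return bcr >= BCR_THRESHOLD or ecosystem or life_safety

for name, bcr, ecosystem, life_safety in projects:
    status = "qualifies" if qualifies(bcr, ecosystem, life_safety) else "does not qualify"
    print(name, status)
```

Under these assumed inputs, Project A qualifies on its BCR alone, Projects B and D qualify through the alternative criteria, and Project C does not qualify.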
The use of these metrics to evaluate projects provides the Corps with a mechanism to give priority to projects that, based on the current method of calculation, may not generate any economic benefits or have relatively low BCRs, but benefit nationally significant ecosystems or address risk to human life. For O&M projects, imminent risk to human life and the amount of commercial tonnage transported on a waterway are examples of the types of factors described in the Budget EC that influence the priority of a navigation project. Additionally, the Corps’ use of performance metrics makes projects in certain geographic areas more likely to be included in the budget request, since such projects produce higher returns on investment. For example, since a primary input in BCR calculations for Flood Risk Management projects is the value of property for which damage would be prevented as a result of the project, projects in areas with high property values—such as in California—tend to have higher BCRs. Ecosystem restoration projects with national significance are also given priority under this process. More specifically, the Everglades in Florida has consistently been among the projects included in this category, and over the past 5 years has been the project with the most funding requested. In addition, the risk to human life metric is affected by population density, so more densely populated areas tend to be given priority. According to Corps and OMB staff, another effect of the performance criteria used as part of the current budget process is that fewer construction and investigations projects have been included in the budget request in recent years. Corps officials also attributed the decrease in the number of projects to available funding and budget cutoffs, such as the BCR. From fiscal year 2001 to 2010, the number of construction projects included in the budget request decreased by about 52 percent, and the number of investigations projects decreased by about 79 percent. Though the number of construction and investigations projects decreased, the average amount requested per project has increased over time. For example, the average request per construction project went from $7.0 million in fiscal year 2001 to $17.3 million in fiscal year 2010. In contrast to trends in the Construction and Investigations accounts, the use of the ranking metrics does not appear to have had a significant effect on the O&M account; the number of projects within the O&M account has remained relatively stable. From fiscal year 2001 to 2010, the number of O&M projects requested increased by about 7 percent. Corps officials told us that the relative consistency of the O&M account is partially due to the emphasis on critical routine projects and activities. Because the performance metrics used to evaluate O&M projects—such as the amount of commercial tonnage transported on a waterway—tend to be consistent, and a large portion of projects are routine (occurring every year or on an otherwise cyclical basis), the projects given priority tend to be the same from year to year. Additionally, they told us that because there are more project activities of lower value in the O&M account, changes to specific projects generally do not affect the overall request amount as significantly as variations in the projects in the Construction account do. In fiscal year 2010, the average amount requested per O&M project was $2.8 million. Budget trends are discussed in more detail in appendix IV.
OMB staff who review the budget request for the Corps concurred that the nature of the O&M account results in more stability in project selection than in the Construction and Investigations accounts. The performance metrics the Corps uses in its budget formulation process primarily focus on anticipated outcomes, and there is limited evidence of how information on demonstrated performance factors into decisions on budget requests. In part, the Corps focuses on anticipated outcomes because most of the construction and investigations projects being considered in the budget request are new or have not yet been completed, and thus have not generally begun to achieve benefits. Because the O&M account includes projects that have already been constructed, the Corps incorporates ongoing performance information, such as assessments of whether infrastructure meets current engineering and industry standards, to a greater extent when budgeting for these projects. Even though the overall focus for budget formulation of the three accounts is on anticipated outcomes, Corps officials told us that they monitor the progress of projects underway through review boards established at the district, division, and headquarters levels within the agency. These review boards generally meet monthly and focus on project management issues. These issues include whether projects are meeting financial goals and other milestones, such as awarding contracts on schedule. Review boards also discuss progress on two of the nine business lines each month and, on average, each business line is reviewed at least twice annually. A Corps headquarters official told us that the performance metrics presented at review boards demonstrate good performance, areas that need improvement, and situations where focused leadership attention would be useful. For example, Corps documentation showed that in a meeting in which it focused on the Flood Risk Management business line, the headquarters-level program review board looked at measures such as the number of dam safety assessments completed and the percentage of dams rated as unsafe. Although review boards collect a variety of performance information, the Corps does not have written guidance establishing a process for incorporating their findings into budget formulation decisions. Our previous work on performance-based budgeting found that federal agencies that were successful in measuring their performance worked to ensure that decisions were based on complete information. The Corps collects extensive data and has detailed processes for evaluating projects during the budget formulation process; however, in the absence of a documented process for considering information on demonstrated performance—such as the performance information discussed during review board meetings on whether projects are on time and on budget—the Corps may miss opportunities to make the best use of this information. Additionally, without a documented process it is not clear how information from the review boards shapes program priorities and affects decision making. Our prior work has emphasized the importance of transparency in federal agencies’ budget presentations. While the budget presentation for the Corps includes summaries of project categories, business lines, and accounts, it lacks summary-level information on the relationships and trade-offs made across these groups.
For example, the presentation for the fiscal year 2010 budget request describes the primary criteria used to evaluate both construction and O&M projects. However, it does not include an explanation of how the Corps makes trade-offs among the project types in these accounts—for example, the budget presentation does not include an explanation of the priority given to dam safety projects over other construction project categories, or the effects that this priority has on the other categories. It also lacks an explanation of the impact of emphasizing one account over another. Congressional users of the budget presentation told us that having summary information on how decisions that significantly affect the budget request are made would enhance their understanding of the budget process. The number of projects receiving appropriations is typically greater than the number included in the budget request; both groups include new and ongoing projects, and these counts exclude construction and investigations projects in the Mississippi River and Tributaries (MR&T) account. Because many more projects receive funds than are included in the budget request, an information gap is created when an administration highlights its priority projects but does not provide sufficient information on other ongoing projects that may continue to have resource needs. Congressional users of the Corps’ budget presentation told us that they are interested in previously funded projects not included in the budget request, and that not having information on these projects limits the ability of Congress to make fully informed appropriations decisions. A Corps official told us that the Corps would be able to include in the budget presentation information on projects funded in the previous year. Senate appropriators have also expressed interest in greater project-level information for the O&M account. Specifically, the Senate conference report accompanying the fiscal year 2010 Energy and Water Development Appropriations bill requested that the Corps provide in the fiscal year 2011 budget presentation, at a minimum, detailed project information justifying the need for each O&M project. For example, although the fiscal year 2010 budget request for the Corps included $2.5 billion for the O&M account (approximately 49 percent of the total request), the budget presentation for the Corps did not include detailed project-level information for this account or sufficient summary information to understand the status of O&M project implementation against agency projections or other benchmarks. Similarly, the Press Book lists all O&M projects in the request and the amount requested for each, but it does not provide any detailed information on how requested funding will be used. Furthermore, although the fiscal year 2010 budget justification provided detailed project-level information, such as narrative descriptions and previous funding allocations, for construction and investigations projects, it did not include any information on requested O&M projects. Congressional users of the budget presentation told us that such information would increase the usefulness and transparency of the presentation. Following up on the Senate’s request, the fiscal year 2011 budget request for the Corps included summary-level information describing how funding for each requested O&M project would be used. The Senate did not specify whether its request applies to fiscal years beyond 2011.
Finally, the budget presentation for the Corps does not include information on how much carryover of unobligated appropriations is available to potentially offset new requests for projects that were previously funded, which congressional users of the budget presentation stated would be useful. With this information, they can consider how much of the previous year’s funding remains available for obligation. Moreover, Corps officials told us that carryover amounts have increased in recent years. The budget request for the Corps includes aggregate information on carryover balances by account, but neither it nor the budget presentation includes information on how much carryover is available for specific projects. Accordingly, Congress has not been able to consider the full level of resources available for projects when making its appropriations decisions. Corps review boards routinely review whether projects are meeting financial milestones, so carryover balance information is available. However, a Corps official told us that project-level carryover estimates would not be available until after budget materials are submitted to Congress. According to this official, while the Corps is not able to include this information in the budget presentation, the Corps would be able to provide Congress with project carryover amounts separately and before final appropriations decisions are made. The Corps’ move toward including performance information in the budget formulation process has given priority to the projects with the highest anticipated returns on investment. Although the Corps collects data on the demonstrated performance of ongoing projects and on a case-by-case basis may use this information in budget decisions, it does not have a documented process to incorporate this type of information in budget formulation decisions. Without an established process to ensure that decision makers are aware of this information, relevant information may not always be considered in budget decisions. The current budget formulation process emphasizes agencywide priorities and focuses on projects with the highest estimated returns; however, the budget presentation for the Corps continues to lack transparency and key information that could be of use to congressional decision makers. While the budget presentation for the Corps includes a description of the primary metrics used to evaluate projects, it does not include a description of how decisions and trade-offs were made across project categories, business lines, or accounts. Although annual appropriations and accompanying committee and conference reports sometimes designate funds to be used for specific construction, investigations, and O&M projects, the budget presentation for the Corps lacks two types of project-level information that could be useful to congressional decision makers. First, the budget presentation lacks information on many projects that were previously funded and may continue to have resource needs. Because appropriators are likely to consider these projects for funding again, information on these projects is relevant and useful in the decision-making process. Second, the budget presentation lacks information on the amount of unobligated appropriations that remains available for each project. Such project-level information would help congressional decision makers make better-informed appropriations and oversight decisions.
Without such information it is unlikely that Congress can have a clear understanding of (1) how key trade-off decisions affected the budget request, (2) how new funding requests relate to funding decisions for existing projects with continuing resource needs, and (3) whether a given budget request and the underlying projects support longer term goals and priorities across component operations. To ensure that all relevant information is considered during the budget formulation process, we recommend that the Secretary of Defense direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to take the following action: Establish a documented process to incorporate assessments of ongoing project performance, such as information from review boards, into the budget formulation process. To improve the transparency and usefulness of the Corps’ budget presentation to Congress, building on the information the appropriators have requested the Corps provide, we recommend that the Secretary of Defense direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to work with OMB and Congress to take the following four actions: Include in the annual budget presentation for the Corps summary-level information on how the budget request reflects decisions made across project categories, business lines, and accounts. Continue to include in the annual budget presentation for the Corps project-level details for the O&M account, including an explanation of how the requested funding for each project will be used. Provide project-level information on all projects with continuing resource needs, either as part of the budget presentation or as supplementary information. As a supplement to the budget presentation, provide Congress with information on the estimated carryover of unobligated appropriations that remain available for each project. We provided a draft of this report to the Department of Defense for official review and comment. The department provided us with written comments, which are summarized below and reprinted in appendix V. The department concurred with four of our recommendations and did not concur with one. Specifically, the department concurred with our recommendations that the Corps include additional information in the budget presentation, including summary-level information on how the budget request reflects decisions, project-level details for the O&M account, and project-level information on all projects with continuing resource needs. The department also concurred with our recommendation that the Corps provide Congress with information on the estimated carryover of unobligated appropriations that remain for each project. The department did not agree, however, with our recommendation that the Corps establish a documented process to incorporate assessments of ongoing project performance, such as information from review boards, into the budget formulation process. The department stated that its existing mechanisms to incorporate assessments of project performance into the budget formulation process are adequate and that project review findings are used in making budgeting decisions. It also provided an example of how actual performance of O&M projects is used to determine budget priority. 
While we agree that the Corps’ current processes may incorporate project review findings, we continue to believe that establishing a documented process for the use of such information in the Corps’ budget formulation would ensure that the Corps routinely makes the best use of all available information. Additionally, having a documented process would improve understanding of how information from the review boards shapes program priorities and affects decision making. Moreover, our report discusses the Corps’ use of information on project progress, such as whether schedule and budgetary milestones are being met, through review boards at the district and division levels. However, according to Corps officials, this review board information affects funding decisions on a case-by-case rather than a routine basis. Finally, we have clarified in our report that we agree that the Corps’ budget formulation process for the O&M account reflects actual performance. Nonetheless, we continue to believe that the overall emphasis of the Corps’ budget process is on anticipated rather than demonstrated performance. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact Denise M. Fantone at (202) 512-6806 or fantoned@gao.gov, or Anu K. Mittal at (202) 512-3841 or mittala@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To analyze the U.S. Army Corps of Engineers’ (Corps) budget formulation process, we examined (1) the information the Corps uses in its budget formulation process and the implications of the process, and (2) whether the President’s budget request for the Corps is presented so that agency priorities are clear and the proposed use of funds transparent. We focused our review on three of the Corps’ accounts—Construction, Investigations, and Operation and Maintenance (O&M). Most civil works funding is designated to be used for specific projects, and projects are classified mainly into these accounts. The Formerly Utilized Sites Remedial Action Program (FUSRAP) is also project-based, but we did not include it in our review because of its specialized focus on sites contaminated during the development of atomic weapons and its relatively small size (the fiscal year 2010 budget request for the Corps included 24 FUSRAP projects). To understand the Corps’ budget formulation process and identify the information used to evaluate projects, we reviewed documentation related to the process. We examined the Corps’ Budget Engineer Circular used in formulation of the fiscal year 2011 budget request. This document guides the formulation of the budget within the Corps. We reviewed Corps construction project rankings from fiscal year 2006, the first year in which the Corps ranked construction projects using performance information, through fiscal year 2010, the most recent year from which ranking information was available at the time of our review. In addition, we reviewed records of the agency’s internal project performance reviews and documentation of the data collected as part of the budget formulation process.
We also interviewed Corps headquarters officials in the Program Integration Division, including those responsible for budget formulation and execution, and officials at all eight U.S. division offices. In our interviews with division officials we used a common set of questions that focused on officials’ perspectives on the effects at the division level of performance-based budgeting, as compared to the previous budget formulation process. To examine the effects of the Corps’ budget formulation process, we also analyzed Corps budget and project data from fiscal years 2001 through 2010, the 5 years before and the 5 years after the implementation of performance-based budgeting. We did not review in detail the fiscal year 2011 budget for the Corps, as it was released after our audit work concluded, though we did examine it for key changes from the previous year. We reviewed the metrics and measures used to rank Corps projects and how they have changed since fiscal year 2006. We examined Corps guidance on calculating the benefit-cost ratio (BCR) of projects and interviewed Corps officials about the BCR and other metrics used in budgeting and the related effects on the budget request. In reviewing budgeting metrics and rankings, we did not evaluate the accuracy of the Corps’ calculations for BCR or other metrics. Our recommendation related to the budget process is based on previous GAO work which identified leading practices for performance-based budgeting. To evaluate how the President’s budget request for the Corps is presented, we reviewed budget presentation materials, including the President’s budgets and appendices and the budget justifications and Press Books from fiscal years 2006 through 2010. We also reviewed the Corps’ Five Year Development Plans for fiscal years 2007 through 2011, 2008 through 2012, and 2009 through 2013. We reviewed past GAO work on best practices of performance-based budgeting and examples of budget presentations for other agencies. To obtain input from users of the budget presentation for the Corps, we interviewed staff from relevant congressional committees. We also reviewed appropriations committee reports from fiscal years 2005 through 2010. We interviewed Corps and Office of Management and Budget (OMB) officials about the reasons for the structure and information provided in the budget presentation, and about the feasibility of making specific changes to it. Our recommendations related to budget presentation are based on information from users of the budget presentation, as well as previous GAO work on performance-based budgeting. We conducted this performance audit from March 2009 to March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to a U.S. Army Corps of Engineers (Corps) official, for the Construction account, projects are systematically classified into established categories and headquarters officials use specific metrics, outlined in the Budget Engineer Circular, to rank projects within these categories. 
Corps documentation shows that construction projects are ranked within seven categories: (1) dam safety assurance, seepage control, and static instability correction projects; (2) projects with mitigation or environmental requirements; (3) projects with substantial life-saving benefits; (4) high-performing ongoing projects; (5) high-performing new start projects; (6) qualifying ongoing projects with continuing contracts; and (7) projects scheduled to be completed in the fiscal year of the budget request. The primary metrics that are to be used to rank projects within each of these categories are listed in table 2, along with a breakdown of funding by project type in the fiscal year 2010 budget request for the Corps. However, according to Corps officials, the metrics alone do not always determine the priority given to a project in the budget request, as varying degrees of professional judgment are involved in ranking individual projects. For example, high-performing projects (excluding those related to ecosystem restoration) are ranked primarily using the benefit-cost ratio (BCR). Once a project’s BCR has been calculated, Corps officials have minimal discretion because, according to Corps and Office of Management and Budget (OMB) staff, OMB establishes minimum BCR thresholds and projects that do not meet the threshold cannot be included in the budget request in the high-performing category. On the other hand, while some metrics are applied to rankings of ecosystem restoration projects, a Corps official described these as more subjective. For example, a greater amount of professional judgment is used in evaluating the significance of one habitat against others. A Corps headquarters official told us that headquarters officials largely evaluate construction projects across categories on a case-by-case basis. Although performance-based budgeting has made ranking projects within categories more systematic, the Corps official added that professional judgment is still needed to compare projects across categories. For example, while formal written guidance documenting priorities across categories does not exist, dam safety projects are generally the highest priority among the project categories because these dams are already built and need to be maintained to provide continued protection to people living in the area. This is supported by Corps ranking data from the past 5 fiscal years, as the highest priority dam projects have generally been budgeted for the maximum amount of funding that the Corps determines can be effectively used. According to our analysis of Corps data, in most years since performance-based budgeting was begun, the funding requested for dam safety projects has been among the highest of the construction categories. In addition, the agency requests enough funding for projects with environmental or mitigation requirements to meet annual targets laid out in environmental plans. Other than dam safety and projects with environmental requirements, the Corps official could not generalize about the relative priority or level of funding requested for the remaining project categories, noting that they are decided on a case-by-case basis. A Corps official told us that administration priorities influence budget formulation, and may be communicated to the Corps through OMB’s written feedback on the budget submission or in a letter from the Assistant Secretary of the Army for Civil Works. 
For the Investigations account, information used to make budgetary decisions varies depending on the phase of the project—reconnaissance study, feasibility study, and preconstruction engineering and design. In the first two phases, a Corps headquarters official told us that the Corps relies primarily on professional judgment and other factors, but that by the last phase, data are available to guide decision making. More specifically, the first phase of an investigation is a reconnaissance study, which is conducted to understand the nature of a water resources problem and determine the federal government’s interest. He also stated that, to determine whether a potential project warrants a reconnaissance study, headquarters business line managers meet with the Chief of Budget Development to discuss the merits of conducting the study. They make funding decisions based on a narrative description of the proposed study. At this point in the process, since the study is still prospective and there is no performance information available, agency officials rely primarily on their professional judgment, as they did prior to the use of performance-based budgeting. If the Corps determines through the reconnaissance study that there is a federal interest, and local sponsors are available, as required by law, a feasibility study is conducted. This type of study is done to formulate and recommend specific solutions to a water resources problem. At the end of the feasibility study phase, performance information, such as BCR and returns for the environment, is available to inform decisions about which projects will move on to the final phase of the investigation, preconstruction engineering and design. Corps officials consider the same metrics used to evaluate construction projects, since the purpose of this phase is to determine whether a project should be authorized for construction. For the Operation and Maintenance (O&M) account, the divisions have a greater role in selecting projects in certain funding increments. Although the budget formulation process for the O&M account is less centralized than it is for Construction and Investigations, Corps headquarters and division officials described how the process is more centralized than it was prior to the introduction of performance-based budgeting, when the divisions could largely distribute funding as they saw fit. Corps officials noted that for increments 1 and 2, the highest-priority increments, Corps division officials identify critical projects, equaling up to 75 percent of the average of their previous 5 years’ budget requests. According to the Budget Engineer Circular (Budget EC), the first increment should represent critical routine projects, meaning projects that are done every year or on a cyclical basis, or projects that are required in order to meet legal and environmental requirements or for historic preservation. For example, the ongoing operation of a powerhouse and the biennial dredging of a channel could be included in this increment. The Budget EC notes that the second increment should also represent critical projects, though these do not take place on a regular basis. An example of this would be the replacement of a potable water well or a broken gate on a lock. In addition, a Corps official stated that business line managers at Corps headquarters provide oversight to ensure the divisions include projects in the first two increments that reflect Corps-wide priorities.
They read the divisions’ narrative descriptions of how they plan to use the requested funding and what the consequences would be if the projects did not receive the funding. In addition, as stated in the Budget EC, the projects in increment 3—equaling up to 25 percent of the average of the previous 5 years’ budget requests for each division—are also considered critical, but are of lower priority than the first two increments (the increment arithmetic is illustrated in the sketch below). A Corps official noted that headquarters officials play a greater role by evaluating increment 3 projects across divisions to determine which projects will be included in the ceiling level of funding. Unlike increments 1 and 2, in which the divisions can generally be assured of a certain level of funding, some increment 3 projects may be funded only if the Corps receives more than the ceiling level of funding. Finally, increments 4 and 5 are lower-priority projects above the ceiling level of funding. The Corps’ Budget EC provides detailed guidance to divisions on the metrics that should be considered to determine which O&M projects and activities receive priority. Imminent risk to human life, court mandates, strategic importance to the Department of Defense, and the amount of commercial tonnage transported on a waterway are among the factors that would give a waterway higher priority status. For example, the Budget EC provides specific tonnage ranges to assess the relative levels of commerce on particular waterways. Thus, all else being equal, a project that is critical to the operation of a waterway with a high level of commercial tonnage will be given priority over a project that is equally critical to the operation of a waterway with a low level of commercial tonnage. The Budget EC also specifies that, even if waterways do not support a high level of commercial tonnage, they can be included in the budget request if they support significant commercial fishing and public transportation, or are subsistence harbors, which local communities depend on for survival, or harbors of refuge, which are protected from heavy seas. Although the Budget EC provides guidance on the metrics that should be considered in determining which projects and activities receive priority, the Corps does not have formal guidance for making trade-off decisions while formulating the budget across the Construction, Investigations, O&M, and other accounts. According to a Corps official, however, the agency does have an informal process for making these trade-off decisions. First, the Corps headquarters business line managers and the Chief of Budget Development meet to consider the construction and investigations projects in the prior year’s budget request. The goal is to maintain continuity of ongoing projects, provided they still meet the performance criteria, so these projects’ minimum needs are met. Next, the managers of the nonproject-based accounts and the Chief of Budget Development consider these accounts, including General Expenses and the Regulatory Program, and determine what it would take to maintain the existing level of service. Then, with the remaining funds, headquarters business line managers and the Chief of Budget Development consider O&M projects above increments 1 and 2, since these initial increments are included in the ceiling level of funding. Finally, they consider high-performing new start construction projects after the ceiling level of funding has been reached.
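To make the increment ceilings described above concrete, here is a minimal sketch (in Python) of the arithmetic; the dollar figures are hypothetical and the actual process involves headquarters review and professional judgment.

```python
# Hypothetical illustration of the O&M increment ceilings described above.
# A division's increments 1 and 2 together may equal up to 75 percent of the
# average of its previous 5 years' budget requests; increment 3 may equal up
# to another 25 percent of that average.

prior_requests = [2.00, 2.10, 2.20, 2.30, 2.40]  # hypothetical, $ billions

five_year_avg = sum(prior_requests) / len(prior_requests)  # 2.20

increments_1_and_2_ceiling = 0.75 * five_year_avg  # critical projects: 1.65
increment_3_ceiling = 0.25 * five_year_avg         # lower-priority critical: 0.55

print(f"5-year average request: ${five_year_avg:.2f}B")
print(f"Increments 1 and 2 (up to 75%): ${increments_1_and_2_ceiling:.2f}B")
print(f"Increment 3 (up to 25%):        ${increment_3_ceiling:.2f}B")
```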
Benefit-cost ratio (BCR) is calculated differently for various types of projects, but generally represents the value of damages avoided as a result of constructing a project, divided by the life-cycle cost of the project for the U.S. Army Corps of Engineers (Corps). Table 3 summarizes the primary inputs used to calculate benefits. Table 4 shows a simplified example of how BCR would be calculated for two alternative construction projects aimed at reducing the transportation costs to users of a channel. The first example, channel deepening, would generate benefits due to several factors. First, a deeper channel would accommodate larger vessels, which are more efficient and have a lower per-unit cost. Additionally, vessels sometimes have to wait for tidal changes so that there is sufficient channel depth. Deepening the channel reduces or eliminates the need to do this and thus creates savings. Finally, if a channel is not deep enough to accommodate a vessel, the cargo must sometimes be transferred to a vessel with a shallower draft. If deepening reduces or eliminates this need, cargo handling savings are created. The second example, channel widening, would generate benefits due to reductions in vessel delays. This would occur if the widening allowed more vessels to use the channel at one time. For example, the channel might currently only permit one-way vessel traffic, but the widening would allow two-way traffic. The reduction in delays generates savings. Similarly, sometimes weather-related factors such as fog require wider channels. If a wider channel permits increased vessel traffic during foggy conditions, savings are also generated. According to a Corps official, channel widening projects are typically less expensive than channel deepening projects, though their benefits also tend to be lower. (A simplified calculation comparing two such alternatives appears in the sketch at the end of this section.) The minimum BCR has been higher for new start construction projects than for ongoing projects, reflecting the administration’s preference for fewer new start projects. Prior to fiscal year 2008, a different measure was used instead of BCR as the primary economic metric. Table 5 shows the changes in the BCR requirements over the past 5 years. Office of Management and Budget (OMB) staff stated they recommended changing the measure to create more stability. Nevertheless, according to Corps division officials, there is still uncertainty about whether particular projects will be included in the budget request. Since, according to Corps and OMB staff, the BCR threshold set by OMB can change from year to year, a project may meet the BCR threshold 1 year but fail to meet it in future years, making it difficult for the Corps to make long-term commitments. For example, one division cited a hydropower plant that had been funded since 2005, but was not included in the President’s 2010 budget request because it had a BCR of 1.7 and the BCR threshold for ongoing projects that year was 2.5. Officials at another division estimated that three to four projects in their jurisdiction had been put on hold since the introduction of performance-based budgeting because they could not meet the BCR threshold. Officials at some divisions told us that this uncertainty and the failure of a project to be budgeted has negatively affected the Corps’ relationship with local sponsors. Some division officials also told us that this increased uncertainty has made workforce planning more challenging.
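Referring back to the channel deepening and widening example above, here is a back-of-the-envelope sketch (in Python) of how the two alternatives’ BCRs might compare; all dollar figures are hypothetical and do not reflect actual Corps estimates.

```python
# Hypothetical BCR comparison of two channel alternatives, mirroring the
# simplified example described above. BCR here is annualized benefits
# (transportation cost savings) divided by annualized life-cycle cost.

def bcr(annual_benefits, annual_cost):
    return annual_benefits / annual_cost

# Channel deepening: savings from larger vessels, reduced tidal delays,
# and avoided cargo transfers to shallower-draft vessels. (Hypothetical $M.)
deepening_benefits = 13.0 + 3.0 + 2.0
deepening_cost = 10.0

# Channel widening: savings from reduced vessel delays (e.g., two-way
# traffic, fewer fog-related restrictions). Typically cheaper, with
# lower benefits. (Hypothetical $M.)
widening_benefits = 6.0
widening_cost = 4.0

print(f"Deepening BCR: {bcr(deepening_benefits, deepening_cost):.2f}")  # 1.80
print(f"Widening BCR:  {bcr(widening_benefits, widening_cost):.2f}")   # 1.50
```

Under these assumed figures, the deepening alternative has the higher BCR even though it costs more, which is consistent with the Corps official’s observation that widening projects tend to have both lower costs and lower benefits.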
Over the past decade, the number of projects included in the budget request for the U.S. Army Corps of Engineers (Corps) has varied. The number of construction projects has in general decreased, though it has been more stable in recent years, as shown in figure 4. The number of investigations projects included in the budget request has followed a trend similar to that of the Construction account, though the degree of the decrease over time has been greater, as shown in figure 5. Compared to the Construction and Investigations accounts, the Operation and Maintenance (O&M) account has been relatively stable, as shown in figure 6. 1. While we agree that the Corps’ current processes may incorporate project review findings, we continue to believe that establishing a documented process for the use of such information in the Corps’ budget formulation would ensure that the Corps routinely makes the best use of all available information. Additionally, having a documented process would improve understanding of how information from the review boards shapes program priorities and affects decision making. Moreover, our report discusses the Corps’ use of information on project progress, such as whether schedule and budgetary milestones are being met, through review boards at the district and division levels. However, according to Corps officials, this review board information affects funding decisions on a case-by-case rather than a routine basis. 2. We have clarified in our report that we agree the Corps’ budget formulation process for the O&M account reflects actual performance. Nonetheless, we continue to believe that the overall emphasis of the Corps’ budget process is on anticipated rather than demonstrated performance. In addition to the individuals listed above, Carol M. Henn, Assistant Director; Vondalee R. Hunt, Assistant Director; Kathleen Padulchick; and Kelly A. Richburg made significant contributions to this report. Joshua Archambault, Virginia Chanley, Robert L. Gebhart, Chelsa Gurkin, Felicia Lopez, and Vasiliki Theodoropoulos also made key contributions.
The U.S. Army Corps of Engineers (Corps) is the world's largest public engineering, design, and construction management agency. In fiscal year 2006 it began incorporating performance information into its budget process, but Congress raised concerns that the criteria used by the Corps to prioritize projects are not transparent and that the budget formulation process could achieve a higher return on investment. GAO was asked to (1) describe the information the Corps uses in its budget formulation process and the implications of the process, and (2) evaluate whether the President's recent budget requests for the Corps are presented so that agency priorities are clear and the proposed use of funds transparent. GAO reviewed the Corps' internal budget guidance, documentation of its project rankings and budget formulation process, performance review materials, and budget presentation materials. GAO also interviewed Corps and Office of Management and Budget officials. With the introduction of performance-based budgeting in fiscal year 2006, the Corps began emphasizing projects with the highest anticipated returns on investment. Previously, Corps division officials sought to provide continued funding to all ongoing projects that fit within administration guidelines. Now, under the current process, Corps headquarters plays an increased role in selecting projects and evaluates projects using certain performance metrics. The Corps gives priority to those projects with the highest anticipated returns for the economy and the environment, as well as those that reduce risk to human life. The Corps' use of performance metrics makes projects in certain geographic areas more likely to be included in the budget request. For example, the benefit-cost ratio, a measure of economic benefit that is used to rank certain projects, tends to favor areas with high property values. Another effect of the Corps' use of performance-based budgeting is that fewer construction and investigations projects—studies to determine whether the Corps should initiate construction projects—have been included in the budget request in recent years. In contrast, the number of projects in the Operation and Maintenance account has been relatively stable, which Corps officials attributed partially to the account's emphasis on routine activities. While the metrics used by the Corps in its budget formulation process focus on anticipated benefits, the Corps monitors the progress of ongoing projects through review boards at the headquarters, division, and district levels. However, the Corps does not have written guidance establishing a process for incorporating information on demonstrated performance, such as review board findings, into budget formulation decisions. In the absence of such a process, the Corps may miss opportunities to make the best use of this performance information. The budget presentation for the Corps lacks transparency on key elements of the budget request. It focuses on requested construction and investigations projects, but does not describe how the decisions made during the budget formulation process affected the budget request. For example, the budget presentation does not include an explanation of the relative priority given to project categories or how they are evaluated against each other.
Also, while the number of construction and investigations projects receiving appropriations is typically much greater than the number requested, the budget presentation does not include detailed information on all projects with continuing resource needs. The budget presentation also lacks detail on the balance of unobligated appropriations (carryover) that remains available for each project. Users of the budget presentation told GAO that these two types of project information would be useful.
According to DOD, there were 287 aircraft in the OSA fleet as of May 2017. All OSA aircraft are military variants of commercial aircraft. (See appendix IV for more information and images of these aircraft.) Table 1 lists the number of OSA aircraft by DOD owner/operator and type of usage. A majority of the executive aircraft are located at Joint Base Andrews, Maryland, and a small number are located overseas, as shown in table 2. Thirteen of the 44 executive aircraft are designated as service secretary controlled aircraft. The primary mission for these 13 aircraft is to transport the military departments’ Secretaries, Chiefs of Staff, and other senior officials such as the Undersecretaries and Vice Chiefs of Staff. Service secretary controlled aircraft also support travel for members of Congress and for White House support missions, including for cabinet-level officials. In addition, 9 of the 44 executive aircraft are designated for use by the Commanders of the Combatant Commands. As of May 2017, the Army owned and operated 121 of the 243 nonexecutive aircraft. U.S. Africa Command leased one aircraft, and the remaining 121 aircraft belonged to the U.S. Special Operations Command, Marine Corps, Navy, and Air Force, which respectively owned 21, 25, 33, and 42 aircraft. Of the Air Force’s 42 nonexecutive aircraft, 18 were designated for Defense Intelligence Agency or Defense Security Cooperation Agency support to overseas locations. Multiple officials have responsibilities for approving the use of government aircraft and air travel. Specifically, the Secretaries of the Military Departments, the Chairman of the Joint Chiefs of Staff, and Combatant Commanders review and approve requests within their respective commands. In addition, the Office of the Secretary of Defense Executive Secretary and the Assistant Secretary of Defense for Legislative Affairs prioritize and approve requests within their approval authorities. Table 3 summarizes the responsibilities for approving requests for use of government aircraft. DOD guidance sets clear priorities for the use of its aircraft to support officials in certain positions within the department. Specifically, the guidance lists 26 required DOD users and 35 authorized DOD users of government aircraft, who are categorized into four tiers. DOD’s highest priority (tier one) travelers are required to use government aircraft for both official and unofficial travel, while tier two travelers are required to use government aircraft only for official travel. The Secretary of Defense prioritizes tier one and tier two officials as travelers who are required to use government aircraft because there is a continuous requirement for secure communications; a threat exists that could endanger lives; or there is a need to satisfy exceptional scheduling requirements that make commercial transportation unacceptable. DOD’s tier three and four travelers are not required to use government aircraft, but are authorized to use the aircraft for official travel when the demands of their travel prevent the use of commercial aircraft. DOD’s aircraft are also used to support employees and members of Congress and White House support missions—a pool that could total over 550 users. Table 4 lists the required and authorized users of DOD’s aircraft. White House support missions are trips provided by DOD and directed by the President, such as travel for cabinet-level officials, the Vice President, and the First Lady.
DOD guidance supports travel for members and employees of Congress when approved by the Assistant Secretary of Defense for Legislative Affairs. DOD supports travel for congressional users when the purpose of travel is related to DOD programs or activities. DOD guidance does not support travel for congressional users if a commercial flight is able to meet the users’ departure and arrival requirements within a 24-hour period. However, if the trip includes unusual circumstances, such as a clear and present danger or other compelling operational considerations that make commercial transportation unacceptable, then congressional users may use military aircraft. During calendar years 2014 through 2015, government and military officials took more than 19,000 flights on OSA executive aircraft, and our review of a nongeneralizable, random sample of flight packages from calendar years 2014 and 2015 found that DOD generally followed its guidance for approving use of these aircraft for these selected flights. We analyzed the data for executive flights conducted during calendar years 2014 and 2015, and found that there were a total of 19,752 flights. In both calendar years most of the flights were flown by authorized, but not required, users, as shown in table 5. During calendar years 2014 and 2015, DOD’s four tier one required users accounted for 4 percent of the total executive flights. Its 22 required tier two users accounted for another 27 percent of the total. The remaining 69 percent of the flights were taken by the hundreds of personnel who were authorized, but not required, to use DOD’s executive aircraft. As indicated in table 5, 29 percent of the total executive flight data was categorized in non-specific terms such as “below tier 2 user.” Consequently, it is not possible to provide exact percentages of flight usage for each of the different authorized user categories. However, based on the 2014 and 2015 authorized user flight data that was specifically categorized, the following users accounted for at least these percentages of the total flights: tier-three users, 21 percent; tier-four users, 1 percent; White House support mission users, 12 percent; and congressional delegations, 5 percent. According to DOD officials, most of the non-specific flights were for tier-three-and-four users, but some of the flights were for White House support missions or congressional delegations. Additional analyses of the calendar year 2014 and 2015 flight data showed that usage rates varied both within and across months and years. For example, in calendar year 2014, the number of daily flights ranged from a low of 2 flights on December 30, 2014, to a high of 70 flights on December 8, 2014. In calendar year 2015, daily flights ranged from a low of 2 flights on April 5, 2015, to a high of 51 flights, which occurred on both March 31 and December 18, 2015. During calendar years 2014 and 2015, the executive flights went to over 1,000 locations, and the ten most visited destinations accounted for about 40 percent of those flights. Those locations included Joint Base Andrews in Maryland; Ramstein Air Base in Germany; MacDill Air Force Base in Florida; Scott Air Force Base in Illinois; and Stuttgart Airport in Germany. DOD officials told us that they often see high demand for executive aircraft to support congressional users during congressional recess periods.
We found that users in all categories, including congressional, White House support missions, as well as required DOD and all other authorized users, flew on most days during the 2014 and 2015 two-week spring and four- to five-week summer congressional recess periods. However, when we analyzed the total numbers of daily flights, we found that most of the days with 40 or more executive flights occurred outside the congressional recess periods in calendar years 2014 and 2015; only one such day, during the 2015 spring recess, occurred within those periods. In total, during calendar year 2014, there were 71 days when OSA aircraft flew 40 or more flights, and in 2015 there were 23 days with 40 or more flights. DOD officials told us that they generally prioritize White House support missions and congressional users over tier three or four authorized users. However, at times they are unable to accommodate congressional travelers due to the number of participants and the distance of travel required. DOD officials stated that congressional requests often include larger groups and overseas travel; however, there are a limited number of executive aircraft that can accommodate larger groups and fly long distances. DOD guidance specifies that large executive aircraft (i.e., those capable of carrying 15 or more passengers) will be approved only for groups of 5 or more members of Congress. The Air Force has two types of large executive aircraft (C-40s and C-32s). In calendar years 2014 and 2015, we found that 76 percent of the congressional user flights used C-40 executive aircraft, which seat up to 36 passengers and can fly 5,000 nautical miles without refueling. In 2007, DOD established the Executive Airlift Scheduling Activity to facilitate sharing of executive aircraft among the military services and combatant commands when requests exceed capacity. Multiple officials from the services and DOD’s components told us that this sharing approach generally works, and that services usually agree to use their service secretary controlled aircraft to fly officials outside of their service, when asked. Our analysis showed that service secretary controlled aircraft accounted for 13 percent of the executive flights in 2014, and 15 percent of the executive flights in 2015. Table 6 shows that while service secretary controlled aircraft generally flew users from within their associated service, approximately 18 percent of service secretary controlled flights supported users from outside their service.

DOD Generally Followed Its Approval Guidance for a Select Set of Executive Flights Reviewed

We analyzed flight request packages from a nongeneralizable, random sample of 53 executive flights taken during calendar years 2014 through 2015 and, consistent with a report we issued in 2014, we found that for these select flights DOD generally followed its guidance for approving executive aircraft use. The Secretaries of the Military Departments and Combatant Commanders review and approve OSA aircraft requests within their respective departments and commands. The Air Force Deputy Chief of Staff for Operations, Plans and Requirements is also responsible for scheduling congressional and White House support missions. DOD guidance defines a list of procedures for approving the use of OSA. As shown in figure 1, DOD guidance requires each OSA flight request package to contain specific information, such as the name and rank of the traveler, itinerary, cost comparison if needed, and appropriate signatures.
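To illustrate the kind of completeness check implied by these package requirements, here is a minimal sketch (in Python); the field names and example values are hypothetical, and the actual required contents are defined in DOD guidance.

```python
# Hypothetical completeness check for an OSA flight request package,
# based on the required items described above (traveler name and rank,
# itinerary, cost comparison if needed, and appropriate signatures).
# Field names are illustrative, not from DOD guidance.

REQUIRED_FIELDS = ["traveler_name_and_rank", "itinerary", "signed_request",
                   "purpose_of_travel", "senior_official_signature"]

# Authorized (non-required) users' packages need additional justification.
AUTHORIZED_USER_FIELDS = ["commercial_cost_comparison",
                          "scheduling_justification"]

def missing_items(package, authorized_user=False):
    """Return the list of required items absent from a flight package."""
    required = list(REQUIRED_FIELDS)
    if authorized_user:
        required += AUTHORIZED_USER_FIELDS
    return [f for f in required if not package.get(f)]

package = {"traveler_name_and_rank": "Jane Doe, Maj Gen",  # hypothetical
           "itinerary": "JBA -> Ramstein -> JBA",
           "signed_request": True,
           "purpose_of_travel": "official",
           "senior_official_signature": True}

print(missing_items(package, authorized_user=True))
# ['commercial_cost_comparison', 'scheduling_justification']
```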
Although some packages were missing items, we did not find evidence to suggest the requested flights should have been disapproved. Specifically, 51 of the 53 flight request packages listed the names and titles or ranks of travelers, and 43 of the 53 flight request packages included a signed request. In addition, 42 of the 53 flight request packages included the purpose of travel, and 40 of the 53 flight request packages included the senior DOD traveling official's signature certifying use of the aircraft. We also found that 24 of the 53 flight request packages were for authorized users. The request packages for authorized users are required to include additional information, and we found that most of the packages included most of that information. For example, all of the 24 packages included the military department or agency of travelers, and 20 of 24 packages included a statement that the travel requirements of DOD guidance had been met. In addition, 17 of 24 packages included documentation such as an explanation as to why scheduling requirements could not be changed to permit the use of commercial air, and a justification that included a statement of commercial air costs. We discussed any items that were missing with DOD officials and did not find evidence to suggest the requested flights should have been disapproved. In recent years, DOD has implemented a consistent process to validate the size of its OSA fleet. The process results in a general determination of the sufficiency of the OSA inventory, and the 2015 and 2016 determinations were expressed in terms of risks to mission accomplishment. The services do not generally use the validation process determinations as a basis for their OSA aircraft procurement and divestment decisions. In 2011, guided by a memorandum from the Vice Chairman of the Joint Chiefs of Staff and DOD Instruction 4500.43, Operational Support Airlift, DOD began to implement a structured, repeatable approach to validate its OSA fleet on an annual basis, to comply with Office of Management and Budget guidance. The Vice Chairman's memorandum established working and steering groups to provide input and oversight to the process, and the instruction laid out many of the details of the new fleet validation process. The instruction assigned DOD and service officials a variety of responsibilities with regard to OSA aircraft. For example, in addressing the need to gain efficiencies by sharing aircraft and flight data across the department, it specifically instructed each of the military department secretaries and the combatant commanders to budget for the costs of their OSA aircraft and to manage those aircraft as required to maximize wartime readiness, efficiency, cost-effectiveness, and peacetime utilization. DOD Instruction 4500.43 noted that DOD is required to conduct validations of its OSA aircraft inventory and requirements to determine the sufficiency of the fleet, and it instructed the Chairman of the Joint Chiefs of Staff to conduct an annual OSA aircraft review and to provide the results to the Secretary of Defense. The instruction lists a wide range of requirements that the fleet validation is to be based upon. These include: peacetime engagement and support; travel for members of Congress; travel for DOD's required-use travelers; and a range of wartime requirements associated with contingency scenarios, specific contingency plans and concepts of operation, steady-state campaigns, posture planning efforts, and general and direct support.
The OSA validation process begins when the Joint Staff and an independent contractor collect and analyze data from the services and combatant commands. The results of the analysis are then presented to the working and steering groups, and may go through additional reviews before the process concludes with a memorandum from the Chairman of the Joint Chiefs of Staff to the Secretary of Defense, which validates the fleet and addresses risk. Figure 2 shows the full extent of the process, and shows that some steps can be omitted if there are no disagreements or contentious issues to resolve. In recent years, the OSA validation process has resulted in a general determination of the sufficiency of the OSA fleet to meet requirements. This determination has been reported annually in memorandums from the Chairman of the Joint Chiefs of Staff to the Secretary of Defense. Throughout the validation process, the OSA aircraft inventory is compared to a broad set of requirements. Some of the anticipated future requirements are estimated based on historical aircraft usage rates. For example, executive aircraft requirements are estimated based on the past usage of executive aircraft by DOD's required users, congressional users, and White House directed travelers. Because the usage of these travelers can vary from year to year, the Chairmen's memorandums refer to the requirements as estimates and, consequently, address the ability of the OSA fleet to meet these requirements in general terms. Table 7 shows the size of the OSA fleet, which has declined each year since 2013, along with the Chairmen's assessments. The Chairman's validation memorandums for 2015 and 2016 expressed the sufficiency of the OSA fleet in terms of risk, based on a risk matrix developed by the OSA working group in 2014. The OSA risk matrix categorizes risks as low, moderate, significant, or high based on the percentage of days DOD expects to be able to conduct all its required missions. In table 8, we have converted the working group's risk-level percentages to numbers of days in a year. The Chairman's 2016 OSA fleet validation memorandum indicated that with a fleet of 45 executive aircraft, the risk to mission accomplishment was moderate, and the mission risk for the nonexecutive fleet of 256 aircraft was low. Based on the risk matrix, this means DOD should be able to meet all of the flight requests for tier-one and tier-two users, White House directed travelers, and congressional members and delegations on between 336 and 343 days of the year. Stated more simply, those users can expect a shortage of available executive aircraft to affect some travel plans on between 22 and 29 days of the year. Similarly, DOD expects that its nonexecutive aircraft may be unable to meet some mission requirements on up to 22 days of the year. The Chairmen's validation memorandums are generally not used to support OSA aircraft procurement or divestment decisions. As previously noted, OSA guidance instructs the military department secretaries and the combatant commanders to budget for the costs of their OSA aircraft and to manage those aircraft as required to maximize wartime readiness, efficiency, cost-effectiveness, and peacetime utilization. Service officials told us their decisions to divest or replace OSA aircraft are generally made based on internal service assessments concerning the age and maintenance condition of the aircraft, and the need to balance OSA aircraft requirements against other service priorities.
For example, Navy and Army officials said that they retired C-20 aircraft because the aircraft were old and expensive to maintain. On April 8, 2014, when General Dempsey issued the first annual OSA validation memorandum, he included an attachment that showed that the services and U.S. Special Operations Command had programmed reductions of 68 aircraft, and he recommended the same reduction in the size of the OSA fleet (from 344 to 276 aircraft) through fiscal year 2019. The subsequent validation memorandums, for the 2014, 2015, and 2016 fleets, did not contain any specific recommendations for force structure changes, but each memorandum noted that the services were continuing to identify efficiencies and program reductions in their OSA fleets. We are not making recommendations in this report. DOD provided technical comments on a draft of this report, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Military Departments; the Commandant of the Marine Corps; the Commander of the United States Transportation Command; and other interested parties. The report is also available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or merrittz@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The Department of Defense (DOD) defines an aircraft mishap as an event resulting in death, injury, illness, or property damage. DOD guidance defines four categories or classes of mishaps according to severity: Class A: Mishap resulted in a fatality, a permanent total disability, damage equal to or greater than $2 million, or a destroyed aircraft. Class B: Mishap resulted in a permanent partial disability, damage equal to or greater than $500,000 but less than $2 million, or hospitalization for inpatient care of three or more individuals (not including observation or diagnostic care). Class C: Mishap resulted in a nonfatal injury or occupational illness that caused loss of one or more days from work, not including the day or shift on which it occurred, or damage equal to or greater than $50,000 but less than $500,000. Class D: Mishap resulted in a recordable injury or illness not otherwise classified as Class A, B, or C, or damage equal to or greater than $20,000 but less than $50,000. While any reportable mishap is, by definition, a matter of concern, the rate of mishaps per flight (or sortie) is low for OSA aircraft. Complete historical flight data are not available for the OSA fleet. However, a calculation based on an extrapolation that uses the average sortie numbers from the 2016 OSA validation process analysis would yield a rate of 7 mishaps for every 10,000 sorties. Furthermore, DOD classifies its class A and B mishaps as its serious mishaps, and these mishaps accounted for 22 of the 174 total OSA mishaps from fiscal years 2007 through 2016. See table 9 for the complete mishap data for that period.
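The class definitions above reduce, on the property-damage side, to a simple threshold rule, and the mishap rate is straightforward arithmetic. The sketch below is ours, not DOD's: it applies only the damage thresholds (the injury and fatality criteria are omitted), and the sortie total in the rate example is a hypothetical stand-in for the validation-process average.

```python
# Minimal sketch of the DOD mishap class thresholds quoted above,
# by property damage only (injury/fatality criteria omitted).
def mishap_class_by_damage(damage_dollars: float) -> str:
    if damage_dollars >= 2_000_000:
        return "A"
    if damage_dollars >= 500_000:
        return "B"
    if damage_dollars >= 50_000:
        return "C"
    if damage_dollars >= 20_000:
        return "D"
    return "below class D damage threshold"

def mishaps_per_10k_sorties(mishaps: int, sorties: int) -> float:
    """Rate arithmetic behind the extrapolation described above."""
    return mishaps / sorties * 10_000

print(mishap_class_by_damage(750_000))  # B
# Illustrative only: 174 mishaps over fiscal years 2007-2016 against a
# hypothetical 250,000 total sorties yields roughly the 7-per-10,000
# rate cited above.
print(round(mishaps_per_10k_sorties(174, 250_000)))  # 7
```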
Appendix II: Information on Maintenance of Operational Support Airlift (OSA) Based on our interviews with service maintenance officials and analysis of service maintenance data, we found that: Contractors perform almost all executive and nonexecutive OSA aircraft maintenance, including most organizational-level and depot-level maintenance. In some instances, one contract covers aircraft from more than one service. For example, the Air Force manages the support contract for the Air Force, Army, and Navy C-37 aircraft. All the services require that maintenance for these aircraft comply with Federal Aviation Administration standards for similar types of commercial aircraft. Many of the OSA aircraft are more than 20 years old. Upcoming depot maintenance and modification periods will reduce the availability of 2 or 3 C-32 and 2 or 3 C-40B executive aircraft. According to Air Force officials, various capability upgrades and modifications will be made so the planes can continue to meet Federal Aviation Administration standards and customer requirements. Each of the maintenance and modification actions is scheduled to take between 10 days and 9 months. In response to this situation, in June 2016, the Executive Secretary sent a memorandum to the military departments, the Assistant Secretary of Defense for Legislative Affairs, and others in DOD, with a copy to the Director of the White House Military Office, alerting them that availability of these larger capacity aircraft will be limited until 2018. The memorandum also asked the addressees to be prepared for flight cancellations due to short-notice, higher-priority missions and to always have commercial air transportation planned as a backup. As shown in tables 10 (Air Force), 11 (Army), and 12 (Navy), many OSA aircraft are over 20 years old and have availability or mission-capable rates around or above 70 percent. However, some OSA aircraft have lower availability rates, such as the Air Force C-37 (58 percent) and the Army C-37 (65 percent). To examine the extent to which DOD used executive aircraft, and the extent to which the usage for select flights complied with guidance, we identified and reviewed the guidance that DOD and its components have in place to approve the use of executive aircraft. Additionally, we interviewed officials from the Joint Staff, U.S. Transportation Command, the military services, the Office of the Secretary of Defense Executive Secretary, and the Office of the Assistant Secretary of Defense for Legislative Affairs and discussed their roles in approving the use of executive aircraft. Because there was no central source for current and historical executive flight data, we obtained the most current data available, calendar year 2014 and 2015 executive aircraft flight data, from both the Joint Staff and the military services. We then analyzed the data to identify the portions of the total flights that supported various categories of DOD and non-DOD travelers. We also analyzed the calendar year 2014 and 2015 flight data by type of aircraft used. We also analyzed a nongeneralizable, random sample of 53 executive OSA flights and compared the documentation in the flight request packages to the flight package documentation requirements listed in DOD's OSA guidance. The initial scope for the sample included all 19,752 flights conducted during calendar years 2014 through 2015. We received flight information from the services that indicated departure and arrival dates, departure and arrival locations, tiers of travelers, and service of travelers.
Based on this, we compiled a unique list of flights, organized by flight type (Joint Staff or service secretary aircraft) and tier (tiers one, two, three, and four; below tier 2; congressional delegation; White House; and other). We then created 9 sampling strata, as described in table 13, and distributed a sample of 100 flights proportionally across the strata; the sample was designed to achieve overall 95 percent confidence intervals within plus or minus 10 percentage points of an attribute estimate. In February 2017, we delivered the selected sample of 100 flights to the respective services with a request for the associated flight packages. However, we ultimately restricted our analyses to a select subset of 53 nongeneralizable flight packages for several reasons. Specifically, upon requesting flight packages, we learned that flight packages for certain types of flights, for example, congressional user and combatant command flights, were not collected by the services. As a result, we determined that the 53 flight packages we received could not be generalized to all flights in calendar years 2014 and 2015, because of the limited scope of the sampled flights with available packages and because of the small sample size, which would not provide precise estimates. Based on interviews with each of the services and the Joint Staff about the databases they use to enter and maintain flight data, we concluded that the data they provided were sufficiently reliable for the purposes of our reporting objectives. To examine the process DOD uses to validate its OSA fleet size and the extent to which the process results have influenced force structure decisions, we gathered documentation on DOD's annual OSA validation process. The documentation we reviewed included DOD guidance, meeting minutes, briefings, and a methodology paper. We also analyzed the results of the OSA fleet validation process. Since 2014, the Chairman of the Joint Chiefs of Staff has presented these results to the Secretary of Defense in an annual OSA fleet validation memorandum. We also interviewed officials from the military services, the Joint Staff, the U.S. Transportation Command, and a private contractor who supported the OSA validation process, to determine their roles in the annual validation process and to identify how the process results are used. We also discussed the basis for OSA force structure decisions with officials from the military services. We conducted this performance audit from June 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 14 through 22 show some key facts about DOD's different types of OSA aircraft, along with a picture of each type of aircraft. In addition to the contact named above, the following staff members made key contributions to this report: Michael Ferren, Assistant Director; Brenda M. Waterfield; David M. Ballard; Vincent M. Buquicchio; Patricia F. Donahue; Amie Lesser; Marc Meyer; Dan Royer; Leigh Ann Sheffield; and Sonya L. Vartivarian.
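To illustrate the proportional allocation used in the sampling design described above, the sketch below distributes a sample of 100 flights across 9 strata in proportion to stratum size. The stratum names and population counts are hypothetical placeholders, not the actual 2014-2015 flight totals by type and tier.

```python
# Minimal sketch of proportional allocation across sampling strata.
# Stratum counts below are hypothetical, chosen only so they sum to
# the 19,752 flights in scope.
strata = {
    "joint_staff_tier1": 400, "joint_staff_tier2": 2600,
    "joint_staff_tier3": 2000, "joint_staff_below_tier2": 2800,
    "service_secretary": 2700, "congressional": 1000,
    "white_house": 2400, "tier4": 200, "other": 5652,
}
total = sum(strata.values())  # 19,752 in this illustration
sample_size = 100

# Allocate proportionally, rounding to whole flights per stratum.
# (In practice, rounding may require small adjustments so the
# allocations sum exactly to the target sample size.)
allocation = {name: round(sample_size * count / total)
              for name, count in strata.items()}
print(allocation)
print(sum(allocation.values()))  # 100 for these counts
```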
OSA missions support the movement of a limited number of high-priority passengers and cargo with time, place, or mission-sensitive requirements. DOD's OSA aircraft are variants of commercial aircraft. OSA aircraft are categorized as either executive (used to transport DOD, congressional, and cabinet officials) or non-executive (used to fulfill wartime or contingency needs). As of May 2017, DOD had 287 OSA aircraft (44 executive and 243 non-executive), about 6 percent of DOD's airlift/cargo/utility aircraft. House Report 114-537 and Senate Report 114-255 included provisions for GAO to review the use and size of the OSA fleet. This report examines the extent to which DOD (1) used OSA executive aircraft in 2014 and 2015, and if this usage complied with guidance; and (2) has a process to validate its OSA fleet size. GAO reviewed DOD guidance for approving the use of OSA aircraft, analyzed the most current executive aircraft flight data available (calendar years 2014 and 2015), and compared the approval documentation from a sample of those flights to DOD's guidance. GAO also reviewed documentation and interviewed officials to assess DOD's OSA validation process and results. In calendar years 2014 and 2015, government officials took thousands of flights on Operational Support Airlift (OSA) executive aircraft, and GAO's review of a nongeneralizable sample of 53 flight packages found that those trips generally followed Department of Defense (DOD) guidance for requesting the use of government aircraft. DOD requires its officials in certain positions to fly on military aircraft, including OSA executive aircraft. It also authorizes, but does not require, officials in other government positions to fly on OSA executive aircraft. GAO analyzed the use of OSA executive aircraft during 2014 and 2015, the latest years for which data were available, and found that of the 19,752 executive flights conducted, 31 percent supported required users and 69 percent supported other authorized users. The Vice President, the First Lady, and other cabinet-level officials on White House support mission trips accounted for about 12 percent of the flights, and members of Congress and congressional employees accounted for about 5 percent of the flights. DOD guidance requires documentation for each flight request, including the rank or position of the traveler, the itinerary, and, in some cases, cost data. While not generalizable beyond these flights, GAO's review of 53 flight request packages found that the packages generally contained most required documentation. Although some packages were missing items, GAO discussed those items with DOD officials and did not find evidence to suggest the requested flights should have been disapproved. In recent years, DOD has implemented a consistent process to validate the size of its OSA fleet and to assess the risk to the fleet's ability to meet requirements on all 365 days of the year. In 2016, for example, the risk to mission accomplishment was assessed as moderate for the executive fleet and as low for the non-executive fleet. The services do not generally use the validation process determinations as a basis for OSA aircraft procurement and divestment decisions. According to service officials, those decisions are based on separate, independent evaluations of their force structure needs, which consider the age and maintenance condition of their aircraft and the need to balance OSA aircraft requirements against other service priorities.
GAO is not making any recommendations in this report. DOD provided technical comments on a draft of this report, which GAO incorporated as appropriate.
STARS will replace controller workstations with new color displays, processors, and computer software at FAA and DOD terminal air traffic control facilities. (See fig. 1.) The total number of facilities scheduled to receive STARS has fluctuated between 70 and 190 because some of the facilities have received interim systems and may not get full STARS. FAA is designing STARS to provide a platform that allows easy and rapid incorporation of new hardware- and software-based tools to help improve controllers' productivity and make the nation's airspace safer and more efficiently managed. For each acquisition project that the agency undertakes, FAA officially estimates, or develops baselines for, the project's life-cycle costs, schedule, benefits, and performance in a formal document called the acquisition program baseline. This baseline, which is approved by the Joint Resources Council, FAA's acquisition decision-making body, is used to monitor a project's progress in these four areas. The initial acquisition plan for STARS was approved in March 1996, and in September 1996, FAA signed a contract with Raytheon Corporation to acquire STARS. The initial strategy for STARS included two phases: (1) initial system capability, which was to provide the same functions as the equipment in use at the time, and (2) final system capability, which was to implement new functions to help controllers move traffic more safely and efficiently. FAA's acquisition policy requires that projects follow a structured and disciplined test and evaluation process appropriate to the product or facility being tested. Typically, this process includes system testing and field familiarization testing. System testing usually includes development and operational, production, and site acceptance testing. Field familiarization testing includes system and software testing in an operational environment to verify operational readiness. Raytheon and FAA have already conducted a series of tests of the STARS software and plan to continue such testing. As problems arise during these tests, they are documented using program trouble reports (PTRs) and are classified from type 1, the most severe, to type 4, the least severe. FAA's policy defines each type. The policy states that type-1 PTRs prevent the accomplishment of an operational or mission-essential capability and could jeopardize safety and security. Type-2 PTRs adversely affect, but do not preclude, the performance of an operational or mission-essential capability, and a workaround solution is not available. Type-3 PTRs adversely affect, but do not preclude, the performance of an operational or mission-essential capability, and a workaround solution is available. Type-4 PTRs prevent or adversely affect the accomplishment of a nonessential capability and can be handled procedurally. FAA's contract with Raytheon calls for the contractor to correct all type-1 and type-2 PTRs and, as directed by the government, to correct type-3 and type-4 PTRs. The timing of the corrective action depends, in part, on the severity of the PTR and on its relevance to upcoming activities. From the project's inception until 2001, a multidisciplinary team oversaw STARS and was responsible for carrying out the acquisition strategy for implementing the project. In November 2000, FAA began formulating a new organization that would be responsible for all terminal modernization activities.
This new organization, the Terminal Business Service, was intended to move the agency from a project-driven to a point-of-service approach, which would address performance issues at each facility in an integrated fashion. This new organization is now responsible for the STARS program along with other projects for terminal facilities. The current STARS program is not the program that FAA contracted for in 1996. When FAA awarded the contract in September 1996, it estimated that STARS would cost $940 million and be implemented at 172 facilities by 2005. This estimate was based on acquiring STARS as a commercial off-the-shelf technology with limited development, since a version of this technology was already in use in other countries. In 1997, when FAA first introduced STARS, FAA controllers, who were accustomed to using the older equipment, began to voice concerns about computer-human interface issues that could hamper their ability to monitor air traffic. For example, the controllers noted that many features of the old equipment could be operated with knobs, allowing controllers to focus on the screen. By contrast, STARS was menu-driven and required the controllers to make several keystrokes and use a trackball, diverting their attention from the screen. The maintenance technicians also identified differences between STARS and its backup system that made it difficult to monitor the system. For example, the visual warning alarms and the color codes identifying problems were not consistent between the two systems. Addressing these and other issues required extensive software development because, in the commercial version of STARS that Raytheon delivered to FAA, the software that displays information on the screen was tightly coupled with the software that calculates aircraft position. Because of this coupling, it was difficult for Raytheon to implement the new or modified display requirements that FAA had identified. Accordingly, FAA directed Raytheon to separate the display software from the air traffic control software so that Raytheon could more efficiently implement future display- and air traffic control-related changes to each type of software. To help ensure that STARS meets all of these and other requirements, FAA is developing multiple versions of STARS software, each with specific features, and plans to integrate them into a single version, which will be deployed nationwide. (See fig. 2.) This incremental approach, according to FAA, gives air traffic controllers early experience with the software as it is being developed rather than introducing an entirely new system at the end, as was the case with the commercially available version. For example, FAA has developed a version known as early display configuration, which would replace the controllers' current displays and monitoring equipment but would use the existing computer and processing software. Figure 2 shows FAA's new strategy for developing STARS software incrementally. In the early display configurations, FAA separated the display software from the original commercial version and installed and tested the display software, together with some of the original software, at El Paso and Syracuse. In the initial system configuration, FAA took the original software and added some air traffic control software and tested this software at Eglin Air Force Base. After each type of software was tested, FAA began combining the two types to run together in a version called full STARS 2.
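The separation FAA directed is a standard separation-of-concerns pattern: isolate the display logic from the position-calculation logic behind a narrow interface so that either side can change without forcing changes in the other. The Python sketch below illustrates only the pattern; the class and method names are hypothetical and do not come from STARS.

```python
# Illustrative sketch of the decoupling concept only; these names are
# hypothetical and do not reflect actual STARS software.
from dataclasses import dataclass

@dataclass
class TrackState:
    """Narrow interface between the two software types: the position
    calculator produces TrackState; the display only consumes it."""
    callsign: str
    x_nm: float
    y_nm: float
    altitude_ft: int

class PositionCalculator:
    """Air traffic control side: turns radar returns into tracks."""
    def update(self, radar_return: dict) -> TrackState:
        return TrackState(radar_return["callsign"],
                          radar_return["x_nm"],
                          radar_return["y_nm"],
                          radar_return["altitude_ft"])

class Display:
    """Display side: renders tracks; can be modified (e.g., for
    computer-human interface changes) without touching the
    position-calculation software."""
    def render(self, track: TrackState) -> str:
        return (f"{track.callsign} @ ({track.x_nm}, {track.y_nm}) "
                f"FL{track.altitude_ft // 100}")

calc, display = PositionCalculator(), Display()
track = calc.update({"callsign": "UAL123", "x_nm": 12.4,
                     "y_nm": -3.1, "altitude_ft": 11000})
print(display.render(track))  # UAL123 @ (12.4, -3.1) FL110
```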
Subsequent versions of full STARS incorporate additional functions. Figure 3 provides the schedule for when each version of STARS became or is scheduled to become operational at the first facility. Since 1996, FAA acquisition executives have approved two changes to the cost and schedule estimates for STARS. These changes are presented in table 1. The October 1999 change was approved to give Raytheon enough time to add and modify the display software in order to resolve computer-human interface issues. The March 2002 change was approved after FAA decided to deploy STARS to facilities where frequent equipment failures caused delays; to new facilities; and to facilities where a digital radar, needed to operate STARS, is available. Under this strategy, FAA is also assessing how to deploy STARS to remaining facilities in a cost-effective manner. Facilities that previously received new hardware and software so that they could continue to operate while waiting for STARS would get new technology but may not get the full STARS system. FAA responded to the DOT IG's concerns about the agency's plans for deploying STARS at Philadelphia by stating that FAA plans to follow its policy for testing STARS and addressing critical software problems. However, FAA officials, controllers, and maintenance technicians all have concerns about whether required training can be completed by the November 17, 2002, deployment date. In June 2002, the DOT IG questioned whether FAA's commitment to deploy STARS in Philadelphia before testing it first in Memphis, as planned, would allow the agency to test the system adequately and address critical software problems that might be identified before deployment. While the Memphis terminal facility has fewer and less complex air traffic control operations than more congested facilities, such as the one in Philadelphia, FAA changed its plans because meeting the commitment to deploy STARS in Philadelphia would not allow enough time to test STARS first in Memphis. FAA testified in September 2001 that it would deploy STARS to Philadelphia to coincide with the opening of a new terminal, scheduled for November 17, 2002. FAA officials said they view the achievement of the November 17, 2002, deployment as important to the agency's credibility and that they believe they will learn more from testing STARS in Philadelphia, which is more representative of terminal facilities, than they would have learned in Memphis. According to FAA, its plans for deploying STARS in Philadelphia are consistent with its testing policy, which calls for independent operational testing of a system after it has been deployed in one location. Under the current plan, FAA will use STARS to control live traffic at Philadelphia beginning on November 17, 2002 (a step signifying initial operating capability), but the current air traffic control system will remain available as a backup. In accordance with its policy, the agency will then conduct independent testing after a "period of use," scheduled from the day after initial operations through December 2002. At that point, as the policy directs, the agency will declare the system ready for operational use and will complete the switch to the new system. At that time, now scheduled for February 2003, the new system will be formally commissioned and the current system decommissioned. To address critical STARS software problems identified prior to deploying STARS, FAA is attempting to resolve the most critical problems (type-1 and type-2 PTRs) before November 17, 2002.
According to FAA’s definition, type-1 problems are those that, if not corrected, might prevent the accomplishment of an operational or mission-essential capability or might jeopardize safety, while type-2 problems adversely affect but does not prevent the accomplishment of an operational or mission-critical capability. FAA’s data showed that as of August 30, 2002, there were 5 type-1 PTRs and 68 type-2 PTRs, against the system being deployed in Philadelphia, that still need to be resolved. FAA officials stated that they have assigned these problems to the contractor and plan to validate the contractor’s fixes. Validation is important because, in some instances, the fixes have not performed as intended. In addition, FAA has identified at least 12 type-3 PTRs and other issues, such as completing required training, that need to be resolved prior to deployment in Philadelphia. FAA is also meeting biweekly with Raytheon to monitor the contractor’s progress in implementing and testing fixes for PTRs. In addition, FAA has installed STARS hardware and an earlier version of STARS software at Philadelphia so that users can become familiar with the system. On September 19, 2002, FAA plans to begin testing the most recent STARS software in Philadelphia. While FAA maintains that its plans for testing STARS and addressing critical software problems are adequate to address the DOT IG’s concerns, the agency is less certain that it will be able to complete the certification training required for maintenance technicians at the Philadelphia terminal before the new version of STARS begins operation in November. The union representing maintenance technicians expressed concern because FAA has not yet finalized the content and schedule of the training for controllers and maintenance technicians on the software that will be deployed in Philadelphia. Under a new training agreement between the union and FAA, on-site certification training—rather than training at FAA’s central facility in Oklahoma City—is required for all employees before a new system begins operation. Union officials expressed concern that without a finalized training schedule, its members will not have enough time to receive training for certification before the November deployment. FAA officials acknowledged that having enough time for training is an issue. Union and FAA officials are working to solve these concerns prior to deployment. Moreover, according to FAA officials, FAA is meeting with maintenance technicians and controllers to discuss issues related to training, as well as maintenance and testing. Because FAA was not able to deploy STARS according to its original schedule, under which some terminals would have received the new equipment by 1998, FAA implemented several interim projects. Under these projects, FAA replaced failing equipment with new software, radar displays, and other hardware so that the terminals could continue operating while STARS was delayed. Under one project, Common Automated Radar Terminal System (Common ARTS), FAA procured common software for the automated equipment at some of its largest terminal facilities and about 130 smaller facilities. Common ARTS provides functions similar to those being designed for STARS, such as the ability to support simultaneous multiple radar displays and adapt to site changes. FAA also purchased 294 ARTS color displays, which replaced aging radar displays at six terminals with those that are high-resolution. 
The cost for Common ARTS and the ARTS color displays attributable to STARS delays was about $90.5 million. We provided a draft of this report to DOT. We met with DOT officials, including the Director, Terminal Business Service, FAA. These officials generally agreed with the facts and made technical and clarifying comments, which we have incorporated into this report as appropriate. FAA initially began the Common ARTS project because of delays in a program that preceded STARS. Under the initial phase of this project, developed by Lockheed Martin Corporation, equipment was delivered to 131 small- to medium-sized facilities beginning in 1997 and to 5 large facilities in 1998 and 1999. However, FAA later purchased equipment for five additional facilities, which was installed in 2001 and 2002. We also reviewed documentation on the prioritization of trouble reports and agency policy and guidance on critical trouble reports and test and evaluation requirements. To determine the impact of changes in the schedule for deploying STARS, we reviewed FAA documentation on the interim projects and the associated costs and also reviewed IG and GAO products on the impact of delays on implementing STARS. We did not independently verify the data we received from FAA. We performed our work in August 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested Members of Congress, the Secretary of Transportation, and the Administrator, FAA. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3650. I can also be reached by e-mail at dillinghamg@gao.gov. Key contributors to this report are listed in appendix I. In addition to those individuals listed above, Nabajyoti Barkakati, Geraldine Beard, Elizabeth Eisenstadt, Tammi Nguyen, Madhav Panwar, and Glenda Wright made key contributions to this report.
Since September 1996, the Federal Aviation Administration (FAA) has been developing the Standard Terminal Automation Replacement System (STARS) project to replace the outdated computer equipment that air traffic controllers currently use in some facilities to control air traffic within 5 to 50 nautical miles of an airport. Comparing the currently projected cost and deployment schedule for STARS with the original cost and schedule is difficult because the program presently bears little resemblance to the program envisioned in 1996. FAA has officially changed the cost, schedule, and requirements for STARS twice. In October 1999, FAA estimated the cost for its new approach at $1.4 billion, with a schedule to begin deploying STARS in 2002 at 188 facilities and complete installation at all facilities by 2008. The second change occurred in March 2002, when FAA lowered its estimate from $1.4 billion to $1.33 billion, reduced the number of facilities receiving STARS from 188 to 74, and changed the date to complete installation at all facilities from 2008 to 2005. FAA responded to the Department of Transportation Inspector General's concerns about the agency's plans for deploying STARS in Philadelphia by stating that it plans to follow its policy for testing STARS and addressing critical software problems. Because the FAA changed the date for deploying STARS at the first facility from 1998 to 2002, it had to implement interim systems to allow it to continue to meet demands for air traffic services. Specifically, it had to replace radar displays and other hardware that were difficult to maintain and had limited capacity to accommodate software that would allow FAA to add new features. FAA documents show the cost to implement these interim solutions when STARS was delayed was $90.5 million.
Each fiscal year, the Millennium Challenge Act requires MCC to select countries as eligible for MCA assistance by identifying candidate countries, establishing an eligibility methodology, and making eligibility determinations. MCC evaluates eligible countries' proposals and negotiates compacts, which must be approved by the MCC board. The Threshold Program assists countries that are not deemed eligible but show a commitment to MCA objectives. MCC is governed by a board of directors consisting of U.S. government and other representatives. For fiscal year 2004, the Millennium Challenge Act limited candidates to low-income countries, those with per capita incomes less than or equal to the International Development Association (IDA) cutoff for that year ($1,415), that also were eligible for IDA assistance. This provision limited candidacy in the MCA's first year to the poorest low-income countries. For fiscal year 2005, candidates were required only to have incomes less than or equal to the IDA ceiling for that year ($1,465). Additionally, for fiscal years 2004 and 2005, candidates could not be ineligible for U.S. economic assistance under the Foreign Assistance Act of 1961. (See app. II for a list of candidate countries for fiscal years 2004 and 2005.) The Millennium Challenge Act requires that the MCC board base its eligibility decisions, "to the maximum extent possible," on objective and quantifiable indicators of a country's demonstrated commitment to the criteria enumerated in the act. MCC selected its indicators based on their relationship to growth and poverty reduction, the number of countries they cover, their transparency and public availability, and their relative soundness and objectivity. For fiscal years 2004 and 2005, MCC's process for determining country eligibility for MCA assistance had both a quantitative and a discretionary component (see fig. 1). MCC first identified candidate countries that performed above the median in relation to their peers on at least half of the quantitative indicators in each of the three policy categories (Ruling Justly, Investing in People, and Encouraging Economic Freedom) and above the median on the indicator for control of corruption. (See app. III for a table describing the indicators, listing their sources, and summarizing the methodologies on which they are based. The indicators shown in figure 1 include, for example, country credit rating and one-year consumer price inflation under Encouraging Economic Freedom, and public primary education spending and public expenditure on health, each as a percent of GDP, under Investing in People.) In addition, MCC considered other relevant information, in particular whether countries that scored substantially below the median (at the 25th percentile or lower) on an indicator were addressing any shortcomings related to that indicator. MCC also considered supplemental information to address gaps, lags, or other data weaknesses as well as additional material information. The Millennium Challenge Act requires that, within 5 days of the board's eligibility determinations, the MCC Chief Executive Officer submit a report to congressional committees containing a list of the eligible countries and "a justification for such eligibility determination" and publish the report in the Federal Register. Eligible countries are invited to submit compact proposals, which are to be developed in consultation with members of civil society, including the private sector and NGOs. However, a country's eligibility does not guarantee that MCC will sign and then fund a compact with that country.
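The quantitative screen just described reduces to a mechanical rule: score above the peer-group median on at least half of the indicators in each of the three categories, and above the median on control of corruption. The sketch below is a minimal rendering of that rule under simplifying assumptions; it ignores MCC's supplemental data and board discretion, and the data structures are ours, not MCC's.

```python
# Minimal sketch of MCC's quantitative screen, as described above.
# Ignores supplemental information and board discretion; the input
# format is a hypothetical simplification. (In MCC's actual indicator
# set, control of corruption is one of the Ruling Justly indicators;
# it is listed separately here for clarity.)
def meets_indicator_criteria(scores: dict) -> bool:
    """scores maps each category name to a list of booleans, one per
    indicator: True if the country is above the peer-group median."""
    categories = ["Ruling Justly", "Investing in People",
                  "Encouraging Economic Freedom"]
    # Above the median on at least half the indicators in every category...
    passes_categories = all(
        sum(scores[c]) >= len(scores[c]) / 2 for c in categories
    )
    # ...and on the control-of-corruption indicator specifically.
    return passes_categories and scores["control_of_corruption"]

example = {
    "Ruling Justly": [True, True, False, True, False, True],
    "Investing in People": [True, False, True, True],
    "Encouraging Economic Freedom": [False, True, True, False, True, True],
    "control_of_corruption": True,
}
print(meets_indicator_criteria(example))  # True
```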
MCC is to sign compacts only with national governments. Under the act, the duration of compacts is limited to a maximum of 5 years; MCC expects to approve compacts with durations of 3 to 5 years. MCA funds are not earmarked for specific projects or countries, and money not obligated in the fiscal year for which it was appropriated can be used in subsequent fiscal years. For fiscal years 2004 and 2005, Congress has directed that MCC use its existing appropriations to fully fund a compact, that is, to obligate the entire amount anticipated for the compact's duration. Funding for compacts and the Threshold Program must be drawn from the appropriation for the fiscal year in which the country was eligible. MCC aims to be among the largest donors in recipient countries, which, according to MCC officials, creates an incentive for eligible countries to "buy into" MCC's principles of policy reform, sustainable economic growth, country partnership, and results. The Millennium Challenge Act authorizes a limited amount of assistance to certain candidate countries to help them become eligible for MCA assistance. These candidate countries must (1) meet the fiscal year 2004 or 2005 requirements for MCA candidacy and (2) demonstrate a significant commitment to meeting the act's eligibility criteria but fail to meet those criteria. MCC has implemented these legislative provisions as its Threshold Program. Figure 2 compares features of MCC compact and Threshold Program assistance; appendix IV describes the Threshold Program. MCC has broad authority under the Millennium Challenge Act to enter into contracts and business relationships. The act establishes the MCC Board of Directors and assigns it a key decision-making role in the corporation's activities, including those related to implementing the compact program. The act also makes provisions for the board to consult with Congress and provide general supervision of MCC's IG. The board consists of the Secretary of State (Board Chair), the Secretary of the Treasury (Vice Chair), the USAID Administrator, and the U.S. Trade Representative, in addition to MCC's Chief Executive Officer. The board has four other positions filled by presidential appointment with the approval of the Senate. Two of these positions have been filled. (For a timeline of key events and milestones since MCC's launch, see app. V.) For fiscal years 2004 and 2005, the MCC board based its determinations of countries' eligibility on its quantitative indicator methodology as well as on discretion. Although MCC published the countries' indicator scores at its Web site, some of the indicator source data used to generate the scores were not readily available. Finally, we found that reliance on the indicators carried certain inherent limitations. MCC used the 16 quantitative indicators, as well as the discretion implicit in the Millennium Challenge Act, to select 17 countries as eligible for MCA compact assistance for fiscal years 2004 and 2005 (see fig. 3). Fiscal year 2004: In May 2004, the MCC board selected 16 countries as eligible for fiscal year 2004 funding. The countries deemed eligible included 13 that met the quantitative indicator criteria and 3 that did not (Bolivia, Georgia, and Mozambique). Another 6 countries met the criteria but were not deemed eligible. Fiscal year 2005: In October 2004, the MCC board selected 16 countries as eligible for fiscal year 2005 funding.
The countries deemed eligible included 14 countries that met the indicator criteria and 2 countries that did not (Georgia and Mozambique). Ten countries met the criteria but were not deemed eligible. Fifteen of the 16 countries also had been deemed eligible for fiscal year 2004; the only new country was Morocco. MCC did not provide Congress its justifications for the 13 countries that met the indicator criteria but were not deemed eligible for fiscal years 2004 and 2005 (one of these countries, Tonga, did not score substantially below the median on any indicator). The act does not explicitly require MCC to include a justification to Congress for why these countries were not deemed eligible. In addition, our analysis of countries that met the indicator criteria but were not deemed eligible suggests that, besides requiring that a country score above the median on the indicator for control of corruption, MCC placed particular emphasis on three Ruling Justly indicators (political rights, civil liberties, and voice and accountability) in making its eligibility determinations. In fiscal years 2004 and 2005, 6 of the 13 countries that met the indicator criteria but were not deemed eligible had scores equal to or below the median on these three indicators. On the other hand, the 13 countries that were not deemed eligible performed similarly to the eligible countries on the other three Ruling Justly indicators (government effectiveness, rule of law, and control of corruption) as well as on the indicators for Investing in People and Encouraging Economic Freedom. Although MCC published its country scores for all of the indicators at its Web site, some of the indicator source data used to generate the scores were not readily available to the public. We found that source data for nine of the indicators were accessible via hyperlinks from MCC's Web site, making it possible to compare those data with MCC's published country scores. However, for the remaining seven indicators, we encountered obstacles to locating the source data, without which candidate countries and other interested parties would be unable to reproduce and verify MCC's results. Primary education completion rates: The published indicators were created with data from several sources and years, and not all of these data were available online. Primary education and health spending (percentage of gross domestic product): When national government data were unavailable, MCC used either country historical data or data from the World Bank to estimate current expenditures. Diphtheria and measles immunization rate: The general hyperlink at the MCC Web site did not link to the data files used to create the published indicators. One-year consumer price inflation: The published indicators were created with a mix of data from several sources and different years. Fiscal policy: The published indicators were created with International Monetary Fund (IMF) data that are not publicly available. Days to start a business: Updated indicators were not published until after the board had made its fiscal year 2004 eligibility decisions. MCC's use of the quantitative indicator criteria in the country selection process for fiscal years 2004 and 2005 involved the following inherent difficulties: Owing to measurement uncertainty, the scores of 17 countries may have been misclassified as above or below the median.
In fiscal years 2004 and 2005, 7 countries did not meet the quantitative indicator criteria because of corruption scores below the median, but given measurement uncertainty their true scores may have been above the median. Likewise, 10 countries met the indicator criteria with corruption scores above the median, but their true scores may have been below the median. Missing data for the days to start a business and trade policy indicators reduced the number of countries that could achieve above-median scores for those indicators. For fiscal years 2004 and 2005, 20 and 22 countries, respectively, lacked data for the indicator for days to start a business, and 18 and 13 countries, respectively, lacked data for the trade policy indicator. Our analysis suggests that missing data for these two indicators may have reduced the number of countries that passed the Encouraging Economic Freedom category. The narrow and undifferentiated range of possible scores for the political rights, civil liberties, and trade policy indicators led to clustering ("bunching") of scores around the median, making the scores less useful in distinguishing among countries' performances. In fiscal year 2005, for example, 46 countries, or two-thirds of the countries with trade policy data, received a score of 4 (the median) or 5 (the lowest score possible) for trade policy. Our analysis suggests that bunching potentially reduced the number of countries that passed the Ruling Justly and Economic Freedom categories and limited MCC's ability to determine whether countries performed substantially below their peers on affected indicators. With respect to the indicator for control of corruption, countries deemed eligible for MCA compact assistance represent the best performers among their peers; at the same time, studies have found that, in general, countries with low per capita income also score low on corruption indexes. Of the 17 MCA compact-eligible countries, 11 ranked below the 50th percentile among the 195 countries rated by the World Bank Institute for control of corruption; none scored in the top third. MCC has received compact proposals, concept papers, or both, from 16 countries; of these, it has approved a compact with one country and is negotiating with four others. At the same time, MCC continues to refine its process for reviewing and assessing compact proposals. As part of this process, MCC has identified elements of country program implementation and fiscal accountability that can be adapted to eligible countries' compact objectives and institutional capacities. Between August 2004 and March 2005, MCC received compact proposals, concept papers, or both, from 16 MCA compact-eligible countries, more than half of which submitted revised proposal drafts in response to MCC's assessments. In March 2005, MCC approved a 4-year compact with Madagascar for $110 million to fund rural projects aimed at enhancing land titling and security, increasing financial sector competition, and improving agricultural production technologies and market capacity; MCC and Madagascar signed the compact on April 18, 2005. MCC is negotiating compacts with Cape Verde, Georgia, Honduras, and Nicaragua and is conducting in-depth assessments of proposals from two additional countries. Figure 4 summarizes the types of projects that eligible countries have proposed and that MCC is currently reviewing. The countries' initial proposals and concept papers requested about $4.8 billion; those that MCC is currently reviewing (see fig.
4) and negotiating request approximately $3 billion over 3 to 5 years. Our analysis—based on MCC’s goal of being a top donor as well as Congress’s requirement that the corporation fund compacts in full—shows that the $2.4 billion available from fiscal year 2004 and 2005 appropriations will allow MCC to fund between 4 and 14 compacts, including Madagascar’s compact, for those years. MCC’s $110 million compact with Madagascar, averaging $27.5 million per year, would make it the country’s fifth largest donor (see app. VI for a list of the largest donors to MCA compact-eligible countries in fiscal years 2002-2003). As of April 2005, MCC is continuing to refine its process for developing compacts. According to MCC officials, the compact development process is open ended and characterized by ongoing discussions with eligible countries. According to a recent IG report, MCC’s negotiating a compact with Madagascar has served as a prototype for completing compacts with other countries. At present, the compact proposal development and assessment process follows four steps (see fig. 5). Step 1: Proposal development. MCC expects eligible countries to propose projects and program implementation structures, building on existing national economic development strategies. For instance, the Honduran government’s proposal is based on its Poverty Reduction Strategy Paper (PRSP) and a subsequent June 2004 implementation plan. MCC also requires that eligible countries use a broad-based consultative process to develop their proposals. MCC staff discuss the proposal with country officials during this phase of compact development. Although MCC does not intend to provide funding to countries for proposal development, some countries have received grants from regional organizations for proposal development. Step 2: Proposal submission and initial assessment. Eligible countries submit compact proposals or concept papers. MCC has not specified deadlines for proposal submission or publicly declared the limits or range of available funding for individual compacts. According to MCC officials, the absence of deadlines and funding parameters permits countries to take initiative in developing proposals. However, according to U.S.-based NGOs, the lack of deadlines has caused some uncertainty and confusion among eligible country officials. Honduran officials told us that knowing a range of potential funding would have enhanced their ability to develop a more focused proposal. During this stage, MCC conducts a preliminary assessment of the proposal, drawing on its staff, contractors, and employees of other U.S. government agencies. This assessment examines the potential impact of the proposal’s strategy for economic growth and poverty reduction, the consultative process used to develop the proposal, and the indicators for measuring progress toward the proposed goals. According to MCC, some eligible countries have moved quickly to develop their MCC programs. Others initially were unfamiliar with MCC’s approach and some faced institutional constraints. MCC works with these countries to develop programs that it can support. In addition, MCC is exploring ways—such as providing grants—to facilitate compact development and implementation. Once MCC staff determine that they have collected sufficient preliminary information, they seek the approval of MCC’s Investment Committee to conduct a more detailed analysis, known as due diligence. Step 3: Detailed proposal assessment and negotiation. 
MCC's due diligence review includes an analysis of the proposed program's objectives and its costs relative to potential economic benefits. Among other things, the review also examines the proposal's plans for program implementation, including monitoring and evaluation; for fiscal accountability; and for coordination with USAID and other donors. In addition, the review considers the country's commitment to MCC eligibility criteria and legal considerations pertaining to the program's implementation. During their review, MCC staff seek the approval of the Investment Committee to notify Congress that the corporation intends to initiate compact negotiations; following completion of the review, MCC staff request the committee's approval to enter compact negotiations. When the negotiations have been concluded, the Investment Committee decides whether to approve submission of the compact text to the MCC board.

Step 4: Board review and compact signing. The MCC board reviews the compact draft. Before the compact can be signed and funds obligated, the board must approve the draft and MCC must notify appropriate congressional committees of its intention to obligate funds.

MCC has identified several broadly defined elements of program implementation and fiscal accountability that it considers essential to ensuring achievement of compact goals and proper use of MCC funds. As signatories to the compact, MCC and the country government will be fundamental elements of this framework. However, MCC and eligible countries can adapt other elements (see fig. 6) by assigning roles and responsibilities to governmental and other entities according to the countries' compact objectives and institutional capacities.

Madagascar's compact incorporates these elements in addition to an advisory council composed of private sector and civil society representatives, as well as local and regional government officials. The compact also requires that MCA-Madagascar, the oversight entity, adopt additional plans and agreements before funds can be disbursed, including plans for fiscal accountability and procurement. In addition, the compact requires the adoption of a monitoring and evaluation plan; provides a description of the plan's required elements; and establishes performance indicators for each of Madagascar's three program objectives, which are linked to measures of the program's expected overall impact on economic growth and poverty reduction. MCC expects to disburse funds in tranches as it approves Madagascar's completed plans and agreements. According to the IG, MCC officials expect to make the initial disbursements within 2 months after signing the compact.

MCC has received advice and support from USAID, State, Treasury, and USTR and has signed agreements with five U.S. agencies for program implementation and technical assistance. In addition, MCC is consulting with other donors in Washington, D.C., and in the field to use existing donor expertise. MCC is also consulting with U.S.-based NGOs as part of its domestic outreach effort; however, some NGOs raised questions about the involvement of civil society groups. (See app. VII for more details of MCC's coordination efforts.) MCC initially coordinated primarily with U.S. agencies on its board and is expanding its coordination efforts to leverage the expertise of other agencies.
USAID and the Department of State, both in Washington, D.C., and in compact-eligible countries, have facilitated meetings between MCC officials and donors and representatives of the private sector and NGOs in eligible countries. In addition, several of the six USAID missions contacted by GAO reported that their staff had provided country-specific information, had observed MCC-related meetings between civil society organizations and governments, or had informed other donors about MCC. MCC has also coordinated with the Department of the Treasury and USTR. For example, according to MCC officials, MCC has regularly briefed these agencies on specific elements of compact proposals and established an interagency working group to discuss compact-related legal issues. Since October 2004, MCC has expanded its coordination through formal agreements with five U.S. agencies, including the Census Bureau, Army Corps of Engineers, and Department of Agriculture, that are not on the MCC board. MCC has obligated more than $6 million for programmatic and technical assistance through these agreements, as shown in figure 7.

MCC has received information and expertise from key multilateral and bilateral donors in the United States and eligible countries. For example, World Bank staff have briefed MCC regarding eligible countries, and officials from the Inter-American Development Bank said that they have provided MCC with infrastructure assessments in Honduras. According to MCC, most donor coordination is expected to occur in eligible countries rather than at the headquarters level. In some cases, MCC is directly coordinating its efforts with other donors through existing mechanisms, such as a G-17 donor group in Honduras. In addition to soliciting donor input, MCC officials have encouraged donors not to displace assistance to countries that receive MCA funding. Donors in Honduras told us that MCA funding to that country is unlikely to reduce their investment, because sectors included in the country's proposal have additional needs that would not be met by MCA.

According to MCC officials, MCC is holding monthly meetings with a U.S.-based NGO working group and hosted five public meetings in 2004 in Washington, D.C., as part of its domestic outreach efforts. The NGOs have shared expertise in monitoring and evaluation and have offered suggestions that contributed to the modification of 1 of MCC's 16 quantitative indicators. In addition, MCC has met with local NGOs during country visits. Some U.S.-based NGOs have raised questions about the involvement of NGOs in the United States and of civil society groups in compact-eligible countries. Environmental NGOs told us in January 2005 that MCC had not engaged with them since initial outreach meetings; however, MCC subsequently invited NGOs and other interested entities to submit proposals for a quantitative indicator of a country's natural resources management. Representatives of several NGOs commented that MCC lacks in-house expertise and staff to monitor and assess civil society participation in compact development. In addition, U.S.-based NGOs expressed concern that their peers in MCA countries have not received complete information about the proposal development process.

Since starting up operations, MCC has made progress in developing key administrative infrastructures that support its program implementation.
MCC has also made progress in establishing corporatewide structures for accountability, governance, internal control, and human capital management, including establishing an audit and review capability through its IG, adopting bylaws, providing ethics training to employees, and expanding its permanent full-time staff. However, MCC has not yet completed the plans, strategies, and time frames needed to establish these essential management structures on a corporatewide basis. (See fig. 8 for a detailed summary of MCC's progress.)

During its first 15 months, MCC management focused its efforts on establishing essential administrative infrastructures—the basic systems and resources needed to set up and support its operations—which also contribute to developing a culture of accountability and control. In February 2004, MCC acquired temporary offices in Arlington, Virginia, and began working to acquire a permanent location. In addition, consistent with its goal of a lean corporate structure with a limited number of full-time employees, MCC outsourced administrative aspects of its accounting, information technology, travel, and human resource functions. Further, MCC implemented various other administrative policies and procedures to provide operating guidance to staff and enhance MCC's internal control. MCC management continues to develop other corporate policies and procedures, including policies that will supplement federal travel and acquisition regulations.

Accountability requires that a government organization effectively demonstrate, internally and externally, that its resources are managed properly and used in compliance with laws and regulations and that its programs are achieving their intended goals and outcomes and are being provided efficiently and effectively. Important for organizational accountability are effective strategic and performance planning and reporting processes that establish, measure, and report an organization's progress in fulfilling its mission and meeting its goals. External oversight and audit processes provide another key element of accountability.

During its initial 15 months, MCC developed and communicated to the public its mission, the basic tenets of its corporate vision, and key program-related decisions by the MCC board. MCC began its strategic planning process when key staff met in January 2005 to begin setting strategic objectives, and it expects to issue the completed plan in the coming months. In addition, MCC arranged with its IG for the audit of its initial-year financial statements (completed by an independent public accounting firm) and for two program-related IG reviews. However, to date, MCC has not completed a strategic plan or established specific implementation time frames. In addition, MCC has not yet established annual performance plans, which would facilitate its monitoring of progress toward strategic and annual performance goals and outcomes and its reporting on such progress internally and externally. According to MCC officials, MCC intends to complete its comprehensive strategic and performance plans by the end of fiscal year 2005.

Corporate governance can be viewed as the formation and execution of collective policies and oversight mechanisms to establish and maintain a sustainable and accountable organization while achieving its mission and demonstrating stewardship over its resources.
Generally, an organization's board of directors has a key role in corporate governance through its oversight of executive management, corporate strategies, risk management and audit and assurance processes, and communications with corporate stakeholders. During its initial 15 months, the MCC board adopted bylaws regarding board composition and powers, meetings, voting, fiscal oversight, and the duties and responsibilities of corporate officers, and it oversaw management's efforts to design and implement the compact program. According to MCC, during a recent meeting of the board to discuss corporate governance, the Chief Executive Officer solicited feedback from the board on defining and improving the governance process. MCC's board established a compensation committee in March 2005, and a charter for the committee is being drafted. In addition, MCC is preparing, for board consideration, a policy on the board's corporate governance. As drafted, the policy identifies the board's statutory and other responsibilities, elements of board governance, rules and procedures for board decision-making, and guidelines for MCC's communications with the board. With regard to MCC board membership, seven of the nine board members have been appointed and installed. Through board agency staff, MCC staff have regularly informed board members—four of whom are heads of other agencies or departments—about pending MCC matters.

However, the board has not completed a comprehensive strategy or plan for carrying out its responsibilities; specifically, it has not defined the board's and management's respective roles in formulating and executing corporate strategies, developing risk management and audit and assurance processes, and communicating and coordinating with corporate stakeholders. Moreover, although the bylaws permit the board to establish an audit committee—to support the board in accounting and financial reporting matters; determine the adequacy of MCC's administrative and financial controls; and direct the corporation's audit function, which is provided by the IG and its external auditor—the board has not yet done so. Finally, two of the MCC board's four other positions have not yet been filled.

Internal control provides reasonable assurance that key management objectives—efficiency and effectiveness of operations, reliability of financial reporting, and compliance with applicable laws and regulations—are being achieved. Generally, a corporatewide internal control strategy is designed to create and maintain an environment that sets a positive and supportive attitude toward internal control and conscientious management; assess, on an ongoing basis, the risks facing the corporation and its programs from both external and internal sources; implement efficient control activities and procedures intended to effectively manage and mitigate areas of significant risk; monitor and test control activities and procedures on an ongoing basis; and assess the operating effectiveness of internal control, reporting and addressing any weaknesses.

During its first 15 months, MCC took several actions that contributed to establishing effective internal control. Although it did not conduct its own assessment of internal control, MCC management relied on the results of the IG reviews and external financial audit to support its conclusion that key internal controls were valid and reliable.
Further, MCC implemented processes for identifying eligible countries and internal controls through its due diligence reviews of proposed compacts, establishment of the Investment Committee to assist MCC staff in negotiating and reviewing compact proposals, and the board's involvement in approving negotiated compacts. In addition, MCC instituted an Ethics Program, covering employees as well as outside board members, to provide initial ethics orientation training for new hires and regularly scheduled briefings for employees on standards of conduct and statutory rules. In April 2005, MCC officials informed us that they had recently established an internal controls strategy group to identify internal control activities to be implemented over the next year, reflecting their awareness of the need to focus MCC's efforts on the highest-risk areas.

However, MCC has not completed a comprehensive strategy and related time frames for ensuring the proper design and incorporation of internal control into MCC's corporatewide program and administrative operations. For example, MCC intends to rely on contractors for a number of operational and administrative services; this strategy will require special consideration in the design and implementation of specific internal controls.

Cornerstones of human capital management include leadership; strategic human capital planning; acquiring, developing, and retaining talent; and building a results-oriented culture. In its initial year, MCC's human capital efforts focused primarily on establishing an organizational structure and recruiting the employees necessary to support program design and implementation and corporate administrative operations (see app. VIII for a diagram of MCC's organizational structure). MCC set short- and longer-term hiring targets, including assigning about 20 employees, depending on the number and types of compacts that have been signed, to work in MCA compact-eligible countries; it also identified needed positions and future staffing levels through December 2005 based on its initial operations. With the help of an international recruiting firm, MCC expanded its permanent full-time staff from 7 employees in April 2004 to 107 employees in April 2005; it intends to employ no more than 200 permanent full-time employees by December 2005 (see fig. 9). In addition, MCC hired 15 individuals on detail, under personal services contracts, or as temporary hires (these 15 positions are administratively determined; Congress authorized 30 such positions for MCC in the Millennium Challenge Act), as well as a number of consultants. Finally, in January 2005, MCC hired a consultant to design a compensation program to provide employees with pay and performance incentives and competitive benefits, including performance awards and bonuses, retention incentives, and student loan repayments. MCC officials told us that they intend the program to be comparable with those of federal financial agencies, international financial institutions, and multilateral and private sector organizations.

In its first 15 months, MCC took important actions to design and implement the compact program: making eligibility determinations, defining its compact development process, and coordinating and establishing working agreements with key stakeholders. MCC also acted to establish important elements of a corporatewide management structure needed to support its mission and operations, including some key internal controls.
However, MCC has not yet fully developed plans that define the comprehensive actions needed to establish key components of an effective management structure. We believe that, to continue to grow into a viable and sustainable entity, MCC needs to approve plans with related time frames that identify the actions required to build a corporatewide foundation for accountability, internal control, and human capital management and begin implementing these plans. In addition, MCC's board needs to define its responsibilities for corporate governance and oversight of MCC and develop plans or strategies for carrying them out. As MCC moves into its second year of operations, it recognizes the need to develop comprehensive plans and strategies in each of these areas. Implementation of such plans and strategies should enable MCC's management and board to measure progress in achieving corporate goals and objectives and demonstrate its accountability and control to Congress and the public. As part of our ongoing work for your committee, we will continue to monitor MCC's efforts in these areas.

We recommend that the Chief Executive Officer of the Millennium Challenge Corporation complete the development and implementation of overall plans and related time frames for actions needed to establish

1. Corporatewide accountability, including implementing a strategic plan, establishing annual performance plans and goals, using performance measures to monitor progress in meeting both strategic and annual performance goals, and reporting internally and externally on its progress in meeting its strategic and annual performance goals.

2. Effective internal control over MCC's program and administrative operations, including establishing a positive and supportive internal control environment; a process for ongoing risk assessment; control activities and procedures for reducing risk, such as measures to mitigate risk associated with contracted operational and administrative services; ongoing monitoring and periodic testing of control activities; and a process for assessing and reporting on the effectiveness of internal controls and addressing any weaknesses identified.

3. An effective human capital infrastructure, including a thorough and systematic assessment of the staffing requirements and critical skills needed to carry out MCC's mission; a plan to acquire, develop, and retain talent that is aligned with the corporation's strategic goals; and a performance management system linking compensation to employee contributions toward the achievement of MCC's mission and goals.

We recommend that the Secretary of State, in her capacity as Chair of the MCC Board of Directors, ensure that the board considers and defines the scope of its responsibilities with respect to corporate governance and oversight of MCC and develops an overall plan or strategy, with related time frames, for carrying out these responsibilities. In doing so, the board should consider, in addition to its statutory responsibilities, other corporate governance and oversight responsibilities commonly associated with sound and effective corporate governance practices, including oversight of the formulation and execution of corporate strategies, risk management and audit and assurance processes, and communication and coordination with corporate stakeholders.

MCC provided technical comments on a draft of this statement and agreed to take our recommendations under consideration; we addressed MCC's comments in the text as appropriate.
We also provided the Departments of State and Treasury, the U.S. Agency for International Development, and the Office of the U.S. Trade Representative an opportunity to review a draft of this statement for technical accuracy. State and USAID suggested no changes, and Treasury and USTR provided a few technical comments, which we incorporated as appropriate.

Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call David Gootnick at (202) 512-4128 or Phillip Herr at (202) 512-8509. Other key contributors to this statement were Todd M. Anderson, Beverly Bendekgey, David Dornisch, Etana Finkler, Ernie Jackson, Debra Johnson, Joy Labez, Reid Lowe, David Merrill, John Reilly, Michael Rohrback, Mona Sehgal, and R.G. Steinman.

We reviewed MCC's activities in its first 15 months of operations, specifically its (1) process for determining country eligibility for fiscal years 2004 and 2005, (2) progress in developing compacts, (3) coordination with key stakeholders, and (4) establishment of management structures and accountability mechanisms.

To examine MCC's country selection process, we analyzed candidate countries' scores for the 16 quantitative indicators for fiscal years 2004 and 2005, as well as the selection criteria for the fiscal year 2004 Threshold Program. We used these data to determine the characteristics of countries that met and did not meet the indicator criteria and to assess the extent to which MCC relied on country scores for eligibility determination. We also reviewed the source data for the indicator scores posted on MCC's Web site to identify issues related to public access and to determine whether we could reproduce the country scores from the source data. Our review of the source data methodology, as well as the documents of other experts, allowed us to identify some limitations of the indicator criteria used in the country selection process. For these and other data we used in our analyses, we examined, as appropriate, the reliability of the data through interviews with MCC officials responsible for the data, document reviews, and reviews of data collection and methodology made available by the authors. We determined the data to be reliable for the purposes of this study.

To describe MCC's process for developing compacts, including plans for monitoring and evaluation, we reviewed MCC's draft or finalized documents outlining compact proposal guidance, compact proposal assessment, and fiscal accountability elements. We reviewed eligible countries' compact proposals and concept papers to identify proposed projects, funding, and institutional frameworks, among other things. To summarize the projects that countries have proposed and that MCC is currently assessing, we developed categories and conducted an analysis of countries' proposal documents and MCC's internal summaries. We also reviewed Madagascar's draft compact to identify projects, funding, and framework for program implementation and fiscal accountability. We met with MCC officials to obtain updates on the compact development process. In addition, we interviewed representatives of nongovernmental organizations (NGOs) in Washington, D.C., and Honduras, as well as country officials in Honduras, to obtain their perspectives on MCC's compact development process.
To assess MCC's coordination with key stakeholders, we reviewed interagency agreements to identify the types of formal assistance that MCC is seeking from U.S. agencies and the funding that MCC has set aside for this purpose. We also reviewed MCC documents to identify the organizations, including other donors, with which MCC has consulted. In addition, we interviewed MCC officials regarding their coordination with various stakeholders. We met with officials from the U.S. agencies on the MCC board (Departments of State and Treasury, USAID, and USTR) to assess the types of assistance that these agencies have provided to MCC. We also contacted six USAID missions in compact-eligible countries to obtain information on MCC coordination with U.S. agencies in the field. To assess MCC's coordination with NGOs and other donors, we met with several NGOs, including InterAction, the World Wildlife Fund, and the Women's Edge Coalition in Washington, D.C., and local NGOs in Honduras; we also met with officials from the Inter-American Development Bank in Washington, D.C., and Honduras, as well as officials from the World Bank, Central American Bank for Economic Integration, and several bilateral donors in Honduras. Finally, we attended several MCC public outreach meetings in Washington, D.C.

To analyze MCC's progress in establishing management structures and accountability mechanisms, we interviewed MCC senior management and reviewed available documents to identify the management and accountability plans that MCC had developed or was planning to develop. We reviewed audit reports by the USAID Office of the Inspector General to avoid duplication of efforts. We used relevant GAO reports and widely used standards and best practices, as applicable, to determine criteria for assessing MCC's progress on management issues as well as to suggest best practices to MCC in relevant areas. Although our analysis included gaining an understanding of MCC's actions related to establishing internal control, we did not evaluate the design and operating effectiveness of internal control at MCC.

In January 2005, we conducted fieldwork in Honduras, one of four countries with which MCC had entered into negotiations at that time, to assess MCC's procedures for conducting compact proposal due diligence and its coordination with U.S. agencies, local NGOs, Honduran government officials, and other donors. In conducting our fieldwork, we met with U.S. mission officials, Honduran government officials, donor representatives, and local NGOs. We also visited some existing USAID projects in the agricultural sector that were similar to projects that Honduras proposed.

We provided a draft of this statement to MCC, and we have incorporated technical comments where appropriate. We also provided a draft of this statement to the Departments of State and Treasury, USAID, and USTR; State and USAID suggested no changes, and Treasury and USTR provided technical comments, which we addressed as appropriate. We conducted our work between April 2004 and April 2005, in accordance with generally accepted government auditing standards.

Candidate countries (list continued): Lesotho, Madagascar, Malawi, Mali, Mauritania, Moldova, Mongolia, Morocco, Mozambique, Nepal, Nicaragua, Niger, Nigeria, Pakistan, Papua New Guinea, Paraguay, Philippines, Rwanda, São Tomé and Principe, Senegal, Sierra Leone, Solomon Islands, Sri Lanka, Swaziland, Tajikistan, Tanzania, Togo, Tonga*, Turkmenistan, Uganda, Ukraine, Vanuatu, Vietnam, Yemen Republic, Zambia.

* Candidate for FY 2004 only.
** Prohibited under the Foreign Assistance Act in FY 2004 but not in FY 2005.

Table 1 lists each of the indicators used in the MCA compact and threshold country selection process, along with its source and a brief description of the indicator and the methodology on which it is based.

Since announcing the 16 quantitative indicators that it used to determine country eligibility for fiscal year 2004, MCC made two changes for fiscal year 2005 and is exploring further changes for fiscal year 2006. To better capture the gender concerns specified in the Millennium Challenge Act, MCC substituted "girls' primary education completion rate" for "primary education completion rate." It also lowered the ceiling for the inflation rate indicator from 20 to 15 percent. In addition, to satisfy the act's stipulation that MCC use objective and quantifiable indicators to evaluate a country's commitment to economic policies that promote sustainable natural resource management, MCC held a public session on February 28, 2005, to launch the process of identifying such an indicator. MCC expects to complete the process by May 2005.

The MCC board used objective criteria (a rules-based methodology) and exercised discretion to select the threshold countries (see fig. 10). For fiscal year 2004, the MCC board relied on objective criteria in selecting as Threshold Program candidates countries that needed to improve in 2 or fewer of the 16 quantitative indicators used to determine MCA eligibility. (That is, by improving in two or fewer indicators, the country would score above the median on half of the indicators in each policy category, would score above the median on the corruption indicator, and would not score substantially below the median on any indicator.) MCC identified 15 countries that met its stated criteria and selected 7 countries to apply for Threshold Program assistance. Our analysis suggests that one of these seven countries did not meet MCC's stated Threshold Program criteria. The MCC board also exercised discretion in assessing whether countries that passed this screen also demonstrated a commitment to undertake policy reforms to improve in deficient indicators. For fiscal year 2005, the MCC board did not employ a rules-based methodology for selecting Threshold Program candidates. Instead, the board selected Threshold Program and MCA compact-eligible countries simultaneously. The board selected 12 countries to apply for Threshold Program assistance, including reconfirming the selection of 6 countries that also had qualified for the fiscal year 2004 Threshold Program.

Figure 11 illustrates key events and defining actions relating to MCC since the passage of the Millennium Challenge Act in January 2004.

MCC plans to be among the top donors in MCA compact-eligible countries. Figure 12 shows the total net official development assistance (average for 2002 and 2003) provided by the top three donors, as well as the amount provided by all donors, in each of the MCA compact-eligible countries. As the figure indicates, based on the 2002-2003 average, the United States was the top donor in Armenia, Bolivia, Georgia, and Honduras and was among the top five donors in nine additional countries.

MCC is coordinating its program and funding activities with various stakeholders to keep them informed and to utilize their expertise or resources at headquarters and in the field (see fig. 13). In addition, several U.S.
agencies have taken steps to coordinate their activities with MCC. Within each of the eight functional areas shown in figure 14, the actual staffing level as of April 2005 appears in the pie chart in each box and the planned staffing level by December 2005 appears in the right corner of each box.
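The rules-based screen described in the Threshold Program discussion above can be made concrete with a short sketch. The following Python fragment is illustrative only: the country names, scores, and category groupings are hypothetical, higher scores are treated as better for every indicator, and the sketch omits parts of MCC's actual methodology, such as the test for scores substantially below the median and the inflation ceiling.

    from statistics import median

    # Hypothetical indicator scores; higher is treated as better throughout.
    SCORES = {
        "Country A": {"corruption": 0.5, "civil_liberties": 5.0, "trade_policy": 5.0},
        "Country B": {"corruption": 0.1, "civil_liberties": 3.0, "trade_policy": 4.0},
        "Country C": {"corruption": -0.2, "civil_liberties": 4.0, "trade_policy": 3.0},
        "Country D": {"corruption": 0.3, "civil_liberties": 2.0, "trade_policy": 4.0},
    }
    # Hypothetical policy categories grouping the indicators.
    CATEGORIES = {
        "ruling_justly": ["corruption", "civil_liberties"],
        "economic_freedom": ["trade_policy"],
    }

    def medians(scores):
        """Median of each indicator across all candidate countries."""
        indicators = next(iter(scores.values()))
        return {i: median(row[i] for row in scores.values()) for i in indicators}

    def passes_screen(country):
        """Pass if above the median on the corruption indicator and on at
        least half of the indicators in each policy category."""
        meds = medians(SCORES)
        row = SCORES[country]
        if row["corruption"] <= meds["corruption"]:
            return False
        return all(
            sum(1 for i in inds if row[i] > meds[i]) >= len(inds) / 2
            for inds in CATEGORIES.values()
        )

    for country in SCORES:
        print(country, "passes" if passes_screen(country) else "fails")

In this example, Country D fails the Encouraging Economic Freedom analogue solely because its trade policy score ties the median; a tie counts as not above the median, which is one way the bunching of scores around the median described earlier can reduce the number of countries that pass.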
In January 2004, Congress established the Millennium Challenge Corporation (MCC) to administer the Millennium Challenge Account. MCC's mission is to promote economic growth and reduce extreme poverty in developing countries. The act requires MCC to rely to the maximum extent possible on quantitative criteria in determining countries' eligibility for assistance. MCC will provide assistance primarily through compacts--agreements with country governments. MCC aims to be one of the top donors in countries with which it signs compacts. For fiscal years 2004 and 2005, Congress appropriated nearly $2.5 billion for the Millennium Challenge Corporation; for fiscal year 2006, the President is requesting $3 billion. GAO was asked to monitor MCC's (1) process for determining country eligibility, (2) progress in developing compacts, (3) coordination with key stakeholders, and (4) establishment of management structures and accountability mechanisms. For fiscal years 2004 and 2005, the MCC board used the quantitative criteria as well as judgment in determining 17 countries to be eligible for MCA compacts. Although MCC chose the indicators based in part on their public availability, our analysis showed that not all of the source data for the indicators were readily accessible. In addition, we found that reliance on the indicators carried certain inherent limitations, such as measurement uncertainty. Between August 2004 and March 2005, MCC received compact proposals, concept papers, or both, from 16 eligible countries. It signed a compact with Madagascar in April 2005 and is negotiating compacts with four countries. MCC's 4-year compact with Madagascar for $110 million would make it the country's fifth largest donor. MCC is continuing to refine its compact development process. In addition, MCC has identified elements of program implementation and fiscal accountability that can be adapted to eligible countries' compact objectives and institutional capacities. MCC is taking steps to coordinate with key stakeholders to use existing expertise and conduct outreach. The U.S. agencies on the MCC Board of Directors--USAID, the Departments of State and Treasury, and the Office of the U.S. Trade Representative--have provided resources and other assistance to MCC, and five U.S. agencies have agreed to provide technical assistance. Bilateral and multilateral donors are providing information and expertise. MCC is also consulting with nongovernmental organizations in the United States and abroad as part of its outreach activities. MCC has made progress in developing key administrative infrastructures that support its mission and operations. MCC has also made progress in establishing corporatewide structures for accountability, governance, internal control, and human capital management, including establishing an audit capability through its Inspector General, adopting bylaws, providing ethics training to employees, and expanding its permanent full-time staff. However, MCC has not yet completed comprehensive plans, strategies, and related time frames for establishing these essential management structures and accountability mechanisms on a corporatewide basis.
The National US&R task forces are designed to assist state and local governments in responding to structural collapse incidents and in conducting search and rescue operations. When a state requests federal search and rescue assistance, FEMA program managers identify one or multiple task forces for deployment and issue an activation order. Once a task force has been activated, all team members are to report to their point of departure within 4 hours if traveling by ground and within 6 hours if traveling by air.

Urban Search and Rescue Exercise, April 2015, Sponsored by Virginia Task Force 1: From April 24 to 25, 2015, FEMA's Virginia Task Force 1 hosted a full-scale training exercise in Lorton, VA. Maryland Task Force 1, Pennsylvania Task Force 1, members of foreign US&R teams from Chile, Mexico, Argentina, and Peru, and USAID's Office of Foreign Disaster Assistance also participated. The exercise began with a simulation of an international deployment to a natural disaster. The exercise involved 62 role players reacting to multiple training scenarios, including an apartment building collapse, a parking garage collapse, and a highway bridge collapse. The exercise was designed to practice and evaluate the deployment of a heavy US&R team in a field setting, the set-up and management of base operations, and coordination with the Department of Homeland Security and others.

Emergency Support Function 9 designates FEMA as the federal coordinating agency for search and rescue operations, with support from the U.S. Coast Guard, the National Park Service, and the Department of Defense. Following the 1995 bombing of the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma, the US&R program added the ability to field task forces for National Special Security Events and first exercised this ability at the 1996 Atlanta Olympics and the 1997 presidential inauguration in Washington, D.C. After the terrorist attacks of September 11, 2001 (9/11), the US&R program developed operational capabilities for chemical, biological, radiological, nuclear, and explosive environments. In addition to providing annual appropriations, Congress appropriated $54 million to the task forces in 2002 as part of a supplemental appropriation. Between 1992 and 2014, the US&R task forces deployed to 77 events (see appendix II for the list of events), including the attacks on 9/11, Hurricane Katrina, Hurricane Sandy, the earthquake in Haiti, and prepositioning for National Special Security Events.

The US&R program includes 28 US&R task forces across the United States, as shown in figure 1. Each task force is sponsored by an emergency management response agency, typically a fire department. Task force members are career or volunteer first responders, such as canine handlers, physicians, and structural engineers. Every task force has up to 210 members who are to be capable of arriving on scene at a disaster within 16 hours of notification. The task forces are designed with "three-deep" rosters, meaning they strive to have at least three people to fill each staff position on the roster. The positions on the roster differ based on the type of team. Type 1 teams have 70 personnel with a full equipment cache and the capacity to respond to hazardous materials (Hazmat) and chemical, biological, radiological, nuclear, and explosive incidents. Type 3 teams have 28 personnel and a smaller equipment cache that is primarily designed to respond to weather-driven disasters.
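The team types and response-time requirements just described amount to a small configuration that can be written down directly. The sketch below, in Python, is illustrative: the class, field, and function names are notional, while the numbers come from the descriptions above.

    from dataclasses import dataclass

    # Notional encoding of US&R task force types; the figures reflect the
    # descriptions above, while the structure and names are illustrative.
    @dataclass
    class TaskForceType:
        name: str
        personnel: int
        full_cache: bool          # full cache, including Hazmat/CBRNE gear
        report_hours_ground: int  # hours to reach point of departure by ground
        report_hours_air: int     # hours to reach point of departure by air
        on_scene_hours: int       # hours from notification to on-scene arrival

    TYPE_1 = TaskForceType("Type 1", personnel=70, full_cache=True,
                           report_hours_ground=4, report_hours_air=6,
                           on_scene_hours=16)
    TYPE_3 = TaskForceType("Type 3", personnel=28, full_cache=False,
                           report_hours_ground=4, report_hours_air=6,
                           on_scene_hours=16)

    def is_three_deep(roster_positions: int, members: int) -> bool:
        """Check the "three-deep" staffing goal: at least three members
        available for each roster position."""
        return members >= 3 * roster_positions

    print(is_three_deep(TYPE_1.personnel, 210))  # True

Note that the 210-member ceiling cited above is consistent with three-deep staffing of a 70-position Type 1 roster (3 x 70 = 210).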
Each US&R task force is to maintain a cache of equipment in eight categories: communications, Hazmat, logistics, medical, planning, rescue, technical, and water, as shown in figure 2. The cache requirements are uniform for each task force, a fact that promotes interoperability among task forces when they are working together on a deployment. Each task force maintains a cache of more than 2,000 types of items, and there may be more than one of each item. For example, hydrogen peroxide is one item on the cache list used for wound care, but the team is required to carry 10 units, or bottles, of that item. The items in each category range from small, easy-to-transport items like handheld radios and hammers to large items that require special transportation, such as water rescue boats. Some items require replacement after use, like bandages, while others should last years, like a canine kennel. Some items in the cache are low-cost and routinely replaced, while others require regular maintenance and are costly to replace. For example, the 2014 unit price of the Raker Shore System, a pneumatic powered tool that is used for rescue, is $16,832, while aspirin used for medical treatment costs $0.03 per tablet.

The US&R program is managed by the US&R Branch within the Operations Division of FEMA's Response Directorate. The US&R Branch develops policies, procedures, and guidance for the US&R program and is available to provide technical assistance to task forces. In accordance with guidance from the US&R program's Strategic Group, FEMA allocates a portion of its annual appropriation to each task force for training exercises, equipment acquisition and maintenance, program management, and other support functions. Annual costs are funded through readiness cooperative agreements between FEMA and each of the 28 task force sponsoring agencies. Between fiscal year 2010 and fiscal year 2014, each US&R task force received an average of $1.1 million per year in cooperative agreement funds. Task forces' disaster-specific costs are funded through the Disaster Relief Fund. Between fiscal year 2010 and fiscal year 2014, the US&R task forces received approximately $25 million in reimbursements from the Disaster Relief Fund.

When states request federal assistance and the President declares a major disaster, IMATs must arrive at the affected state or jurisdiction within 12 hours. IMATs are made up of FEMA emergency management staff in areas such as operations, logistics, planning, and finance and administration. The IMAT program includes 3 national teams and 13 regional teams across FEMA's 10 regions (see appendix III for a map of IMAT locations). National IMATs typically respond to Level I catastrophic events, which require significant federal assistance and coordination in response and recovery. Regional teams typically respond to Level II and III incidents that may require a high or moderate amount of federal assistance. IMATs can also provide assistance in events that are not disasters, such as National Special Security Events. For example, the national IMATs assisted the Centers for Disease Control and Prevention in the Ebola response and provided support during the 2014 influx of unaccompanied minors. Regional IMATs have provided support at the Democratic and Republican National Conventions and the Super Bowl, as well as the United Nations African Leaders Summit in Washington, D.C.
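Because the cache is a uniform, itemized list with unit prices, quantities, and service lives, its replacement burden can be summarized with a simple rollup. The sketch below is notional: only the Raker Shore System and aspirin prices and the 10-bottle hydrogen peroxide quantity come from the text above; the remaining quantities, prices, and service lives are placeholder assumptions.

    from dataclasses import dataclass

    # Notional cache-item records; the real cache list contains more than
    # 2,000 item types across eight categories.
    @dataclass
    class CacheItem:
        name: str
        category: str            # one of the eight cache categories
        unit_price: float        # dollars per unit
        quantity: int            # units the task force must carry
        service_life_years: float

    CACHE = [
        # $16,832 unit price from the text; 10-year service life assumed.
        CacheItem("Raker Shore System", "rescue", 16832.00, 1, 10.0),
        # 10 bottles from the text; price and service life assumed.
        CacheItem("Hydrogen peroxide (bottle)", "medical", 2.00, 10, 1.0),
        # $0.03 per tablet from the text; quantity and service life assumed.
        CacheItem("Aspirin (tablet)", "medical", 0.03, 500, 1.0),
    ]

    def annualized_replacement_cost(items) -> float:
        """Spread each item's replacement cost over its service life to
        estimate a steady-state annual replacement budget."""
        return sum(i.unit_price * i.quantity / i.service_life_years
                   for i in items)

    print(f"${annualized_replacement_cost(CACHE):,.2f} per year")

A rollup of this kind, extended across the full cache list, is essentially the replacement life cycle analysis that, as discussed later in this report, the program's Logistics Functional Group was directed to develop.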
Oso, Washington Mudslide: On Saturday, March 22, 2014, at 10:45 a.m., a large and unprecedented landslide occurred north of the Stillaguamish River, along State Route 530 in Washington State, two miles east of the small town of Oso. The slide was massive, covering 6,000 feet of highway, destroying over 50 homes, blocking the North Fork of the Stillaguamish River, and creating a dam which then formed a lake where none existed before. On that day, there were 59 people at home when the slide hit; only 16 survived this horrific event, which occurred with no warning. The local and regional community responded and initiated immediate lifesaving and incident command operations, effecting numerous rescues and rapidly determining the size, scale, and complexity of the devastation. Requests for assistance escalated from the local level to the state and federal governments. FEMA deployed resources to the incident, including Urban Search and Rescue (US&R) task force assets with 20 canine human remains detection search teams and a national Incident Management Assistance Team, among other resources.

When deployed, an IMAT helps coordinate the federal resources required to respond to the incident. The IMAT may support first responders in providing shelter, emergency food and supplies, and restoration of government services. IMAT team members may also help state and local officials in obtaining temporary housing or counseling for disaster victims and providing estimates for replacement of damaged infrastructure. IMATs have responded to a range of disasters, including Hurricanes Isaac and Sandy in 2012, the 2013 floods in Colorado, and the 2014 mudslide in Oso, Washington.

The IMAT program was historically staffed by permanent full-time FEMA employees, whose salaries and benefits were supplemented by funds from the Disaster Relief Fund for expenses when the teams were deployed to specific disasters. After FEMA used nearly all of its IMATs in response to Hurricane Sandy, the agency increased the number of IMAT staff. FEMA increased total program staffing by replacing its 97 permanent full-time employees in fiscal year 2010 with 255 new CORE positions for fiscal year 2015. CORE IMAT employees are hired on 4-year contracts, and the positions may be renewed if there is ongoing disaster work and funding is available. Under the new team composition, the 3 national IMATs grew from 16-member teams staffed by permanent full-time employees to 32-member teams staffed by CORE employees. At the same time, the regional IMATs grew from teams of 4 permanent full-time employees to teams of 12 CORE employees (see appendix IV for national and regional IMAT position organizational charts). In establishing the CORE teams, FEMA shifted all program funding to the Disaster Relief Fund, and program expenditures increased from approximately $13 million in fiscal year 2010 to $35 million obligated from the Disaster Relief Fund for fiscal year 2015 (including all salaries and benefits and available program costs but not disaster-specific costs). (See appendix V for detailed IMAT positions and program funding.) As of July 2015, all IMATs have transitioned to the new CORE teams. Figure 3 shows the evolution of FEMA's incident response teams and changes in IMAT size and composition since 2006.

Emergency evacuations are the responsibility of state and local governments.
However, FEMA is responsible for providing direction, guidance, and technical assistance on state and local evacuation plans that contain integrated information on transportation operations, shelters, and other elements of a successful evacuation. FEMA provides evacuation support and response through its Office of Response and Recovery, primarily through three programs: the National Hurricane Program, the National Evacuations Contracts, and the National Mass Evacuation Tracking System (NMETS), a database tool that is intended to support states' transportation-assisted evacuees and facilitate data sharing among declared and host states or jurisdictions.

In fiscal year 2007, FEMA developed NMETS with initial program funding of $2 million. NMETS is designed to assist state and local officials in registering persons, pets, and personal property requiring government-assisted evacuation in response to a disaster or impending disaster; identifying their individual needs; accounting for them as they move through embarkation and debarkation; and connecting them with other family members, pets, and personal items. During evacuation, electronic barcodes link all household members and their possessions, and the key information collected consists of name, date of birth, gender, pre-evacuation address, family members, medical needs or equipment, and service animals. (A notional sketch of such a record follows.)
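The NMETS data elements just described map naturally onto a simple record structure. The sketch below is notional, not FEMA's actual schema: the class, field, and function names are assumptions introduced for illustration.

    from dataclasses import dataclass, field
    from typing import Optional

    # Notional evacuee record based on the NMETS data elements described
    # above; the barcode links household members, pets, and property.
    @dataclass
    class EvacueeRecord:
        barcode: str
        name: str
        date_of_birth: str
        gender: str
        pre_evacuation_address: str
        household_barcodes: list[str] = field(default_factory=list)
        medical_needs: Optional[str] = None
        service_animal: bool = False
        embarkation_point: Optional[str] = None
        debarkation_point: Optional[str] = None

    def record_embarkation(evacuee: EvacueeRecord, location: str) -> None:
        """Account for an evacuee as the household moves through
        embarkation; debarkation would be recorded the same way."""
        evacuee.embarkation_point = location

    rider = EvacueeRecord(
        barcode="HH-0001-A", name="Jane Doe", date_of_birth="1950-01-01",
        gender="F", pre_evacuation_address="123 Main St., Anytown",
        household_barcodes=["HH-0001-B", "HH-0001-P1"],  # spouse, pet
        service_animal=True)
    record_embarkation(rider, "Pickup Site 3, Bus 12")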
FEMA uses leading program management practices for goal setting, communication, and program execution to provide urban search and rescue services for a wide variety of disasters.

Goal setting: FEMA has ensured that the mission of the US&R program aligns with the goals and resources of the program. The US&R Strategic Plan outlines six mission goals, including response, readiness, communication, collaboration, accountability, and implementation of the US&R strategic training plan. US&R program officers established specific objectives, strategies, and performance measures to support the goals. For example, one goal is to save lives and protect property in an all-hazards environment. An objective supporting this goal is refinement of the structural collapse mission. In an effort to achieve this objective, the US&R program plans to develop, review, and update deployment concepts of operations for potential secondary missions such as Hazmat and human remains detection canine missions. State emergency managers we spoke with said that during a disaster, US&R task forces will do whatever is needed to achieve their mission. During recent disasters, this has meant that the US&R task forces provided assistance beyond traditional structural collapse operations. For example, in response to Hurricane Sandy, task forces were needed to provide humanitarian assistance in conducting wellness checks in affected neighborhoods. After the mudslide in Oso, Washington, US&R task forces were needed to conduct canine human remains detection searches, and the task forces deployed 22 canine units. The clear mission, objectives, and strategies set by the program give the task forces the authority to take action to save lives and help the task forces achieve their overarching mission. This alignment of agency mission with strategic goals and resources is a leading practice for effective program management.

Communication: FEMA communicates US&R program risks and performance issues through the US&R Advisory Organization. The advisory organization is composed of senior members or specialists from the 28 task forces. When the task forces raise an issue to the advisory organization, it is to assign subgroups to examine the issue in order to propose a solution or course of action. For example, at a September 2015 meeting of the organization, the Logistics subgroup briefed the advisory organization on a plan it is developing to reduce the equipment cache to ensure that task forces are able to rapidly respond to incidents. The advisory organization maintains an Action Tracker List with priority issues that the group addresses. A majority of the task forces (6 of the 9 we contacted) said the advisory organization was an effective mechanism for collaboration and communication and for addressing challenges within the US&R program. Creating a venue for communicating program risks and uncertainties and addressing issues that arise during the course of program performance is another leading practice for effective program management.

Program execution: Each of the 28 US&R task forces uses the same operations manual, which outlines procedures for task force activation, operation in the field, and demobilization. All 9 task forces we interviewed reported that they rely on the operations manual as a reference to conduct task force operations. In addition, each of the 28 US&R task forces is governed by a similar cooperative agreement between its sponsoring agency and FEMA, and the members of each task force must meet the same training standards and carry the same equipment cache. This uniformity in management of the task forces promotes interoperability and reliability, both for task forces collaborating in a disaster response and for states anticipating US&R assistance after a request to FEMA. This is also a leading practice for effective program management.

FEMA officials cited several benefits of the US&R program. For example, they said the US&R program is more cost-efficient than a full-time federal US&R resource. They estimated that, in order to staff three shifts (24-hour coverage) of an equivalent, federally maintained 70-member US&R team, it would cost $22.7 million per task force, or roughly $636 million across all 28 task forces. In comparison, the fiscal year 2014 budget for the US&R program (all 28 task forces) was approximately $35 million. They also said that US&R sponsoring agencies benefit from sponsoring a task force because the training and equipment they receive is often valuable for their primary function as a fire department or emergency response agency. Eight out of nine state emergency managers we interviewed expressed a positive opinion of the US&R program and said that they would request search and rescue assistance from FEMA if it was ever needed. None of the state emergency managers we spoke with identified challenges or issues in requesting US&R assistance from FEMA.

We also found that FEMA uses leading program management practices for conducting periodic reviews based on program standards to assess the US&R program. FEMA uses after action reports (AAR), administrative readiness evaluations, and the Operational Readiness Exercise Evaluation Program to assess the US&R program.

After action reports: After every deployment or exercise, each task force produces an AAR with a chronology of events, an evaluation of team effectiveness, recommendations for improvement, and lessons learned. We reviewed 32 AARs on responses to Hurricane Irene, Hurricane Isaac, Hurricane Sandy, the 2013 Oklahoma Tornadoes, the 2013 Colorado flooding, the 2013 Arkansas Tornado, and the Oso, Washington Mudslide.
We found that each AAR includes a standard format for communicating and addressing issues that arose during the course of the US&R task forces' response to specific disasters. For example, in response to the 2013 Colorado flooding, each task force deployed (four in total) issued an AAR containing the areas cited above, including a section on the task force's performance on six elements: search, medical, rescue, safety, communications, and logistics. For each element, a description of the task, an analysis of performance, and any improvement action to be taken were reported.

Administrative readiness evaluations: These evaluations assess task forces on their readiness for deployment and include two parts: an annual self-assessment conducted by each task force and a triennial peer review, led by members of peer task forces. Both reviews use the same assessment instrument to evaluate task forces based on their operational, logistics, and management readiness. We reviewed 28 evaluations, one for each of the 28 task forces, for fiscal years 2012 through 2014. On the basis of the results of the peer evaluation, task forces may be deemed "fully operational," "conditional," or "non-operational." If a task force is not fully operational, it must develop a corrective action plan in collaboration with officials from the FEMA US&R Branch and implement that plan. We found that 1 of the 28 US&R task forces had been in a conditional or non-operational status for 7 years. That task force was first placed on non-operational status in 2007 and regained conditional status in 2010, only to fall back to non-operational status in 2012. The task force was again placed on conditional status in 2013 and non-operational status in 2014, and in September 2015, FEMA announced that it would be removed from the US&R program. FEMA US&R officials said they had provided sufficient time for the task force to take corrective actions, but the task force failed to respond effectively. During our review, FEMA issued a draft program memorandum with administrative procedures for removing task forces that fail to regain fully operational status within 2 years of being placed on non-operational status. According to the draft program memorandum, a task force will have the opportunity to appeal the decision for its removal.

Operational Readiness Exercise Evaluation Program: This program requires task forces to conduct a large-scale training exercise every 3 years, develop a training plan based on that exercise, and update the plan annually. Task forces use the Exercise Evaluation Guide to assess their performance. We observed one of FEMA's large-scale exercises in April 2015, where US&R task forces conducted three rescue scenarios and were evaluated on their performance. In addition, we reviewed the results for another large-scale training exercise and found that the reporting followed the criteria laid out in the US&R evaluation guide. Task forces receive a score of fully, partially, or not complete for tasks such as the ability to assemble personnel and equipment at a designated location. Task force mobilization, deployment, tactical operations, and demobilization are some of the broad tasks assessed at the exercise conducted by Texas Task Force 1.

Leading program management practices include conducting periodic reviews of the progress of the program in delivering its expected benefits, thereby enabling the organization to assess and enforce program conformance with organizational standards.
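The readiness statuses and the draft 2-year removal rule described above can be expressed as a small status model. The sketch below, in Python, is a simplified, notional reading of that rule; the names and the year-by-year representation are assumptions introduced for illustration.

    from enum import Enum

    class ReadinessStatus(Enum):
        FULLY_OPERATIONAL = "fully operational"
        CONDITIONAL = "conditional"
        NON_OPERATIONAL = "non-operational"

    def removal_candidate(history: dict) -> bool:
        """Flag a task force that has not regained fully operational
        status within 2 years of being placed on non-operational status,
        per a simplified reading of the draft memorandum's rule.
        `history` maps a year to that year's ReadinessStatus."""
        for year, status in sorted(history.items()):
            if status is ReadinessStatus.NON_OPERATIONAL:
                window = [history.get(y) for y in (year + 1, year + 2)]
                if ReadinessStatus.FULLY_OPERATIONAL not in window:
                    return True
        return False

    # The tail of the status history cited above: non-operational in 2012,
    # conditional in 2013, non-operational again in 2014.
    history = {2012: ReadinessStatus.NON_OPERATIONAL,
               2013: ReadinessStatus.CONDITIONAL,
               2014: ReadinessStatus.NON_OPERATIONAL}
    print(removal_candidate(history))  # True: removal announced in 2015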
By establishing these multiple approaches to assessing the program, and by continually incorporating program changes through AARs, the advisory organization, and corrective action reviews, FEMA is positioned to respond to the US&R program's changing needs and requirements.

The aging of the task forces' equipment has not yet been identified as an operational issue by the various US&R assessments, but all 9 task forces we interviewed reported challenges funding the maintenance and replacement of their equipment caches. FEMA originally funded the caches between 1990 and 2005, including specialty equipment, such as Hazmat and chemical, biological, radiological, nuclear, and explosive equipment and water safety equipment added after 9/11, and new communications equipment added between 2005 and 2007. While some items are low-cost and routinely replaced after use, like bandages, other items have a much longer service life, may require regular maintenance, and are costly to replace. For example, each task force has pneumatic powered tools, such as the strut system, which is used to support collapsed buildings for search and rescue. The total strut kit, which consists of multiple expandable struts and other support equipment, cost about $72,000 in 2014 (see fig. 4 for an example of its use).

Task force leaders we interviewed identified challenges in funding the maintenance and upgrade of the equipment in their caches, along with adhering to manufacturers' recommended shelf life requirements. For example, the standard US&R radio system is 10 years old and is becoming outdated. In addition, US&R Hazmat equipment has a 5-year replacement cycle and is due for replacement. Through the US&R cooperative grant agreements, FEMA allocates about $155,000 to each task force annually for equipment maintenance and acquisition. US&R team leaders said that the allocation covers equipment maintenance but is not sufficient to acquire or replace equipment.

The 2013-2017 US&R Strategic Plan identified the need for the Logistics Functional Group within the advisory organization to develop a replacement life cycle analysis as part of a strategy to finance the replacement of high-cost items in the equipment caches. While FEMA program officials have not yet developed this strategy, they have drafted a position paper detailing the life cycle and costs (along with multiple replacement options) for one piece of critical equipment in the US&R cache: the self-contained breathing apparatus. In September 2015, FEMA replaced this piece of equipment (at a cost of approximately $1.1 million) using funding that had been intended for the task force that was decommissioned during the course of our review. In addition, beginning in 2015, FEMA changed the funding cycle for the annual grants from 1 year to 3 years in an effort to provide task forces more flexibility for high-dollar purchases. Task force managers we spoke with reported that a longer funding cycle could help them budget for equipment replacement.

The increased flexibility in the annual grant funding cycle and the position paper for one of the high-cost items in the equipment caches represent progress toward aligning task force resources with US&R program goals. However, FEMA has not developed a comprehensive plan that would enable program managers and task force leaders to prioritize and fund the replacement of all items in the equipment cache. A key component of effective program management is committing resources that support the goals and strategic mission of the program.
The Standard for Program Management calls for agencies to engage in resource planning to determine which resources are needed and when they are needed to successfully implement the program. FEMA program managers agreed that a comprehensive plan would help them better prioritize future high-cost equipment purchases, noting that they had not yet focused their management attention on this issue. Developing a plan to prioritize and fund equipment needs will help FEMA ensure that US&R teams have the equipment they need to fulfill their mission. FEMA uses some leading program management practices in implementing, assessing, and improving components of the IMAT program but does not use other practices that would enhance program management. Specifically, FEMA lacks a standardized training plan for all national and regional IMAT members and has an inconsistent assessment process, both of which limit the program's effectiveness. FEMA also has not developed a plan to address challenges related to staff attrition. FEMA Response Directorate officials have developed a number of strategic documents and policy guidance to provide goals and a management structure for implementing and managing the IMAT program in accordance with leading practices in program management. For example, FEMA's Response Directorate Operating Plan outlines the IMAT role in disaster response, while the Response Directorate Strategic Plan establishes strategic goals for IMAT development and performance. The Response Directorate Strategic Plan also calls for continuing emphasis on the quality of response teams, promoting a stable, flexible, and fully qualified workforce, and ensuring a robust training curriculum. The draft IMAT Procedures Guide provides details on IMAT protocols and the IMAT role in supporting FEMA's mission, while the IMAT Program Directive outlines the agency-wide policy for administration, implementation, and oversight of the program. These documents also offer guidance outlining the overall disaster response procedure and position-specific duties on the IMAT. Establishing clear goals and a management structure is a leading practice for effective program management. Additionally, FEMA Response Directorate officials developed mechanisms to communicate program risks and address issues in IMAT program performance, another leading practice. For example, to share lessons learned and best practices after deployments and exercises, IMAT team leaders hold monthly meetings. These meetings provide an opportunity for team leaders to address challenges or problems that arise during incidents and work to establish strategies to resolve these issues. FEMA Response Directorate officials also told us that IMAT members participate in monthly meetings so that those performing the same job functions can share experiences and strategies for effective disaster response. FEMA officials also communicate potential program risks and performance issues through three strategic working groups, which address program-specific challenges in the areas of retention, training, and equipment. These groups allow IMAT members to discuss issues and share findings and recommendations for program changes. For example, one working group is exploring ways to centralize certain types of equipment to be used during catastrophic incidents.
With the implementation of the new CORE IMAT program in 2013, FEMA Response Directorate officials also employed leading practices in program management for promoting program execution by enabling staff to obtain training. Specifically, they established a preliminary training program through the IMAT Academy and long-term training requirements for staff to acquire the requisite skills and abilities to carry out their position-specific responsibilities effectively and become fully qualified under the FEMA Qualification System (FQS). As part of their training, IMAT members first participate in the 14-week IMAT Academy, which includes orientation to FEMA's emergency management system, team building, and real-world exercises. IMAT members are then to complete subsequent cadre-specific training courses at the Emergency Management Institute and build experience through on-the-job training during disaster deployments and exercises to become qualified under FQS. However, IMAT leaders at the regional and national levels expressed concerns about limited access to training opportunities after the academy as well as limited funds available to enable IMAT members to fulfill training requirements. Specifically, all 10 regional IMAT representatives and 1 of 3 national team leaders said that there was not sufficient funding or access to training opportunities for staff during their 4-year contracts as CORE employees. Regional IMAT team leaders said that many required courses through the Emergency Management Institute are not offered frequently enough for IMAT members to attend, or have not yet been developed. IMAT leaders also said that limited funds and infrequent disasters result in inconsistent training across teams. They also said that because their regions do not have budgets dedicated to IMAT training, they do not track costs associated with regional IMAT training. FEMA Response Directorate officials told us that cadre managers in FEMA headquarters are responsible for ensuring that staff in their cadres have access to appropriate courses. Officials from one region also told us that without planning to ensure consistent access to required courses, it could take 2 years for some IMAT CORE members to complete all their cadre-specific requirements. Although state emergency managers reported having positive experiences and strong relationships with the previous IMAT teams, which were staffed by more experienced permanent full-time FEMA employees, they expressed concerns about a lack of qualified staff on the new CORE IMATs. For example, officials from two states said that the previous IMATs were very experienced and played a key role in providing management assistance during Hurricane Sandy in 2012. An official from another state described crucial support provided by the previous IMATs during the response to the Oso mudslide in 2014, including providing technical and subject matter assistance and coordinating federal resources. State officials had limited interactions with new CORE IMATs but described mixed experiences. For example, officials in two states expressed positive views of IMAT assistance and the states' overall relationships with the new IMATs, while officials from two other states where the new IMAT teams had deployed expressed concerns about the lack of experience among the new teams in performing key duties during disaster response.
Specifically, they told us that they spent additional time and resources “training FEMA staff on their state processes,” taking up time that they could have spent working on their state's disaster response. We have assessed FEMA's workforce planning, including similar issues related to training and the FQS system, in our prior work on FEMA's Reservists (temporary disaster response employees that FEMA deploys, as needed, to specific disasters). Specifically, in 2012 we reported that FEMA lacked long-term plans and goals related to training, and we identified the need for FEMA to establish timelines and a system to track training costs for its Reservist workforce. To improve FEMA's workforce planning and training efforts, we recommended that FEMA identify long-term goals, establish timeframes for developing performance measures, and develop a process to collect and analyze workforce and training data. In April 2015, FEMA said it would issue a Human Capital Strategic Plan addressing these recommendations by September 2015. However, we have not yet received documentation of this new plan. To improve management and training in the Reservist program, we recommended that FEMA take steps to improve monitoring and communication of program policies across all regions, establish criteria for program hiring, establish a more rigorous performance appraisal system, and implement training milestones and a mechanism to track training costs. FEMA has taken steps to address these recommendations, including updating policies and guidance, centralizing management of the program, implementing a new communication strategy, standardizing hiring criteria, establishing a training plan with milestones, and establishing a system to track training costs. According to FEMA officials, agency guidance regarding the performance management system for Reservists was due to be developed by July 2015. We have not yet received documentation confirming issuance of this guidance. FEMA Qualification System (FQS): FEMA's FQS is the latest initiative in FEMA's ongoing efforts to credential its workforce. According to agency officials, FQS is intended to standardize and streamline the certification process for all FEMA employees, in comparison to prior credentialing efforts, which focused on temporary Disaster Assistance Employees. As part of FQS, FEMA established performance and training standards for each FEMA disaster-related position. The FQS system is intended to certify an employee's status based on the employee's recognized performance and knowledge, as well as the training the employee has completed, measured against established standards. Under FQS, individuals are assigned a qualification title of entry-level “trainee” or the more experienced title of “qualified” based on training and experience levels. We also previously reported that staff deploying to disasters were not all trained to the FQS level to which they were assigned. We found that these long-standing challenges continue to impact the IMAT program. In particular, we reported on steps FEMA is taking to address long-standing workforce challenges related to the DHS Surge Capacity Force and FEMA Corps. We made five recommendations, including that FEMA improve recruitment, track costs associated with its workforce, and improve program performance tracking. FEMA concurred with our recommendations; however, we have not received documentation of actions it has taken or plans to take in response to these recommendations.
According to leading practices on workforce training, agencies should plan to ensure sufficient training opportunities as well as track the cost and performance of training programs to ensure effective program execution. Further, leading practices in human capital management call for federal agencies to develop long-term strategies for developing staff to achieve programmatic goals. Finally, the 2015 IMAT Program Directive requires all IMAT members to be trained according to FQS guidelines for incident management and incident support positions. FEMA Response Directorate officials said they had not developed an IMAT workforce plan to meet the training and funding needs of the new CORE IMATs because the program was early in its implementation. The officials also said that ensuring access to training specific to each cadre is the responsibility of cadre managers, not the IMAT program. To address regional officials' concerns about access to IMAT training opportunities, FEMA Response Directorate officials said they intended to develop a standard IMAT training program by forming a strategic working group. The working group's proposed IMAT training program will include ongoing training at the IMAT Academy for both experienced and new IMAT members, annual validation training, and quarterly exercises and training to improve interoperability among regional and national IMAT teams. They intend to work with the Emergency Management Institute to make courses available for IMAT members and implement the new training program by January 1, 2016. However, these efforts do not address the cadre-specific training needs of CORE IMAT members. FEMA Response Directorate officials said they also intended to take steps in response to concerns about limited training budgets raised by regional officials. Specifically, FEMA Response Directorate officials said they updated their budget planning documents in September 2015 to account for funds for IMAT training and program costs in fiscal year 2016; IMAT leaders told us that previously the FEMA Response Directorate did not have a budget allocation specific to IMAT training. They said they intend to provide annual funding for the new regional CORE IMAT teams from the Disaster Relief Fund. Though FEMA Response Directorate officials have established a working group to develop a training program and intend to begin accounting for regional IMAT training and other program costs, the process is ongoing, and we cannot yet assess its effectiveness or determine whether these steps will help to address the challenges we have identified related to access to and funding for IMAT training. Further, FEMA has not developed a comprehensive training plan for its IMAT members that links the IMAT training and cadre-specific training requirements to available training opportunities to ensure timely completion of the requirements. Such a plan would also help program officials better anticipate and budget for the costs of implementing the training needed for the new CORE IMAT teams to become fully qualified under FQS. Without a comprehensive plan to ensure sufficient training opportunities as well as to track the cost and performance of IMAT-specific and cadre training programs, IMAT program managers will continue to face challenges in implementing their new 2015 IMAT Program Directive and ensuring that IMAT teams consistently have the skills and qualifications to fulfill their disaster response duties.
FEMA demonstrates leading practices in program management including conducting periodic program reviews, developing metrics to track program performance, and creating a venue to address issues of program performance. FEMA demonstrates these leading practices through several assessment mechanisms that evaluate IMAT readiness, report on IMAT performance, and gather information that can be used to make program-wide changes. Operational readiness evaluations: These annual assessments measure the IMATs' ability to deploy to disasters and assist state and local partners, including measuring each IMAT team's performance in the areas of personnel, management, training, and equipment. In our review of all 10 operational readiness evaluations conducted in 2014 for both regional and national IMAT teams, we found that each team received a score of 90 percent or higher. For example, according to its 2014 operational readiness evaluation, the national IMAT West team demonstrated a strong performance in the areas of management and personnel, as well as effective use of communications equipment during its 2014 exercise. However, the exercise evaluation also pointed out that the team had several personnel vacancies that needed to be filled, as well as a lack of FQS qualification for many team members. Of the 10 teams that conducted an annual operational readiness exercise evaluation in 2014, the national IMAT West team was the only team that had adopted the new CORE IMAT structure, while the 9 other teams being reviewed were previous teams staffed by permanent full-time employees. As a result, the majority of the most recent operational readiness evaluations available at the time of our review did not assess FEMA's IMAT teams under its new staffing model. Thunderbolts: FEMA conducts annual “Thunderbolt” exercises, which are no-notice events to evaluate IMAT readiness in such areas as mobilization, communications readiness, and deployment to operations-based exercises simulating a catastrophic disaster environment. FEMA has previously used findings from these exercises to make changes to the IMAT program, including implementing recommendations to expand the teams and improve IMAT training. DHS Annual Performance Reports: FEMA also gathers and reports on IMAT preparedness and performance as part of the DHS Annual Performance Report. As part of this reporting, FEMA has developed annual performance metrics for IMATs, including the ability of IMAT teams to deploy to and stabilize an incident within 72 hours and establish joint objectives with state partners within 18 hours. FEMA Response Directorate officials capture and analyze these data through the National Watch Center, which tracks IMAT status and deployment time after disaster declarations. For fiscal year 2014, the DHS Annual Performance Report stated that 100 percent of IMAT teams met their targets for these two measures. In addition, the IMAT program has established individual and team-based performance measures to evaluate each individual's ability to carry out his or her own responsibilities within a given time frame. After-action reports: IMATs are required to produce AARs after disaster deployments to assess functions and tasks carried out during the deployment along with lessons learned, best practices, and areas needing improvement. Program officials in FEMA headquarters are to review these reports after every deployment.
Additionally, FEMA Response Directorate officials drafted an IMAT Procedures Guide with requirements and a template for AARs that all regions are expected to use after every deployment. These requirements for after-action reporting create a venue for FEMA Response Directorate officials to review and address issues of IMAT program performance. FEMA Readiness Assessment Program: The FEMA readiness assessment program evaluates performance and overall team readiness of IMAT teams as well as other teams involved in response and recovery. The readiness assessment program is a group within the Office of Response and Recovery that gathers data by observing exercises and conducting reviews after disaster deployments. Reviewers may then record their observations and, in some cases, recommendations in an Excel spreadsheet. FEMA program officials may then use these findings to conduct trend analyses to identify common themes or areas for improvement after exercises or a response. Conducting annual reviews, developing metrics to track performance, and assessing progress and addressing issues of IMAT program performance reflect leading practices in program management. However, while FEMA has demonstrated some leading program management practices in establishing requirements for these assessments, we found inconsistencies in IMAT program after-action reporting as well as limitations in FEMA's use of the FEMA Readiness Assessment Program to conduct comprehensive IMAT program analysis. Specifically, we found a lack of consistency in how frequently IMATs produce AARs after deployments to disasters or after full-scale exercises, what information they include in the reports, and how they share the results. According to our discussions with regional teams and our analysis of data provided by FEMA, not all regions produce AARs after every deployment. For example, 6 of 10 regional IMATs stated that they produce AARs after every major deployment, and none of the 3 national IMAT teams has produced an AAR since the implementation of the CORE staffing model in 2013. Four of 10 regional IMATs do not include improvement plan matrices in their AARs to track lessons learned and recommendation implementation. Although IMAT guidance requires an AAR after every deployment, 5 of 10 regional IMATs said that they do not produce and share AARs with FEMA headquarters after every disaster deployment. In addition, the 2015 IMAT directive does not include requirements for FEMA headquarters IMATs or regional IMATs to track implementation of AARs' recommendations, perform trend analysis across teams and AARs, or use findings to enact system-wide policy changes. Similarly, while the FEMA Readiness Assessment Program creates a venue to analyze IMAT program trends, there is no guidance for how these assessments will be used to evaluate the IMAT program. Specifically, while FEMA Response Directorate officials described the readiness assessment program as the primary means to analyze IMAT program trends, IMAT guidance does not establish policies or procedures detailing what is to be included in the assessments, when program officials are to conduct them, or how program officials plan to use the results. Furthermore, IMAT guidance includes no mention of the Excel spreadsheet or how it should be used. Response Directorate officials told us that IMAT teams do not generally use the spreadsheet to share feedback on program performance.
According to The Standard for Program Management, “agencies should collect, measure, and disseminate performance information and analyze program trends, and point to areas in need of adjustment,” and programs should conduct periodic program reviews to assess program viability and provide a venue to assess program conformance with organizational standards. FEMA Response Directorate officials acknowledged the inconsistent implementation of the AAR requirement in their program directive. They also said that they had not required all teams to use the template for AARs in the IMAT Procedures Guide because the document was in draft, but as of September 2015 they are requiring teams to use this template. Finally, Response Directorate officials told us that, although they do not have a system to track and document recommendations and their implementation, IMAT leaders share lessons learned and best practices during monthly team leader conference calls. However, without documenting the issues raised and tracking their resolution, FEMA's ability to effectively use the information shared during these discussions to improve the program will be limited. Similarly, without policies or procedures that describe how FEMA Response Directorate officials will track recommendation implementation, perform trend analysis, or otherwise use the readiness assessment program's findings to enact system-wide policy change for the IMAT program, FEMA lacks assurance that the data gathered will be used to improve the effectiveness of the IMAT program. Since implementing the new CORE IMAT concept in 2013, the IMAT program has experienced high attrition rates of new CORE employees across all regional and national IMATs, but program managers do not routinely gather data on attrition and have not developed a strategy to improve program retention. According to data provided by FEMA in September 2015, the IMAT program has experienced approximately 40 percent attrition across its 3 national teams since 2013, and all 7 regional IMATs that transitioned to the CORE concept in 2013 and 2014 reported some attrition. Discussions with IMAT leaders conducted by the strategic working group on retention revealed that turnover can have a negative impact on IMAT performance, relationships with state and other partners, and team cohesion, and it may limit the return on investment of hiring and training new CORE staff. See table 1 for details on FEMA's transition to its new IMAT teams and the associated attrition. According to IMAT officials from 9 of 10 regions and 1 of 3 national IMATs, the key reasons for the attrition in the initial years of implementing the program are the relatively low pay and lack of upward mobility for CORE IMAT members. FEMA's new pay-for-performance system for CORE employees starts new staff at a pay rate lower than that of the permanent full-time employees previously staffing the IMATs, and team members rely on disaster deployments and training exercises to receive performance-based pay raises and bonuses. Because pay-for-performance is tied to disasters and training, team leaders said that it can be challenging for team members to earn higher pay when deployment opportunities are infrequent and training opportunities are limited. Further, high attrition in the IMAT program can be costly because of the investment required to hire and train new staff. For example, as described above, all new IMAT members must participate in the 14-week IMAT Academy. According to FEMA, this costs approximately $39,000 per team member.
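As a rough check on the scale of these costs, the sketch below divides the $2.2 million in replacement training costs FEMA reported for fiscal years 2013 through 2015 (discussed in the next paragraph) by the approximately $39,000 per-member Academy cost; the implied head count is an inference from those two figures, not a number FEMA reported.

    # Back-of-the-envelope attrition arithmetic using FEMA's reported
    # figures: about $39,000 per member for the 14-week IMAT Academy and
    # $2.2 million in replacement training costs for fiscal years 2013-2015.
    academy_cost_per_member = 39_000
    reported_replacement_cost = 2_200_000

    implied_replacements = reported_replacement_cost / academy_cost_per_member
    print(f"Implied replacement members trained: about {implied_replacements:.0f}")
    # -> about 56 (an inference; FEMA did not report a head count)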
High attrition results in additional costs to FEMA to continually train new staff to replace those who leave before completing their 4-year contracts. For example, total IMAT attrition cost FEMA $2.2 million in additional IMAT Academy costs to train replacement CORE IMAT team members in fiscal years 2013, 2014, and 2015, based on FEMA's estimated cost per member. In response to concerns about attrition, FEMA Response Directorate officials established a working group to address IMAT retention in July 2015. According to FEMA officials, they plan to speak with all team leaders and begin to gather data on the reasons for IMAT staff attrition. FEMA Response Directorate officials stated that the working group will analyze and present its findings to program managers in December 2015. However, FEMA officials told us that their Human Capital Office does not have a process for systematically tracking IMAT attrition. Our prior work on leading practices in human capital management has found that federal agencies should develop long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. Additionally, according to The Standard for Program Management, “agencies should collect, measure, and disseminate performance information and analyze program trends.” Without a strategy that includes a process for systematically gathering attrition data and a plan to retain CORE employees, FEMA will continue to face potential impairments to IMAT readiness and increased program costs as team members continue to leave. After developing the original NMETS program in 2007, NMETS program officials decided to discontinue development and support of NMETS in 2008. They said this was the result of their discussions with state officials in Gulf Coast states, including Louisiana and Texas, who said they had purchased their own evacuation tracking systems and did not need NMETS. NMETS program managers decided to revive the program in 2009, after Louisiana officials identified continuing issues with their ability to track the critical transportation needs of survivors in Louisiana during Hurricane Gustav. Program managers said that although there were issues with the system software, they provided test versions of NMETS to 8 states (in 5 of FEMA's 10 regions) in 2010 to solicit feedback. However, in 2011 and 2012, the demands on the program managers associated with deployments for Hurricanes Irene and Sandy significantly limited work on, and funding for, NMETS. As a result, they did not follow up with the 8 states that had tested the most recent iteration of NMETS to identify any suggestions for improving the system. Following Hurricanes Irene and Sandy, NMETS program officials participated in two workshops in Chicago sponsored by a Regional Catastrophic Planning Team because several of the team's projects, funded as grantees of FEMA's Regional Catastrophic Grant Program, focused on evacuations. During the workshops, held in 2013 and 2014, program officials worked with the team to test and assess aspects of NMETS, such as the ability to access NMETS from a state's information system at multiple locations to generate reports and enroll evacuees into NMETS. According to NMETS program officials, they used the results of these assessments to further improve and revise the system. For example, they said they developed a way to access NMETS via the Internet and use the system to locate evacuees and unaccompanied minors to facilitate reunification of family members.
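This report does not describe NMETS's internal design, but a minimal sketch of the kind of record and lookup the officials described (enrolling evacuees and matching family members for reunification) may make the capability concrete; every field, name, and function below is an illustrative assumption, not NMETS's actual schema.

    # Purely illustrative sketch of an evacuee-tracking record and a
    # family-reunification lookup; all names and fields are hypothetical
    # and do not reflect NMETS's actual data model.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class EvacueeRecord:
        wristband_id: str             # ID issued at enrollment
        name: str
        family_group: Optional[str]   # shared identifier for one household
        unaccompanied_minor: bool
        current_location: str         # shelter or transport stop

    def locate_family(records, family_group):
        """Return all evacuees enrolled under one family group."""
        return [r for r in records if r.family_group == family_group]

    roster = [
        EvacueeRecord("WB-0001", "A. Doe", "DOE-01", False, "Shelter North"),
        EvacueeRecord("WB-0002", "B. Doe", "DOE-01", True, "Shelter South"),
    ]
    for member in locate_family(roster, "DOE-01"):
        print(member.wristband_id, member.current_location)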
To manage the program more consistently, NMETS program officials drafted an NMETS Strategic Implementation Plan in January 2015 to provide guidance to FEMA regional offices for communicating with and training state and local officials on the use and implementation of NMETS. The draft plan establishes goals and objectives and calls for a routine forum of NMETS users to review issues and concerns on application functionality and lessons learned. Officials also said they developed a licensing agreement, which includes the terms and conditions of NMETS use. During 2015, NMETS program officials provided the NMETS software to, and conducted webinars with, all 10 FEMA regions and provided the NMETS licensing agreement to several states. NMETS program officials also told us that they plan to conduct additional presentations to FEMA Regions II, III, and IV in fiscal year 2016. (See figure 5 for the NMETS implementation and assessment timeline since fiscal year 2007.) FEMA regional officials emphasized that NMETS is an optional evacuation tracking tool, and most said that states' interest in the system was limited. Specifically, FEMA regional officials in 7 of the 10 FEMA regions (accounting for about 39 states and territories) reported that their states and territories were either not planning to use NMETS or still considering whether to use it. Regional officials reported that 3 states are planning to use NMETS in case of an evacuation. FEMA regional and selected state officials told us that positive features, such as the ability to track unaccompanied minors or the states' ability to own the NMETS software without paying leasing fees, were the reasons they are electing to use the software. Conversely, regional and selected state officials told us that reasons for not electing to use the NMETS software included a lack of resources to support or maintain the NMETS system (e.g., laptops and wristbands) or staff to manage the system (e.g., staff needed to enter information into the system), a lack of system compatibility between NMETS and the state's internal database system to exchange data, a pre-existing state tracking system, or the lack of a perceived need for an evacuation tracking system. States' use of NMETS as of fiscal year 2015 is depicted in figure 6. NMETS program officials said they are taking steps to address NMETS concerns identified by states, such as finalizing the implementation plan and conducting a workshop on mass care and evacuation assistance in fiscal year 2016. FEMA intends to finalize the Strategic Implementation Plan as part of a national planning effort to revise the Mass Evacuation Incident Annex (for Emergency Support Function 6) to the National Response Framework; officials estimated the process would take 9 to 15 months. Because the process is ongoing, we cannot yet determine whether the steps described by the program officials will help to address historical inconsistencies in FEMA's management of the NMETS program. In the years since Hurricane Katrina, FEMA has taken steps to improve its ability to respond rapidly and effectively to disasters for three key programs and has incorporated many leading program management practices into these efforts. By clearly defining the US&R program's goals, communicating its guidance and policies, and ensuring the goals are met through continual program assessments and refinements, FEMA has created an environment for continuing assessment and improvement.
However, FEMA does not have a program strategy for replacing and maintaining high-cost equipment, which would help further improve its management of the US&R program and better prioritize future equipment purchases to strengthen the task forces' readiness and capabilities to respond to disasters. Similarly, the clear policies and procedures, readiness goals, and assessment mechanisms FEMA has established for the IMAT program will help program managers in transitioning to the new CORE IMAT approach. However, changes in the program since Hurricane Sandy have created new challenges for program officials in training IMAT members and assessing the results of deployments, as well as costly and disruptive attrition at both the national and regional levels. Without a comprehensive plan to ensure sufficient training opportunities, FEMA lacks assurance that teams will have the skills and qualifications to fulfill their disaster response duties. Further, without policies or procedures that describe how FEMA will track implementation of recommendations and lessons learned from past deployments, FEMA's ability to improve the effectiveness of the IMAT program will be limited. Finally, until FEMA develops a more organized and systematic approach to understanding and addressing underlying attrition issues, FEMA will continue to face potential impairments to IMAT readiness and increased program costs as team members continue to leave. To enable FEMA to respond more effectively to disasters, we recommend that the Secretary of Homeland Security direct the FEMA Administrator to:
1. develop a comprehensive plan to prioritize and finance the replacement of equipment for the US&R task forces;
2. develop a comprehensive training plan that links the IMAT training and cadre-specific training requirements to available training opportunities to help ensure timely completion of the requirements;
3. implement a process to document, track, and analyze recommendations and implement lessons learned from Regional and National IMAT teams after disaster deployments; and
4. develop a workforce strategy to manage and improve retention that includes a process for systematically gathering attrition data and a plan to retain IMAT CORE employees.
We provided a draft of this report to DHS for its review and comment. DHS provided written comments on January 21, 2016, which are summarized below and reproduced in full in appendix VI. DHS concurred with all four recommendations and described planned actions to address them. In addition, DHS provided written technical comments, which we incorporated into the report as appropriate. DHS concurred with our first recommendation that FEMA develop a comprehensive plan to prioritize and finance the replacement of equipment for its US&R task forces. DHS stated that FEMA's US&R program managers and its Strategic Group have been working with FEMA Operations Division leadership to determine the appropriate method to address necessary equipment replacement for US&R task forces. They plan to develop a comprehensive strategy that prioritizes needed equipment replacements, as well as potential courses of action to finance these replacements. DHS estimated that the strategy will be completed by November 30, 2016. These actions, if implemented effectively, should address the intent of our recommendation.
DHS also concurred with our second recommendation that FEMA develop a comprehensive training plan that links the IMAT training and cadre-specific training requirements to available training opportunities to help ensure timely completion of the requirements. DHS stated that the FEMA Field Operations Directorate is currently conducting an analysis of the IMAT program that will identify key operational requirements for National and Regional teams. As an outcome of this analysis, the Directorate plans to develop a comprehensive training and exercise program for the IMATs. DHS estimates that these actions will be completed by August 31, 2016. These actions, if implemented effectively, should address the intent of our recommendation. DHS concurred with our third recommendation that FEMA implement a process to document, track, and analyze recommendations and implement lessons learned from Regional and National IMAT teams after disaster deployments. DHS stated that FEMA is developing and implementing formal procedures to document, track, analyze and incorporate lessons learned into annual training and exercise requirements as well as policies and performance measures applicable to the IMAT program. DHS estimates that these actions will be completed by June 30, 2016. These actions, if implemented effectively, should address the intent of our recommendation. DHS concurred with our last recommendation that FEMA develop a workforce strategy to manage and improve retention that includes a process for systematically gathering attrition data and a plan to retain IMAT CORE employees. DHS stated that FEMA is conducting an analysis of the IMAT program to include a review of attrition data. FEMA stated that it is also conducting an IMAT employee satisfaction survey to develop a greater understanding of employee concerns within the Program and plans to use the findings of the analysis and employee satisfaction survey to develop a strategy to address workforce management of IMAT CORE employees. DHS estimates that these actions will be completed by June 30, 2016. These actions, if implemented effectively, should address the intent of our recommendation. We will send copies of this report to the Secretary of Homeland Security, the FEMA Administrator, and appropriate congressional committees. If you or your staff have any questions about this report, please contact me at (404) 679-1875 or curriec@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report are listed in appendix VII. In addition to the contact named above, Christopher A. Keisling (Assistant Director), Aditi S. Archer (Analyst-in-Charge), Lorraine Ettaro, Jillian Feirson, Eric Hauswirth, Tracey King, Amanda Parker, Rachel Pittenger, Tovah Rom, and Su Jin Yon made key contributions to this report.
In a disaster requiring a federal response, the Department of Homeland Security's FEMA provides various response resources to state, local, and tribal governments. Such assistance can include deploying US&R teams to help locate survivors and human remains, IMAT teams to help coordinate and provide federal support, and evacuation assistance, when applicable. GAO was asked to review aspects of FEMA's disaster response programs. Specifically, this report addresses FEMA's efforts to implement, assess, and improve selected disaster response programs for urban search and rescue, incident management, and evacuation tracking. GAO reviewed documentation such as policies, procedures, after-action reports, and readiness assessments for these programs and deployments to selected disasters for fiscal years 2010 through 2014—capturing pre- and post-Hurricane Sandy disasters. GAO also interviewed FEMA and state officials, as well as officials from a nongeneralizable sample of nine US&R task forces, to gain insights into FEMA's efforts. The Federal Emergency Management Agency (FEMA) has taken steps to implement, assess, and improve select disaster response programs, but GAO identified opportunities to strengthen program management. Specifically, GAO found that FEMA uses leading management practices in implementing its Urban Search and Rescue (US&R) program. For example, FEMA has aligned the mission of the US&R program—to save lives and reduce suffering in communities impacted by a disaster—with its goal-setting efforts in its US&R Strategic Plan. It also communicates program risks to stakeholders and assesses performance so the program can be continuously strengthened. However, all nine US&R task forces GAO interviewed reported challenges in funding the maintenance and replacement of their aging equipment to ensure that it is not outdated and adheres to manufacturer standards. FEMA has not developed a plan to prioritize and fund the replacement of this equipment, and doing so would help ensure that these task forces are capable of meeting their important response mission. FEMA applies some leading program management practices in implementing, assessing, and improving its Incident Management Assistance Team (IMAT) program—such as setting strategic goals and identifying program risks—but does not use other practices that would enhance program management. National and regional IMATs are composed of FEMA employees hired on temporary 4-year contracts. GAO found that FEMA lacks a standardized plan to ensure that all national and regional IMAT members receive required training, and IMAT teams do not always develop after-action reports after disaster deployments and document lessons learned. GAO also found that the IMAT program has experienced high attrition across national and regional IMAT teams since its implementation in fiscal year 2013, and FEMA has not developed a strategy to address these challenges. Developing a plan to address training and retention challenges would help FEMA better meet IMAT program goals. FEMA's efforts to implement, assess, and improve its evacuation tracking system nationwide have been inconsistent due to a lack of state and local resources and limited interest in using the system. However, FEMA officials said they are taking steps to address concerns raised by users of the system, including technical issues with the software.
For example, FEMA has developed a new implementation plan to provide guidance to its regional offices for better communicating with and training state and local officials on the use of its tracking software and intends to finalize a system strategic plan in the next 9 to 15 months. Since these efforts are ongoing, GAO cannot yet assess the extent to which they will address the inconsistencies or user concerns with the system. GAO recommends that FEMA develop a plan to prioritize and fund the replacement of US&R task force equipment; a plan to ensure that IMAT teams receive required training and a workforce strategy for the retention of IMAT staff; and a process to document, track, and analyze recommendations and lessons learned from disaster deployments. DHS concurred with the recommendations and described plans to implement them.
Community and migrant health centers are financed in part with federal grants administered by HRSA. HHS awards grants to public and nonprofit entities to plan, develop, and operate health centers for medically underserved populations. To assist in providing health care to these groups, HHS awarded over $750 million in grant assistance in fiscal year 1996. Like all patients, those receiving care from community or migrant health centers may seek compensation for medical malpractice if they believe the treatment they received does not meet an acceptable standard of care. Patients may seek payment for economic losses such as medical bills, rehabilitation costs, and lost income, as well as noneconomic losses such as pain, suffering, and anguish. To obtain protection against malpractice claims before FTCA coverage became available, most centers had purchased private comprehensive malpractice insurance. The Congress enacted the Federally Supported Health Centers Assistance Act of 1992 (P.L. 102-501) to provide FTCA medical malpractice coverage to community and migrant health centers. This law made FTCA coverage available to grantees for a 3-year period beginning January 1, 1993, and ending December 31, 1995. It provided centers an opportunity to reduce their malpractice insurance expenditures. The Congress made FTCA coverage permanently available to centers in December 1995. FTCA coverage, which is provided at no cost to the centers, is an alternative to private comprehensive malpractice insurance and gives centers a chance to redirect their savings to the provision of health services. Centers opting for FTCA coverage may decide to purchase a supplemental or “gap” policy to cover events not covered by FTCA. Even with the purchase of a gap policy, HRSA expects that centers will spend less on insurance than they would if they continued to purchase comprehensive coverage. In a center not covered by FTCA, patients or their representatives would file a malpractice claim with the private carrier insuring the provider. Insurers are generally responsible for investigating claims, defending the provider, and paying any successful claims, up to a stated policy limit. If not resolved by the insurer, a claim could result in a lawsuit filed in state court. In addition to insuring centers against instances of malpractice, insurers may provide risk management services. Private carriers generally view these services as a way to reduce the incidence of malpractice and, in turn, reduce or minimize their liability. Malpractice claims against FTCA-covered centers are resolved differently from those filed against centers with private insurance. Patients of FTCA-covered centers must file administrative claims with HHS. Claims must be filed within 2 years after the patient has discovered or should have discovered the injury and its cause. Under FTCA procedures, the claim is filed against the federal government rather than against the provider. After reviewing the claim, the HHS Office of General Counsel may attempt to negotiate a financial settlement or, if it finds the case to be without merit, disallow the claim. Claimants dissatisfied with HHS' determination have 6 months to file a lawsuit against the federal government in federal district court. Claimants may also file suit if HHS fails to respond to their claims within 6 months of receipt.
If a claim results in the filing of a medical malpractice suit, the Attorney General, supported by the Department of Justice (DOJ), represents the interest of the United States in either settling the case out of court or in defending the case during the trial. If the claim continues to trial, the case is heard in a federal district court without a jury; punitive damages cannot be awarded. Protection against malpractice claims through FTCA has been provided to federally employed health care providers since 1946, when the government waived its sovereign immunity for torts, including medical malpractice. Prior to this date, individuals were prohibited from bringing a civil action against the federal government for damages resulting from the negligent or other wrongful acts or omissions of its employees acting within the scope of their employment. Since then, the federal government has defended malpractice claims made against federal employees practicing medicine at agencies such as the Department of Veterans Affairs, the Indian Health Service, and the Department of Defense, so long as those practitioners were providing care within the scope of their employment. While FTCA coverage may reduce centers' insurance costs, it imposes a potentially significant liability on the federal government because FTCA does not limit the amount for which the government can be held liable. Private policies generally limit the amount that can be paid on a claim, typically to $500,000 or $1 million. The total amount paid for all claims is also usually limited. For example, a policy with coverage limits of $1 million/$3 million will pay up to $1 million for each claim and no more than $3 million for all claims annually (a simple illustration of how these two limits interact appears below). As FTCA does not specify a monetary limitation, payments could be substantially higher than the monetary limits of private malpractice insurance policies. While most eligible centers did not rely on FTCA coverage during the demonstration period, centers now seem to be taking greater advantage of the opportunity to reduce their costs. The number of centers relying on FTCA coverage appears to have increased significantly. During the demonstration period, all centers were required to apply for FTCA coverage but did not necessarily cancel their private comprehensive malpractice insurance. As a result, most centers incurred the cost of private insurance during the demonstration period and were not relying on FTCA coverage. As of March 21, 1997, 452 of 716 eligible centers have applied for FTCA coverage. HRSA has told centers to cancel private comprehensive malpractice insurance when they come under FTCA but remains uncertain, as it was in the demonstration period, about which FTCA-covered centers have actually terminated that insurance and are thus not paying for duplicate coverage. During the demonstration period, many centers were uncertain that FTCA coverage would be permanently extended and therefore retained private insurance. Centers feared that converting back to private comprehensive malpractice insurance, if an extension was not enacted, would be both difficult and costly. Others were concerned about the possibility that not all claims would be covered by FTCA. While HRSA permits centers to combine gap policies with FTCA coverage, the expense and difficulty associated with obtaining gap coverage were additional concerns. The permanent extension of FTCA and provisions in the new law appear to have eased many of the centers' concerns.
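To illustrate the private policy limits described above, the sketch below applies a $1 million per-claim cap and a $3 million annual aggregate cap to a set of hypothetical claim amounts; under FTCA, by contrast, no analogous caps apply, which is why a single large award could substantially raise total federal payments.

    # Illustrative sketch of how a private policy's $1 million per-claim
    # and $3 million annual aggregate limits interact; the claim amounts
    # below are hypothetical.
    PER_CLAIM_CAP = 1_000_000
    ANNUAL_AGGREGATE_CAP = 3_000_000

    def insurer_payouts(claims):
        paid_so_far = 0
        payouts = []
        for amount in claims:
            payout = min(amount, PER_CLAIM_CAP)                       # per-claim limit
            payout = min(payout, ANNUAL_AGGREGATE_CAP - paid_so_far)  # aggregate limit
            payouts.append(payout)
            paid_so_far += payout
        return payouts

    # Four hypothetical claims in one policy year; the last is squeezed
    # by the aggregate cap.
    print(insurer_payouts([1_500_000, 900_000, 1_200_000, 800_000]))
    # -> [1000000, 900000, 1000000, 100000]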
Since the demonstration began, private insurers have developed more gap policies to insure against incidents not covered by FTCA. The new law made FTCA coverage optional for centers. Centers that do not want FTCA coverage are no longer required to apply for it. In addition, the new law addressed other concerns raised by the centers during the demonstration period. For example, FTCA coverage was expanded to include part-time practitioners in the fields of family practice, general internal medicine, general pediatrics, and obstetrics and gynecology. Centers were also given greater assurance that the federal government would cover their claims. During the demonstration period, DOJ could invalidate HHS' decision to grant a center FTCA coverage after a claim was filed. Now, HHS' decision is binding upon the Attorney General. The possibility of reducing center costs also influenced many of the center officials with whom we spoke. For example, one center in New England reported that its malpractice insurance costs had been reduced by almost $600,000 since 1993. A center official there told us that the savings have been used to improve medical staff retention and will also be used to expand patient programs. Another center in the Midwest reported savings of $350,000. Of the center officials we spoke to who now intend to rely on FTCA coverage, all cited the opportunity to reduce costs as the main factor in choosing FTCA over private comprehensive malpractice insurance. Although FTCA participation appears to have grown substantially since the demonstration period, not all centers have opted for FTCA coverage. Of the approximately 716 centers currently eligible for this coverage, 264, or 37 percent, have not applied for it. FTCA is still a relatively recent option for centers, and some center personnel may be questioning the desirability of this coverage for their facility. Uncertainty about which practitioners and services are not covered by FTCA, the availability of private policies to cover any gaps, and questions about the FTCA claims resolution process may all contribute to a center's decision to retain private coverage. Center officials from two southern states told us that their malpractice premiums were low enough that there was little incentive to convert to FTCA coverage. Officials from other centers that do not have FTCA coverage told us that resistance from the medical staff and the loss of tailored risk management services are also contributing factors in their decision to keep private insurance. Few of the 138 FTCA claims filed against health centers since the beginning of FTCA coverage have been resolved. Although the number of FTCA claims filed against centers has increased since the demonstration period began in 1993, only five settlements have been made, and all have been relatively small. Table 1 shows the number of claims filed and compensation sought and awarded by fiscal year. In addition to the five claims that have been settled, seven others have been disallowed by HHS. The total amount of compensation sought by the 126 remaining claimants is in excess of $400 million. Thirty-two FTCA claims have resulted in lawsuits filed in federal court. The 94 remaining claims are pending within HHS. Current claims and settlement experience may not be an accurate indicator of future claims. Although claim payments to date have been relatively small, one large settlement or court award could dramatically increase the total.
Other factors also make it difficult to predict future payments. There may be a time lag between alleged instances of malpractice and claim filings, as claimants have 2 years from the date they discovered, or should have discovered, the injury to file a claim. However, a prior analysis of claims reported by centers before the demonstration period showed that their claims experience was considered favorable by actuaries in relation to the insurance premiums they paid. HRSA has drafted a legislative proposal limiting the federal government's liability for FTCA claims filed against migrant and community health centers. This proposal, initially recommended by HHS' OIG and currently under review by the Secretary of HHS, calls for capping the amount a claimant may seek in damages from an FTCA-covered center at $1 million. This would be comparable to the $1 million cap per claim that private insurance carriers typically place on malpractice policies, including those sold to health centers. If enacted, this proposal would, for the first time, limit the federal government's liability under FTCA and would create an exception applying only to federally funded health centers. According to HHS' OIG report, this cap could save the federal government as much as $30.6 million over a 3-year period if all health centers elected FTCA coverage. Of the 126 unresolved FTCA claims, which include the 32 pending lawsuits, 59 seek compensation in excess of $1 million. HRSA's collection of FTCA participation data has been limited. This information is necessary to determine whether FTCA coverage is reducing health centers' costs and is also critical to the agency's ability to provide risk management. Although HRSA has attempted to collect data related to centers' use of and savings under FTCA, these attempts have not been effective. HHS has also failed to respond to claimants in a timely manner, which gives them the opportunity to file lawsuits in federal court. While HRSA intends to provide centers with some risk management services, it has not developed a comprehensive risk management plan and presently does not intend to provide some of the important risk management activities currently provided by private insurers and other federal agencies. HRSA cannot accurately report the amount centers spent on comprehensive private malpractice insurance during the FTCA demonstration period, nor can the agency report with certainty the total cost reductions realized by FTCA-covered centers during that period. HRSA officials were unable to identify those centers that canceled these comprehensive policies during the demonstration period and relied on FTCA coverage. Although HRSA collected data from centers regarding their insurance costs and savings under FTCA, we found that these data were not reliable for determining whether centers canceled their private comprehensive malpractice insurance and reduced their costs. The form HRSA provided to centers was not accompanied by instructions. In addition, the form did not provide centers with a means of reporting and identifying all of their malpractice insurance expenditures. Consequently, centers may have supplied inappropriate data or reported expenditures inaccurately, while other information critical to determining actual cost reductions was not obtained. Without reliable information on centers' reliance on FTCA, it will be difficult for HRSA to target its limited risk management services on FTCA-covered centers.
Similarly, without sound data on cost reductions, HRSA will be unable to determine if coverage under FTCA saves centers money. HRSA is now taking steps to end dual coverage, which has hampered HRSA's data collection efforts and oversight of FTCA. While HRSA advised centers in April 1996 that they must choose between FTCA coverage and private comprehensive malpractice insurance, it did not establish a date after which duplicate insurance would no longer be an allowable charge to the grant at centers with FTCA coverage. We spoke with officials at 27 centers with FTCA coverage. Of those 27 centers, 6 were also covered by private comprehensive malpractice insurance. We subsequently advised HRSA that a deadline was needed to ensure that health centers reduce their costs by terminating duplicate coverage. HRSA officials agreed and recently issued a directive to FTCA-covered centers to cancel their private comprehensive malpractice insurance by March 31, 1997. In many cases, HHS has not contacted claimants regarding their claims, and some claimants have filed suit in federal court. Claimants are precluded from filing suit for 6 months unless HHS has denied the claim. For 22 of the 32 claims involving FTCA-covered centers that have resulted in federal lawsuits, HHS had not responded to the claimants or contacted them to discuss a settlement during the 6-month period. HHS officials told us that in many cases they had been unable to obtain documentation and medical reviews needed to assess the merits of these claims and were therefore not prepared to either settle or deny them. DOJ is now responsible for representing the government in these lawsuits. If HHS had achieved a settlement in any of these cases, some of the costs of FTCA administration associated with involving another federal agency, preparing for trial, and defending the case in court might have been avoided. Risk management provides an opportunity to limit financial losses resulting from allegations of improper patient care. It also offers providers a way to improve service to patients, avoid patient injuries, and reduce the frequency of malpractice claims. The health care experts we spoke with consistently promoted risk management as a tool to simultaneously minimize loss and improve the quality of patient care. Although the law extending FTCA coverage to centers does not direct HRSA to provide risk management, HRSA officials acknowledge the need both to minimize the federal government's potential liability and to provide risk management services to centers. HRSA has begun to provide centers with some of these services. However, HRSA is not planning as extensive a risk management program as some private insurance carriers or other federal agencies with FTCA malpractice coverage, such as the Department of Defense and the Indian Health Service. (App. III provides more details on the purpose and potential benefits of risk management for health care facilities.) A wide range of risk management services was offered to health facilities and practitioners by the insurance companies and federal agencies we interviewed. While some provided extensive services—including site inspections, periodic risk reassessments, and telephone hotlines to respond to center concerns—others offered these services only on request or only to larger facilities.
The more commonly offered services included claims tracking, analysis, and feedback on specific incidents; educational seminars; risk management publications; and the opportunity to obtain specific guidance on center concerns. Most of the health center officials we spoke with valued their insurer's risk management services. Many regarded the opportunity to discuss a new procedure or a potential malpractice claim with a risk manager as the most important feature of their insurer's risk management plan. Several officials said they were reluctant to cancel private comprehensive malpractice coverage in favor of FTCA because they would then lose the risk management services they have come to rely upon. In contrast, other centers have found that risk management services remain available from their private insurer if they purchase a supplemental policy to cover gaps in FTCA coverage. Additionally, HRSA has advised centers that the purchase of private risk management services will be an allowable charge to their grants. Recently, HRSA has begun to take steps to provide centers with risk management. HRSA has contracted with the National Association of Community Health Centers (NACHC) to provide telephone consultations with centers regarding FTCA and risk management issues. NACHC may also provide a limited number of special risk management seminars to centers through HRSA-sponsored training. HRSA officials told us that they will obtain a subscription for all FTCA-covered centers to the Armed Forces Institute of Pathology's annual publication, Open File, which is exclusively devoted to risk management issues. Individually tailored risk management assessments may also be offered to centers through HRSA's Technical Assistance Program. This assistance would supplement the agency's periodic site inspections of centers, already a routine component of its grant management process. While HRSA has taken important steps in providing centers with some risk management services, some critical risk management activities—performed by other insurers, including other federal agencies—have been excluded from its efforts. For example, it has not established a policy for providing centers with specific feedback based on their claims experience, nor has it instituted a useful claims tracking system, widely regarded by risk management experts as an essential component of managing risk. The experts we spoke to told us that a tracking system provides a way of identifying problem practitioners as well as patterns among practitioners and facilities. While HRSA officials acknowledged the importance of these risk management activities, they told us that the initial activities related to the implementation of FTCA for health centers necessarily took priority over the development of a comprehensive risk management plan. Community and migrant health centers face increasing financial pressures that jeopardize their service to large, medically needy populations. By opting for FTCA coverage, centers can reduce their malpractice insurance expenditures and redirect these funds to providing needed services to their communities. Malpractice coverage provided by FTCA differs in many ways from that provided by private malpractice insurance.
One of the significant differences is the lack of a monetary limit on liability coverage, which could play a major role in determining the federal government's ultimate cost of providing FTCA coverage to community and migrant health centers and which heightens the importance of a sound risk management plan. As more centers rely on FTCA for malpractice coverage, the federal government's potential liability will increase, as will the need for risk management. Insurers and other federal agencies have employed a variety of risk management practices to limit liability and improve clinical practices. The growth in FTCA coverage offers both the challenge of a greater federal liability to manage and a new opportunity to help community and migrant health centers improve the quality of their care. We recommend that the Secretary of Health and Human Services direct the Administrator of HRSA to develop a comprehensive risk management plan, including procedures to capture claims information and to identify problem-prone clinical procedures, practitioners, and centers. We provided HHS an opportunity to comment on a draft of this report, but it did not provide comments in time for inclusion in the final report. However, program officials provided us with updated claims information and offered several technical comments based on their review of the draft, which we have incorporated as appropriate. We also discussed the findings presented in this report with program officials, who generally agreed with the facts we presented and with our evaluation of HRSA's management of FTCA coverage for community health centers. We are sending copies of this report to the Director of the Office of Management and Budget, the Secretary of Health and Human Services, and interested congressional committees. We will make copies available to others upon request. Major contributors include Paul Alcocer, Geraldine Redican, Barbara Mulliken, and Betty Kirksey. Please call me at (312) 220-7767 if you or your staff have any questions concerning this report. To review HHS' implementation of FTCA coverage for community health centers, we spoke with officials from HRSA's Bureau of Primary Health Care in Bethesda, Maryland, as well as the agency's regional FTCA coordinators. To assess the FTCA claims resolution process and to determine the status of claims filed, we met with and obtained data from the Public Health Service Claims Office, HHS' Office of General Counsel, and DOJ. However, we did not independently verify the status of these claims. To obtain information on why community health centers do and do not participate in the FTCA program, we interviewed officials from NACHC and three state primary care associations. We also interviewed officials from 35 community health centers, including 27 centers with FTCA coverage and 8 centers that were not participating in the FTCA program. To determine the types of risk management services provided to community health centers, we interviewed representatives of seven insurers and three risk management consulting firms providing these services. We also discussed these services with some of the community health center officials we interviewed. We identified the insurance carriers through discussions with HRSA officials in both headquarters and regional offices, community health centers, NACHC, and others knowledgeable about the malpractice market.
We selected carriers selling malpractice insurance in a variety of geographic areas, including both coasts, the Midwest, and the South. We also selected carriers with significant experience insuring community health centers. We estimate that, collectively, these carriers have insured over 300 community health centers against malpractice claims. We also discussed the unique risk management needs of community health care centers with a variety of health care experts. In addition, we contacted the Armed Forces Institute of Pathology and the Indian Health Service to discuss their risk management programs.
Alcona Citizens for Health, Inc. (MI)
Barnes-Kasson County Hospital (PA)
Brownsville Community Health Center (TX)
Citizens of Lake County for Health Care, Inc. (TN)
Columbia Valley Community Health Services (WA)
Country Doctor Community Clinic (WA)
Crusaders Central Clinic Association (IL)
Detroit Community Health Connection, Inc. (MI)
East Arkansas Family Health Center, Inc. (AR)
El Rio Santa Cruz Neighborhood Health Center, Inc. (AZ)
Erie Family Health Center, Inc. (IL)
Grace Hill Neighborhood Health Center (MO)
Greater New Bedford Community Health Center, Inc. (MA)
Indian Health Board of Minneapolis, Inc. (MN)
Kitsap Community Clinic (WA)
La Clinica de Familia, Inc. (NM)
La Clinica del Pueblo de Rio Arriba (NM)
Lamprey Health Care, Inc. (NH)
Laurel Fork-Clear Fork Health Centers, Inc. (TN)
Lawndale Christian Health Center (IL)
Manet Community Health Center, Inc. (MA)
Memphis Health Center, Inc. (TN)
Missoula City/County Health Department (MT)
Model Cities Health Center, Inc. (MN)
Ossining Open Door Health Center (NY)
Perry County Medical Center, Inc. (TN)
Presbyterian Medical Services (NM)
Providence Ambulatory Health Care Foundation, Inc. (RI)
Sea Mar Community Health Center (WA)
Shawnee Health Service Development Corporation (IL)
Southern Ohio Health Services Network (OH)
South Plains Health Provider Organization, Inc. (TX)
Southwest Community Health Center, Inc. (CT)
The Clinic in Altgeld (IL)
Westside Health Services, Inc. (NY)
Risk management offers physicians and other health care practitioners and facilities a means of improving patient services, avoiding patient injuries, and reducing the frequency of malpractice claims. Organizations such as the American Medical Association, the American Hospital Association, the Joint Commission on the Accreditation of Healthcare Organizations, and the Physician Insurers Association of America (PIAA) recognize risk management as an effective tool for minimizing liability and enhancing quality care. The insurers and health care experts we spoke with concurred that risk management offers the underwriter, or in the case of FTCA coverage the federal government, the possibility of preventing instances of malpractice and thereby reducing financial liability. They also told us that risk management can help educate physicians and other medical personnel while improving their performance. Many of the center officials we spoke with also valued risk management services. The insurance industry and federal officials we spoke with consistently underscored claims tracking and analysis as one of risk management's most critical components. Claims tracking and analysis provides a way of identifying patterns in the types of malpractice claims filed against providers. This information may be used to identify facilities or practitioners that pose risks, as well as problem-prone clinical practices.
It can also be key to implementing corrective actions, such as selecting a practitioner or an entire facility for other risk management services. Aggregating and analyzing claims data and sharing the results with health care providers may reduce the number of claims by bringing to light factors that lead to claims. Claims made and settled are analyzed by individual insurers, by organizations representing groups of insurers, such as PIAA, and by federal agencies administering health programs that FTCA covers for malpractice claims. Many insurers collect medical malpractice data. Data collected may relate to the cause of claims and their severity, the amounts requested and paid by type of injury, and demographic features of claimants and providers. For example, PIAA, which represents physician-owned or -directed professional liability insurance companies, routinely collects and analyzes data from 21 of its member companies. PIAA has issued special reports on topics such as lung cancer, medication errors, and orthopedic surgical procedures. This information can alert providers to situations that may put them at greater risk for a malpractice claim and increase their awareness of new or continuing problem areas. The federal government also recognizes the value of analyzing claims data as both a risk management tool and a means of improving quality care. The Armed Forces Institute of Pathology (AFIP) performs detailed claims analysis for all branches of the military and other federal agencies, such as the Department of Veterans Affairs, that are covered by FTCA. In addition to conducting studies, AFIP provides direct feedback and responds to queries from facilities seeking to improve performance and minimize risk. The Indian Health Service (IHS) provides health care services at both hospitals and outpatient facilities. IHS performs its own analysis of claims, although on a smaller scale than AFIP. IHS has tracked claims for 10 years and provides routine feedback to all facilities and practitioners after a claim has been resolved. It has also created a database of all filed claims and has issued reports of its analysis to IHS facilities.
Pursuant to a legislative requirement, GAO reviewed the implementation of Federal Tort Claims Act (FTCA) coverage for community health centers, focusing on the: (1) health centers' use of FTCA coverage; (2) status of claims filed against FTCA-covered centers through March 21, 1997; and (3) Department of Health and Human Services' (HHS) management of FTCA for community and migrant health centers, and its efforts to reduce claims through risk management programs. GAO noted that: (1) the permanent authorization of FTCA coverage, the greater availability of supplemental policies to cover incidents not covered under FTCA, and the reports of some centers already realizing substantial savings have contributed to the willingness of many centers to now obtain FTCA coverage; (2) although the Health Resources and Services Administration (HRSA) required centers to apply for FTCA coverage during the demonstration period, centers were not compelled to cancel their private comprehensive malpractice insurance; (3) although HRSA does not have complete data on center participation during the 3-year demonstration period, it appears that most centers retained their private comprehensive malpractice insurance during this time; (4) because these centers were covered by both FTCA and their private policies, they did not reduce their insurance costs; (5) of the 716 centers eligible for FTCA coverage, 452 have elected this coverage and are now required to cancel their private comprehensive malpractice insurance; (6) despite this level of participation, a significant number of centers have not reapplied for FTCA coverage since its recent extension; (7) as of March 21, 1997, 264 of the 716 centers eligible for FTCA coverage, or 37 percent, had not applied for it; (8) since the demonstration period began in 1993, there have been 138 claims filed against FTCA-covered centers alleging damages of more than $414 million; (9) however, the actual amount of the federal government's liability for these claims is unclear; (10) as of March 21, 1997, only five claims have been settled, with total payments of $355,250; (11) at the recommendation of HHS' Office of Inspector General, HRSA developed a legislative proposal that, if enacted, would limit the federal government's liability to $1 million for claims filed against FTCA-covered centers; (12) by extending FTCA coverage to centers, the federal government has assumed potential liabilities that need oversight and careful management; (13) HHS could improve its administration of FTCA coverage for community and migrant health centers by strengthening data collection efforts and claims management practices; (14) HHS has 6 months in which to either deny a claim or make a settlement offer before a claimant may file suit in federal court; (15) for 22 of the 32 claims that have resulted in federal lawsuits, HHS had not attempted to respond to the claimants during this 6-month period; and (16) risk management services can help centers minimize liability by reducing their financial exposure to claims.
In testimony before the U.S. Senate in March 2000, the Chief of Staff of the Army stated that the Army had to transform to meet current and future strategic requirements. The Army believes that the transformation is necessary to respond more effectively to (1) the growing number of peacekeeping operations and small-scale contingencies and (2) the challenges posed by nontraditional threats such as subnational and transnational terrorist groups. The Army plans to transform its forces over a 30-year period. The first phase of the Army's transformation is to form six IBCTs, the first two of which are being formed at Fort Lewis, Washington. The first of these brigades has been forming since fiscal year 2000. The Army plans to certify it as achieving its initial operational capability by May 2003, at which time it will be deployable. The second brigade is in the early stages of formation. The Army has programmed funding for six IBCTs and has announced the locations of the remaining four. Under current plans, all six brigades are to have been formed, equipped, trained, and ready to deploy by 2008. The Army is also considering how it might accelerate the fielding of the last three brigades so that all six can be fielded by 2005. Additionally, the 2001 Quadrennial Defense Review stated that an IBCT should be stationed in Europe. Because this was not in the Army's plans, the Army is considering how to establish an IBCT in Europe. Taken together, the IBCTs represent what the Army terms its Interim Force, because this force begins to meet the Army's rapid deployment needs for the next decade. Beginning in 2008 and continuing beyond 2030, the Army plans to transition to its Objective Force. During this period, all Army forces, including the IBCTs, are to be transformed into new organizational structures operating under new war-fighting doctrine. Their new combat systems are to be lighter and more mobile, deployable, lethal, survivable, and sustainable than current systems. Four competing research and development teams have completed work on alternative designs for these future combat systems, and a contract has been awarded to a single lead systems integrator. As the Army transitions to its Objective Force, it plans to maintain the organizational designs of a portion of its existing combat force, which it terms its Legacy Force, and to modernize selected equipment in this force. This equipment includes such major weapons systems as the Abrams tank, Bradley Fighting Vehicle, and Black Hawk helicopter. Figure 1 depicts these weapons systems. This selective modernization is intended to enable the Army to maintain capability and readiness until the future combat systems are delivered to the Objective Force. The Army expects the IBCT to provide a force capability that it does not currently have: a rapidly deployable early-entry combat force that is lethal, survivable, and capable of operating in all types of military operations, from small-scale contingencies like the Balkans' missions to a major theater war. It also expects to use the IBCT to test new concepts that would be integrated into the Army's future Objective Force. Many of these concepts are still under development. The IBCT is optimized for small-scale contingencies and is specifically designed to operate in a variety of terrains, including mountains and urban areas. Yet it is also expected to be capable of participating in a major theater war and addressing both conventional and nonconventional threats.
As an early-entry force, the brigade is expected to have sufficient built-in combat power to conduct immediate combat operations upon arrival in theater if required. It is designed to supply its own needs for 72 hours, after which it would need a source of resupply. The IBCT is intended, in general, to fight as a component of a division or corps but also to be capable of operating separately under the direct control of a higher headquarters, such as a joint task force. The Army expects that in many possible contingencies, the IBCT could initially be the single U.S. maneuver component under a higher headquarters. In a major theater war, the IBCT under current plans would fight as a subordinate maneuver component within a division or corps, augmented with additional mission-specific combat capabilities such as armor, aviation, and air defense artillery. The Army, however, is considering the need for an Interim Division structure that would include IBCTs as the maneuver forces, because some analyses have concluded that placing an IBCT, with its differing design, into an existing infantry or armored division might impede the division's ability to achieve its full combat capabilities. The Army expects to complete the new divisional concept by spring 2003 if the Chief of Staff decides to go forward with it. The IBCT is organized primarily as a mobile infantry force and will contain about 3,500 personnel and 890 vehicles. The brigade includes headquarters elements; three infantry battalions, composed of three rifle companies each; an antitank company; an artillery battalion; an engineer company; a brigade support battalion; a military intelligence company; a signal company; and a unique Reconnaissance, Surveillance, and Target Acquisition squadron. This squadron is expected to be the IBCT's primary source of combat information through the traditional role of reconnaissance, surveillance, and target acquisition. However, the squadron is also designed to develop a situational understanding of other elements within the operational environment, including political, cultural, economic, and demographic factors. This awareness is expected to enable the brigade to anticipate, forestall, or overcome threats from the enemy. The squadron offers the IBCT a variety of new systems and capabilities that are generally not contained in an infantry brigade, including manned reconnaissance vehicles and ground reconnaissance scouts, counterintelligence and human intelligence collectors, unmanned aerial vehicles, ground sensors, and radars. Moreover, the squadron's all-weather intelligence and surveillance capabilities, coupled with the digitized systems, are designed to enable it to maintain 24-hour operations. All six of the IBCTs are planned to be equipped with new light-armored wheeled vehicles, termed interim armored vehicles, which are significantly lighter and more transportable than existing tanks and armored vehicles. These include ten vehicle types that share a common chassis: infantry carriers, mobile gun systems, reconnaissance and surveillance vehicles, and others. These wheeled vehicles are expected to enable the IBCT to maneuver more easily in a variety of difficult terrains. The first vehicles were scheduled for delivery to the first brigade in April 2002. Meanwhile, the brigade has been training on substitute vehicles, including 32 Canadian infantry vehicles as well as German infantry carriers and nuclear, biological, and chemical vehicles.
These vehicles approximate the capabilities of the interim armored vehicles. Figure 2 depicts two of the interim armored vehicles. The brigade's digitized communications are designed to enable brigade personnel to "see" the entire battlefield and react before engaging the enemy. In addition to light armored vehicles equipped with digital systems, the IBCT is expected to rely on advanced command, control, computer, communications, intelligence, surveillance, and reconnaissance systems purchased from commercial or government sources. The IBCT's planned capabilities also differ in other ways from those found in traditional divisional brigades. For example, the Army determined that achieving decisive action while operating in various types of terrain, including urban settings, would require the brigade to possess a combined arms capability at the company level rather than at the battalion level. Focusing on dismounted assault, companies are expected to support themselves with (1) direct fire from weapon systems on the infantry carrier vehicle and from the mobile gun system and (2) indirect fire support from mortars and artillery. The Army is reinforcing this combined arms capability by developing a training program aimed at producing soldiers with a wider range of skills as well as leaders who can adapt to many different kinds of conflict situations. The Army expects the IBCT to rely on new sustainment concepts that will permit it to deploy more rapidly because it will carry fewer supplies and have lighter vehicles, resulting in less weight to be shipped. Because the IBCT's vehicles are smaller and lighter, the Army expects that the brigade can be transported within the theater by C-130 aircraft. There are more of these aircraft, and they provide greater access to airstrips than would be possible with the larger C-17 and C-5A aircraft that are intended for use in deploying an IBCT from its home station to the theater. Figure 3 shows a C-130 aircraft. The IBCTs will serve an additional purpose in that they will test and validate new doctrine and organizational structures as well as new combat training and leadership development concepts. As such, the Army expects the formation and operation of the IBCT to provide insights for subsequent transformation. In September 2001, Army officials announced the possibility of accelerating the formation of the last three IBCTs. Under this proposal, all six IBCTs would be formed by 2005, 3 years earlier than planned. A key to acceleration is the ability of the manufacturer to deliver the vehicles ahead of the current delivery schedule. According to this schedule, the first IBCT would begin receiving its vehicles in April 2002. The second brigade would begin receiving its vehicles in February 2003. The Army cannot acquire vehicles for more than the second IBCT until it meets certain legislative requirements. The Army must compare the costs and operational effectiveness of the interim armored vehicle with those of its existing vehicles before it can acquire the vehicle for the third IBCT. The Army must also complete an operational evaluation of the first IBCT.
The evaluation must include a unit deployment to the evaluation site and execution of combat missions across the spectrum of potential threats and operational scenarios. The Army cannot acquire vehicles for the fourth and subsequent IBCTs until the Secretary of Defense certifies that the operational evaluation results indicate that the IBCT design is operationally effective and suitable. Thus, to accelerate the formation of the fourth and subsequent brigades as proposed, the Army would need to complete this evaluation and authorize vehicle production for the fourth brigade by June 2003, because the manufacturer needs 330 days of lead time to produce and deliver the vehicles. Our visits to the unified combatant commands covering Europe, Southwest Asia, the Pacific, and the United Nations Command/U.S. Forces in Korea confirmed their support for the Army's plans for the IBCT. They generally agree that the current Army force structure does not meet their requirements for a rapidly deployable, lethal, and survivable force. According to the CINCs, if the IBCTs are formed and deployable as planned, they should fill the perceived near-term gap in military capability. The CINCs view the IBCT not as a substitute for current force structure but as a means of providing a broader choice of capabilities to meet their varied operational requirements. However, CINC planners need information about the brigade's deployability and other limitations for planning purposes. Their anticipated uses of an IBCT vary from serving as an early entry force within the European Command to conducting reconnaissance and securing main supply routes in Southwest Asia for the Central Command. To ensure that the CINCs' needs and concerns are addressed as the transformation evolves, the Army has created a forum that meets periodically with their active participation. Our discussions with CINC officials confirmed their agreement with Army conclusions about a gap in military capability. In announcing the Army's plans for its transformation in October 1999, the Army's Chief of Staff pointed to this gap in current war-fighting capabilities and to the IBCT's planned ability to deploy rapidly. He noted that although the Army can dominate in all types of conflicts, it is not strategically responsive. The light forces can deploy within a matter of days but lack combat power, tactical mobility, and the ability to maintain sustained operations. On the other hand, armor and mechanized forces possess significant combat power and are able to maintain sustained operations but cannot deploy rapidly. CINC officials cited past military operations that pointed to this gap. For example, in the Persian Gulf War, the Army deployed a light infantry force—the 82nd Airborne Division—as the early entry force to deter Iraq and defend Saudi Arabia. However, there is general agreement that this force did not possess the anti-armor capability to survive and stop a heavy armored attack. Moreover, it took 6 months to position the heavy forces and associated support units and supplies needed to mount offensive actions against Iraq—a time frame that might not be available in the future. The urban operation in Mogadishu, Somalia, in October 1993, which resulted in the deaths of 16 U.S. soldiers, was also mentioned to illustrate the need for a force that is lethal and maneuverable and provides sufficient protection to U.S. forces.
CINC representatives also cited the difficulty of maneuvering heavy vehicles in peacekeeping operations in the Balkans as a reason why lighter, more maneuverable vehicles are needed. CINC officials pointed out many features of the IBCT that they felt would address the existing capability shortfalls. These features included its planned ability to deploy within 96 hours anywhere in the world and to provide a formidable, survivable deterrent force that could bring combat power to bear immediately if necessary. Also mentioned was its expected ability to transition rapidly from serving as a deterrent, to operating in a small-scale contingency, to fighting in a major theater war in the event operations escalated. CINC officials also commented on the IBCT's enhanced capabilities for situational awareness. Situational awareness is the ability to see and understand the battlefield before coming into actual contact with the opponent, through the use of advanced integrated systems that provide command, control, communications, computer, intelligence, surveillance, and reconnaissance capabilities. This expected improvement in awareness should provide a major comparative advantage over potential enemies. They also noted that the IBCT would support their rapid deployment needs by using interim armored vehicles that would be deployable within theater by C-130 aircraft, which are more readily available and better able to access small airfields, and which therefore allow the brigade to be moved more easily around the battlefield. CINC officials also pointed out that the IBCT relies on a family of vehicles with a common platform, which reduces logistics and support requirements through commonality of spare parts, fuel, and lubricants. While generally positive about the IBCTs, CINC officials cautioned that many questions remain about whether these brigades will be able to achieve all their envisioned capabilities, especially by the time they are certified for deployment. Concerns expressed to us included whether the IBCT would actually be available to deploy anywhere in the world in 96 hours, given many potential competing demands for mobility assets; what combat capability shortfalls might exist in the IBCT until it receives all its planned vehicles and weapon systems; whether new logistics concepts would succeed in reducing supply tonnages sufficiently to achieve rapid deployment and intratheater goals; when the vehicles that need further development, such as the mobile gun system and the nuclear, biological, and chemical vehicle, would be available; and whether the IBCT will be able to provide sufficient combat power when heavy forces are needed. CINC operational and logistics planners need specific data, not yet available, regarding the brigade's combat capabilities and logistics factors. They emphasized that it was important to have these data to adequately integrate the IBCTs into their plans. If, for instance, certain planned capabilities will not be in place when the first IBCTs become deployable, planners need to know so that they can plan to mitigate any resulting risks. For example, Army officials in Korea related their concern that the IBCT will not include the mobile gun system until after the Army certifies the brigade as operationally capable. In the Korean theater, the capability of this weapon system is a high priority. During our visits, CINC officials raised additional concerns about the IBCT's support.
Logistics planners in Korea said the amounts of fuel, water, and ammunition used by the brigade need to be analyzed to determine what the theater needs to have on hand when a brigade arrives. Although Korea contains significant support resources, logistics planners need to know the unit's unique and specific support requirements. In the Pacific Command, questions remain regarding the adequacy of the IBCT's 3-day supply of medical items. The CINCs' specific requirements for and planned uses of the IBCTs vary depending on their respective areas of operational responsibility. (See fig. 4.) Officials in both Europe and Korea expressed their views that IBCTs could be used effectively in their theaters of operation. Officials of the U.S. Central Command, which covers Southwest Asia, said that an IBCT had utility in their theater—notably Africa—where fighting in urban terrain might occur. According to Pacific Command officials, their theater could use Army forces that are more deployable, lethal, and sustainable than those currently assigned, especially in the urban areas prevalent in that theater. CINC representatives generally did not expect the IBCT to substitute for forces currently assigned. Rather, they saw the IBCT as providing them with a broader choice of capabilities to meet their operational needs. The European Command wants the Army to station an IBCT in its area of responsibility. As noted earlier, the most recent Quadrennial Defense Review stated that an IBCT would be stationed in Europe. Command officials emphasized that the planned characteristics of the IBCT—rapid deployment, enhanced situational awareness, tactical mobility, and lethality—are key to the requirements of the European theater. Further, the expected intelligence-gathering capabilities of the IBCT reconnaissance squadron will exceed those of the Command's currently assigned divisions. This capability is a necessity for missions such as those in the Balkans. Recognizing strategic and tactical mobility deficiencies from past and ongoing contingency operations in the Balkans, Command officials created a rapid reaction force in 2000 with some of the same characteristics as the IBCT. This rapid reaction force is composed of both light and heavy forces and is expected to deploy within 24 hours after being alerted. By using on-hand forces and equipment, the European Command has created an immediate reaction force that possesses some of the IBCT's capabilities. However, this reaction force lacks the intelligence, reconnaissance, and surveillance systems found in the IBCT that allow greater situational understanding of the battlefield. Furthermore, the force is not equipped with the new interim armored vehicles, which allow for commonality in sustainment requirements and training. Command officials said that an IBCT would complement this rapid reaction force by providing an early entry force that could bring more combat power to bear. The Central Command's primary area of responsibility, Southwest Asia, is one of two geographic areas that have required war planning for a major theater war. One official noted that an IBCT could contribute significantly to the CINC's theater engagement plans by providing mobile training teams and supporting other military-to-military missions with developing nations. Command officials stated that the IBCTs would offer new capabilities to their theater in certain circumstances.
For example, had an IBCT been available during the Persian Gulf War, it could have been used rather than the 82nd Airborne Division, since its planned anti-armor capability far exceeds that of a light division. Moreover, the IBCT would be useful in conducting reconnaissance and security missions and in securing main supply routes. Command officials stated that an IBCT would have been valuable had it been available for the urban mission in Mogadishu, Somalia, during October 1993. They added that the IBCT could also be used for evacuating noncombatants. Command officials noted that even though the IBCT offers them new capabilities, they would not substitute it for the heavy combat forces that are required for a major war such as the Gulf War. Army officials in Korea have stated that they want to station an IBCT in Korea. According to one senior Army official in Korea, the IBCT would provide the maneuverability and combat power needed to operate in the mountains and the increasingly urbanized areas of Korea. War planners in Korea expressed their view that the IBCT is optimized to meet the operational requirements of the Korean peninsula and that it would have more utility than Bradley Fighting Vehicles and M1 tanks. They explained that these latter weapons would have to be used primarily as stationary weapon platforms because the mountainous terrain and sprawling urban areas limit their use. They noted that IBCTs are more mobile than light forces and, once equipped with all their new weapon systems, will be both lethal and survivable. Further, according to CINC officials, the theater will not lose or diminish its combat capability by substituting IBCTs for heavy forces. While Pacific Command officials noted that Army forces currently assigned to the theater are capable of meeting most CINC operational requirements, they said an IBCT would bring certain desirable capabilities to the theater. For example, an IBCT would provide increased situational awareness, tactical mobility, and firepower currently unavailable within assigned Army forces. Command war planners explained that the IBCT's communications capabilities would help eliminate some communications shortfalls between and among the Command's service components. Moreover, an IBCT could be more effectively employed for stability and support operations in the Pacific, providing a rapid deployment capability. They mentioned that the planned capabilities of the IBCT offer both (1) considerable flexibility, through substantial nonlethal capabilities for use in stability and support missions, and (2) substantial lethality for more intense operations such as peace enforcement. Command officials noted that the IBCT's interim armored vehicles would provide better protection for infantry forces than is available to currently assigned infantry forces. The Army has established a CINC Requirements Task Force that provides a forum for the commanders to voice their current and future requirements. Army officials assigned to the combatant commands stated that the quarterly meetings have allowed the CINCs to ensure that their concerns are heard. Issues raised are then forwarded to the Army staff for resolution. For example, the task force has addressed issues such as how the U.S. Pacific Command plans to employ IBCTs in that theater as well as how to reintegrate the Army's first IBCT into operational plans.
According to combatant command officials, the forum has proven valuable enough that participation in the quarterly meetings is generally obligatory for command representatives. Fort Lewis officials said that they are generally satisfied with the progress being made to date in fielding the first IBCT and believe the IBCT is on track to meet its certification milestone of May 2003. However, the Army has encountered challenges in forming the IBCT at Fort Lewis. One challenge is a combat capability shortfall that will exist in the first IBCT when it is certified. Specifically, certain specialized interim vehicles, such as the mobile gun system, will not be available. Further, the interim armored vehicle delivery schedule has compressed the time available for soldiers to train on the vehicles; personnel turnover resulted in more time spent on digital training than planned; and the 96-hour deployment capability, while a goal rather than a requirement, will not be attained by the first IBCT. Army planners are still determining how the IBCT will obtain needed logistics support in the theater after its planned 72-hour supply is depleted. Other challenges relate more to the first IBCT; its home station, Fort Lewis; and, potentially, future home stations. These challenges include retention of skilled soldiers and the increased costs of providing maintenance support and facilities at Fort Lewis and, ultimately, at subsequent IBCT home stations. The first IBCT will not achieve all designed combat capabilities by the time it reaches its certification date because it will not have all the interim armored vehicle variants. One key variant it will lack is the mobile gun system, which is expected to be more capable than the system currently being used. Until the first IBCT is fully equipped with its complement of interim armored vehicles, it will be limited in its designed capabilities by its use of in-lieu-of vehicles. Specifically, until the mobile gun system vehicle and the nuclear, biological, and chemical vehicle arrive, the IBCT cannot fully meet its planned war-fighting capabilities. These vehicles—particularly the mobile gun system—are critical to meeting the expectations of the war-fighting CINC in Korea as well as the Army's transformation plans. Based on the current delivery schedule, at the time of its operational certification in May 2003, the first IBCT will have about 86 percent of its interim armored vehicles; the remaining 14 percent will be approved substitutes. Army regulations allow a unit to use substitute equipment and vehicles to meet its initial operational capability date. The first mobile gun systems and nuclear, biological, and chemical vehicles will be delivered beginning in 2004. The Army has encountered training challenges due to the delivery schedule for the interim armored vehicles and the need for extensive training on digital systems. Despite these challenges, training officials believe that the IBCT has made great strides in achieving training goals, including the transformation goal of developing soldiers who are skilled in a wide range of tasks so that they can transition quickly from small-scale contingencies to higher levels of combat and back. Because deliveries of the interim vehicles are not scheduled to begin until April 2002, the IBCT has been dependent on substitute wheeled infantry carriers loaned by the Canadian and German governments. These vehicles have been passed from unit to unit, thereby limiting training to company level and below.
Training officials said that although they were disappointed that they did not have sufficient vehicles to train as a battalion or brigade, a hidden benefit was that the IBCT was able to focus more training on individual and dismounted infantry skills. According to a senior Fort Lewis official, subsequent brigades should not experience the same training limitations as the first brigade unless, for any unforeseen reason, the contractor's expected delivery schedule cannot be met. However, the first brigade will experience a further training challenge in that the revised delivery schedule will compress the time available to train at the battalion and brigade level to just 3 months. Fort Lewis training officials would have liked a full 6 months to train after receiving most of the vehicles. However, a senior Fort Lewis official told us that he is confident that all the training requirements will be accomplished in the shorter time available. The need to train IBCT soldiers in digital systems has posed other challenges. Digitization provides a critical situational awareness capability to the IBCT similar to that afforded units at Fort Hood, Texas, under the Army's Force XXI program. These systems use sophisticated information technology that gives IBCT personnel superior battlefield information, enabling them to engage the enemy long before coming into contact. IBCT soldiers train with many digitized systems and must maintain specific levels of proficiency. Maintaining proficiency in these systems has been challenging due to personnel turnover in the IBCT. The Army does not currently have a formal digital sustainment-training program for individual soldiers and leaders. Fort Lewis officials cited their concern that without such a program, soldier skills will quickly erode. The Army Training and Doctrine Command is currently developing an individual digital sustainment-training program for the two brigades, which may be applicable to the entire Army. However, the Army has not yet implemented initial formal training in digitized systems within its institutional centers and schools; as a result, many individual leaders and soldiers arrive at the IBCT without any prior experience with the hardware or software. The Army plans to begin teaching digitized systems at its schoolhouses in 2004, but even then, the training will provide only an initial overview. As part of the Army's multi-skilled soldier concept, the Army's Infantry branch has combined the occupational skill specialties of infantryman, fighting-vehicle infantryman, and heavy anti-armor weapons infantryman into a single consolidated specialty whose soldiers will be trained in a wide range of infantry skills. Army officials spoke favorably about this concept and said that concerns that the Army may be requiring too many skills and capabilities for individual soldiers to absorb have not been borne out in their experience so far. In their view, individual soldiers at Fort Lewis have adapted well to the requirements of the digitized systems and the multiple combat skills needed for IBCT missions. They are generally satisfied with the progress made to date and believe that the IBCT is on track to meet its certification milestone of May 2003. Figure 5 depicts a schematic of this multi-skilled soldier approach.
The Army's ability to meet its rapid deployment goal for the first IBCT will depend on the availability of aircraft to transport unit equipment, the completion of infrastructure improvements at Fort Lewis, and Air Force certification of the IBCT as deployable. In commenting on the draft report, Army officials stated that Air Force certification of the interim armored vehicle is currently underway, with weight and load certification scheduled for May 2002. Initially, the Army announced that the IBCTs would be capable of deploying within 96 hours anywhere in the world, but the Army has since made this a goal for the IBCTs rather than a requirement. It has not established a substitute deployability timetable for the first IBCT. However, under current plans, the Army retains the 96-hour deployment requirement for the future transformed units entering the Army's force following formation of all six brigades in 2008. Other requirements for this future force are to be able to deploy a division in 120 hours and five divisions in 30 days. It appears that this 96-hour deployability goal for the first IBCT will not be achieved. Army transportation planners have determined that it would take 201 C-17 and 51 C-5 aircraft to transport all of the IBCT's equipment to a distant theater. (See fig. 6.) Army officials have stated that with all the competing demands for these aircraft, the Air Force currently does not possess sufficient numbers of them to meet the 96-hour goal for the IBCTs. Additional analyses would be needed to evaluate other ways to supplement this capability, such as through the forward positioning of some materials or the use of commercial aircraft. Strategic airlift is an Air Force responsibility and therefore beyond the purview of the Army. The installation where an IBCT is located will dictate the additional infrastructure requirements necessary to deploy the brigade. In October 2000, the Army's Military Traffic Management Command reported in its Army Transformation study that the existing infrastructure at Fort Lewis and McChord Air Force Base could not meet the Army's requirements for deploying the IBCT. The study identified five projects at the air base and Fort Lewis that needed to be constructed or upgraded at an estimated cost of about $52 million. Since the publication of the report, the Army has funded four of the five projects at a cost of more than $48 million and has begun work on one of them. The remaining project requires improvements to deployment ramps at McChord Air Force Base. According to Army officials, the remaining project has not been funded and will most likely not be completed before the Army certifies the IBCT as deployable in May 2003. Another impediment to achieving this goal is the need for Air Force certification that the IBCT and all its equipment items can be loaded on and deployed by aircraft. The Air Force cannot certify the unit until the vehicles are fielded and loaded aboard the aircraft in accordance with combat mission requirements. The fiscal year 2002 National Defense Authorization Act requires the Secretary of the Army to conduct an operational evaluation of the first IBCT and the Secretary of Defense to certify that its design is operationally suitable and effective. The evaluation is to include deployment of the brigade to the site of the evaluation. Generally, the IBCT cannot be deployed outside the United States until this requirement is met. A successful evaluation will be necessary if the Army is to achieve its goal of having six IBCTs by 2008.
Army officials recognized early on that some form of personnel stabilization policy for the IBCTs might be needed to provide sufficient continuity of leadership and training to the brigade. However, the delay in setting up the policy and certain exemptions from it have led to more turbulence than officials would have liked. They believe that the personnel turnover may have diminished training effectiveness in some instances and may have forced the unit to devote more time than it could afford to digitization training. Officials explained that the need for stabilization stems from the unique nature of the training being done at Fort Lewis and from the normal Army rotational policy, which generally has personnel rotating between assignments in 2 years or less. In short, when trained personnel rotate out of the IBCT, they take their training with them, and no equally trained personnel are available to rotate in. Consequently, the IBCT requires a constant program of providing basic training to incoming personnel on digital equipment, which is available only at Fort Lewis or Fort Hood. Moreover, because these skills are perishable, periodic refresher training is also required. Similarly, the IBCT is training to future war-fighting concepts and doctrine and to new concepts for leadership development. Finally, the first IBCT expects to begin receiving some of its interim armored vehicles, which are not available elsewhere in the Army. These unique training requirements argue for more continuity than can be achieved through the normal Army rotational policies that create a constant turnover of personnel within a 24-month period. Recognizing this need for more continuity, Fort Lewis officials expressed to Army headquarters their concern that permitting normal policies to remain in place would adversely affect the IBCT's readiness and ability to achieve certification on time. In response, the Department of the Army established a formal stabilization policy for the IBCTs in May 2001. Except for certain exemptions under this policy, soldiers must remain in an IBCT for 1 year following certification of the brigade's operational capability. By stabilizing its soldiers, the unit hoped to reduce the amount of time it spends training soldiers new to the IBCT on digital and other specialized equipment. Unfortunately, the stabilization policy has not been as effective as officials had hoped. First, the policy was not in place until May 2001, and by then, many IBCT soldiers had already begun leaving the unit under normal Army rotational procedures. As a result, IBCT trainers spent much of the year repeating their training for new soldiers. A second problem stemmed from the exemptions allowed under the policy. For example, soldiers may rotate out of an IBCT to attend a required school or upon promotion, or they may elect to leave an IBCT when they come up for reenlistment. Fort Lewis officials have been encouraged by the fact that IBCT soldiers reenlisted in fiscal year 2001 at higher rates than those achieved by either of the brigade's higher headquarters—I Corps at Fort Lewis and Forces Command (FORSCOM). As shown by figure 7, all three organizations achieved over 100 percent of the retention goals set by the Army. Officials noted, however, that IBCT soldiers who have elected to remain in the Army have not necessarily elected to remain in the IBCT.
As shown by figure 8, whereas an average of 34 percent of I Corps soldiers elected to remain in their units, only 27 percent of IBCT soldiers elected to stay with the IBCT. Moreover, despite the acknowledged need for continuity in the IBCTs, officials have not been capturing data on the reasons why IBCT soldiers are reenlisting to leave the brigade early and therefore lack information that could help them reduce personnel turbulence. Further, data are not available to determine which reenlistment options IBCT soldiers are choosing other than remaining in the unit. Fort Lewis officials said that the problems with stabilization may not be as severe for subsequent brigades, since the stabilization policy will be in effect from the beginning, unlike with the first brigade, where the policy was not instituted until months after formation began. As a result, Army officials anticipate that these later brigades will experience fewer departures. Personnel turbulence related to reenlistments would become more significant if the brigades experience slippage in their certification dates and lose more soldiers to reenlistment transfers. The Army specifically designed the IBCT to have fewer support personnel, fewer supplies, and lighter vehicles so that the brigade could be deployed quickly. As a result, the IBCT cannot provide all its own support; it requires installation support when located at its home station and, once deployed, other outside support after 72 hours. In addition, the home station must provide additional and costly facilities for that support. The IBCT is designed with an austere support battalion that contains fewer mechanics to support and maintain its vehicles. IBCT battalion commanders pointed out, however, that the number of vehicles to support has remained the same, even though the number of mechanics has been reduced by two-thirds. As a result, the IBCT is capable of performing only about one-third of its vehicle maintenance requirements and must depend on its home installation for scheduled maintenance support. Fort Lewis addressed this limitation by hiring contractors and temporary employees to meet the IBCT's support requirements. Fort Lewis officials estimate the IBCT's recurring maintenance requirements at about $11.1 million a year. After being deployed for 72 hours, the IBCT must be supported by other organizations due to its streamlined support battalion and, under transformation concepts, must "reach" for this support. Under the reach concept, the IBCT is expected to request fuel, ammunition, food, spare parts, water, and other supplies through an integrated distribution system over a linked communications network that includes the IBCT home station, contractor support, and multinational or foreign national commercial systems. Army logistics planners have not yet determined how all this will work. Further, in the interim, the support battalion's logistical systems are not yet integrated and lack a dedicated secure network interface to the Army's computerized Battle Command System. As a result, IBCT soldiers are being temporarily used as couriers to relay logistics data between headquarters. The Army's immediate solution to this challenge may be to increase the IBCT support battalion's personnel; for the long term, the Army is developing a system software fix. Supporting the IBCTs will also require Army installations to provide new and costly facilities.
The extent and cost of needed improvements at the other installations will vary widely depending upon the location. Army planners noted that it takes at least 3 to 5 years to plan and construct maintenance and other needed infrastructure facilities and that it will therefore be important to develop these plans as soon as possible. Moreover, Army officials have determined that, at a minimum, future IBCT home stations will require a mission-support training facility, a fixed tactical Internet, ammunition igloos, and digital classrooms. Examples of long-term requirements include live-fire ranges, maneuver-training areas, mock villages for urban training, and deployment facilities. Figure 9 shows the facility constructed at Fort Lewis to train soldiers in urban warfare techniques. At Fort Lewis and Yakima Training Center, existing support facilities—such as barracks, motor pools, ammunition storage facilities, and training ranges—need to be upgraded or constructed. To meet IBCT training needs, Fort Lewis converted an existing building to a mission-support training facility, which was faster than the normal new-construction timeline. However, not all support requirements have been funded. For example, Fort Lewis has requested about $10 million for IBCT communication infrastructure requirements that include a secure fiber optic upgrade to link to McChord Air Force Base. Installations also need the ability to integrate digitized systems between home stations and training centers. After the Army announced its planned transformation, the Army Chief of Staff designated the U.S. Army Training and Doctrine Command as the lead agent for transformation. The Command in turn established the Brigade Coordination Cell (BCC) at Fort Lewis. Its mission is to ensure successful formation of the first two IBCTs at Fort Lewis, synchronize efforts between FORSCOM and the Training and Doctrine Command, and provide insight on Army Battle Command System architecture. The BCC is empowered to coordinate directly with other Army major commands and agencies. It provides a centralized link between the IBCT and a variety of Army organizations responsible for doctrine, training, organization, materiel, and leadership development. Fort Lewis officials emphasized to us that resolving some of the challenges they are facing points to the need for subsequent installations to establish some sort of mechanism, such as a Brigade Coordination Cell, to deal with the many issues that will inevitably arise. The BCC is designed as a matrix organization and a conduit for feedback among various Army organizations on training, equipment, and logistics. IBCT soldiers as well as analysts from the BCC, the Army Test and Evaluation Command, and the Center for Army Lessons Learned evaluate and validate training doctrine provided by the Infantry and Armor schools. After training exercises, IBCT commanders and soldiers as well as the appropriate Army agencies provide informal and formal lessons-learned data to the cell. The BCC communicates these data to the doctrine writers for their use as they develop the training support packages for squad- through brigade-level collective tasks and formulate conceptual guidance for use by the IBCT commanders. Cell personnel are part of the working groups created to resolve issues in training, deployment, and logistics. A representative from the Army Materiel Command coordinates the vehicle fielding and its associated new equipment training between the IBCT and the civilian contractors.
The BCC supplements an existing staff hierarchy. It provides staff reinforcement and support for the I Corps staff while remaining outside the Fort Lewis chain of command. The BCC is not a higher headquarters staff for the IBCT. The cell’s focus is the same as its mission—to successfully deliver the first two IBCTs to the Army. Senior Fort Lewis officials have stated that the BCC has proven to be a valuable means of coordinating activities related to brigade formation and has offered several important benefits. For example, they noted that some of the difficulties that have arisen have been time-consuming to resolve. The existence of the BCC has relieved brigade operations personnel of such burdens so that they could concentrate on their substantive work, such as training. The BCC also acted as a communication intermediary between the IBCT and the institutional schoolhouses to develop training doctrine for the brigade’s new mission requirements. In addition, the BCC relieved Fort Lewis of some of the public affairs requirements. The acknowledged benefits of the BCC have led Fort Lewis officials to conclude that a similar organization may be needed at subsequent locations. In accordance with Army regulations, the Army routinely documents the lessons it learns from battles, projects, and reorganizations using memorandums, after-action reports, messages, briefings, and other historical documents. Various organizations traditionally chronicle Army strengths and weaknesses with respect to organization, peacekeeping missions, and wartime operations. During our review, we determined that, while fielding the initial IBCT at Fort Lewis, the Army learned valuable lessons that would be critical to future IBCT formation. These lessons were captured and communicated in a variety of ways. However, they were not always forwarded to the Center for Army Lessons Learned, as required, for retention. Further, there is no central location or database where all relevant IBCT lessons learned are available for research. Without having the lessons learned available, the Army may repeat mistakes in fielding subsequent brigades and may lose opportunities that could help it field subsequent brigades more efficiently. Army Regulation 11-33 designates the Center for Army Lessons Learned as the focal point for the Army’s lessons-learned system. The regulation stresses that all Army entities are to forward appropriate analytical data, including after-action reports, to the Center. After-action reviews are structured discussions among commanders and soldiers after military exercises to determine what went right or wrong and what can be improved. However, it appears that the Army is not taking full advantage of this repository to capture all relevant IBCT lessons learned. For example, we found that organizations that have played important roles in the initial brigades’ formation are all independently chronicling IBCT fielding information. Furthermore, there are indications that not all lessons learned are being forwarded to the Center. For example, in May 2001, the Army Test and Evaluation Command published two independent reports that assessed IBCT training events at the squad and platoon levels at Fort Lewis. These reports contained analyses and lessons-learned data about training exercises, equipment, and tasks. The Test and Evaluation Command reports stated that the after-action reviews identified significant issues in conducting adequate equipment training.
However, the reports are available from the Test and Evaluation Command, not the Center for Army Lessons Learned. The Center for Army Lessons Learned published one newsletter, dated July 2001, that identified some lessons learned and issues concerning the IBCT. This information was compiled from subject matter experts’ observations during training events, such as the Senior Leader and Tactical Leaders Course and digital equipment training, and from news articles printed in professional publications. Center officials stated that as a result of the terrorist attacks of September 11, 2001, homeland security has become the Center’s primary focus, not the IBCTs. Although the Center intends to publish a second newsletter addressing the support concepts and requirements for the IBCT, it does not anticipate publishing it until later in 2002. An official at the Center for Army Lessons Learned said that information comes in sporadically from disparate sources. Although fielding of the IBCTs is no longer a Center priority, the Center intends to continue collecting lessons learned and historical information regarding the fielding of the IBCTs and to publish subsequent newsletters as appropriate. Officials at Fort Lewis, at the behest of FORSCOM, hosted an Information Exchange Conference from November 27 to 29, 2001, to provide a forum for communicating IBCT lessons learned to officials who will be overseeing formation of subsequent IBCTs as well as to officials from organizations such as Army headquarters, U.S. Army Europe, U.S. Army Pacific, and the National Guard Bureau. At this conference, Fort Lewis officials noted the challenges that they had faced in several areas. The problem areas included personnel turnover and stabilization, digitization training, classroom shortages, issues related to maintenance and support, budget shortfalls related to vehicle maintenance, difficulties related to equipment turn-in, and deficiencies in installation infrastructure. Other lessons learned concerned information technology requirements and the need to establish working relationships throughout the Army. Fort Lewis officials told us that they hoped the conference attendees would use these lessons learned as they plan and budget for the subsequent brigades at their locations starting in fiscal years 2004 and beyond. However, it did not appear that these valuable lessons learned would necessarily be readily available for future use. We were told, for example, that FORSCOM would maintain copies of the various slide presentations given at the conference on its Web site for about 12 days. Moreover, there was no plan to submit this information to the Center for Army Lessons Learned for later availability to interested officials of subsequent brigades. While Army officials emphasized that lessons learned are being discussed at all levels throughout the Army, one official commented that he was waiting for the Center for Army Lessons Learned to contact him regarding the lessons identified by his department rather than proactively forwarding the information to the Center. Senior officials at Fort Lewis did not know of any other central repository for such information. In our opinion, with the frequent turnover of personnel in the brigades and in some installation functions, it would be valuable to have all IBCT lessons learned available in a central repository.
Successful formation of the first IBCT is critical to the Army's transformation plan because it will begin to fill a near-term gap in military capability and test new concepts that would be integrated into the future Objective Force. Although Army officials are pleased with the progress made thus far, concerns remain about whether all capabilities envisioned for the brigade will be achieved in time for the IBCT’s May 2003 certification milestone. Concerns include, notably, the unavailability of the mobile gun system, which provides a key combat capability, and the likelihood that the IBCT will be unable to meet the 96-hour deployment goal due to insufficient quantities of aircraft. Because the IBCT could be deployed to any of their theaters, it is important that CINC war planners know as soon as possible what planned capabilities are likely to be missing when the brigade is certified as having achieved its initial operating capability. Similarly, logistics planners will need logistics data soon to enable them to plan how best to meet the support requirements of the IBCT if it is deployed to their theater. Certain challenges have also arisen in forming the first IBCT at Fort Lewis. These challenges include concerns about retaining skilled personnel in the brigade, the ability of IBCT soldiers to sustain their skills on digital systems, and the need for and cost of facility improvements to support the formation of this brigade and, potentially, subsequent brigades. Taking actions now to address these and other challenges faced by Fort Lewis could enhance the chances that subsequent IBCT formations will be accomplished smoothly. The BCC set up at Fort Lewis appears to have been an effective means of funneling the day-to-day challenges that have arisen in forming the IBCT to the appropriate Army entity for resolution, thus allowing brigade officials to focus on critical training and operational matters. Each installation will likely experience similar issues and could benefit from a similar organization. The experiences of those forming the first IBCT and of Fort Lewis in hosting the IBCT provide examples of pitfalls and best practices that, if systematically recorded and made available in a central repository to others throughout the Army, could help the Army form subsequent brigades more efficiently. The Center for Army Lessons Learned is the designated focal point for lessons learned; however, the Center is neither collecting nor receiving all the lessons learned from forming the first IBCT. To ensure that regional CINCs have the information they need to plan for mitigating any risks associated with shortfalls in IBCT combat capability as well as logistical requirements, we recommend that the Secretary of Defense direct the Secretary of the Army to (1) estimate the combat capabilities that will exist at the time the IBCTs are certified as deployable and set milestones for providing this information to CINC planners and (2) provide CINC planners with relevant logistics information as soon as possible so that they can adequately plan how best to support the IBCTs. Because some mobility issues are beyond the Army’s purview and a long lead time could be necessary to rectify any identified shortfalls, we further recommend that the Secretary of Defense obtain the Army’s specific IBCT mobility requirements for meeting its goal of deploying a brigade anywhere in the world in 96 hours and determine how best to address any shortfalls.
To assist subsequent installations where IBCTs will be formed in their planning, we recommend that the Secretary of Defense direct the Secretary of the Army to (1) expedite development of a program to sustain personnel skills on digitized equipment so that it will be available for subsequent IBCTs, (2) collect and analyze data on why soldiers leave the IBCTs and take appropriate action to reduce personnel turnover, (3) estimate the extent and cost of facility improvements that will be needed at installations scheduled to accommodate the subsequent IBCTs to assist them in their planning, (4) establish a BCC-type organization at subsequent IBCT locations to deal with day-to-day challenges, and (5) provide a central collection point for IBCT lessons learned so as to make the information available to personnel throughout the Army. In commenting on a draft of this report, the Department of Defense generally agreed with the report’s findings and recommendations and outlined ongoing management actions to address the concerns noted in the report. In addition, we obtained technical comments from the Department on a draft of this report and incorporated them where appropriate. In responding to our recommendations that the Army estimate the combat capabilities and logistics requirements of the IBCT and provide the data to CINC planners, the Department acknowledged that since the first IBCT has not been fully fielded, there might be some planning information shortfalls that could inhibit CINC war planning. However, the Department noted that the Army, through the CINC Requirements Task Force, has provided a successful forum to address CINC concerns and derive solutions. We acknowledge that the CINC Requirements Task Force meetings provide a valuable communication tool. Nevertheless, during our fieldwork, CINC operational and logistics planners, who have been represented at these meetings, expressed concerns about not yet receiving specifics regarding the combat capabilities of the IBCT and its logistics requirements. As noted in our report, the planners emphasized that it was important to have these data to adequately integrate the IBCTs into their plans. Moreover, if certain planned capabilities would not be in place when the first IBCTs become deployable, the planners would need to know this. Accordingly, we do not believe that the CINCs’ participation in the Requirements Task Force can substitute for directly providing the data on planned combat capabilities and logistics requirements, as we recommended. Providing information as soon as possible to the CINCs would enable operational planners to begin their risk mitigation process in developing their contingency and operational plans. Regarding Army mobility requirements for the IBCTs, the Department stated that the Army would continue to define the mobility requirements to meet the goals for IBCT deployment. We recognize that prioritization and allocation of lift assets is an operational challenge to be faced by the CINCs and acknowledge that timely allocation of strategic and tactical mobility is needed for the IBCTs to meet planned operational capabilities. However, because the Army does not control mobility allocations, we believe that our recommendation is appropriately directed to the Secretary of Defense, who is in a better position to assess how best to mitigate any projected shortfalls.
With respect to our recommendation that the Army expedite development of a program to sustain personnel skills on digitized equipment that will be available for subsequent IBCTs, the Department said that its ability to accelerate digitized training at the proponent schools was limited by the equipment delivery schedules. Our recommendation, however, was directed at accelerating development of a sustainment training program for future use at the IBCT locations rather than at the proponent schools, as noted in our report. During our review, Army officials expressed concerns that individual soldiers’ digitization skills would quickly erode without a continuing, focused regimen of training. Therefore, we continue to believe that the Army needs to expedite developing such a program and implement it as a part of each IBCT’s training program. In responding to our recommendation regarding IBCT reassignments, the Department said that the Army is carefully managing IBCT personnel reassignments, pointing to the IBCT personnel stabilization policy that the Army instituted. Although this policy is intended to limit personnel turnover in the IBCT, the fact remains that IBCT soldiers are re-enlisting to leave the IBCT at a higher rate than soldiers in other I Corps units. We believe that collecting information on the reasons IBCT soldiers are leaving at this higher rate would help Army officials identify actions they might take to encourage re-enlistments in the IBCT. We also believe that this recommendation is especially important in that continuity is critical to achieving training objectives. In responding to our recommendation concerning facility requirements at subsequent IBCT locations, the Department stated that the Army routinely conducts estimates as part of the annual budgetary process. The Department said that the Army now has a draft transformation template for Army installations that will provide facility requirements to support IBCT stationing, training, and sustainment. The draft template is designed to give installation planners a starting point for determining their installation-peculiar requirements to support an IBCT. With regard to establishing a BCC-like organization at future IBCT sites, the Department stated that the Army has identified certain functions, processes, and support capabilities required to transform a unit into an IBCT. The Department noted that each IBCT location will have different levels of internal staff capability to execute transformation and that the Army will tailor, on a case-by-case basis, the resources required to fill the shortfalls at each location. We did not intend to dictate the size or organizational structure of the BCC-like organization we recommended. We agree that, as the Army learns about fielding IBCTs, requirements will differ from location to location and that the Army should tailor whatever organization it sets up to fit the situational needs. In response to our recommendation regarding establishing a central collection point for IBCT lessons learned, the Department acknowledged that some lessons learned have not been disseminated throughout the Army or sent to the Center for Army Lessons Learned. It said that the Army is planning to establish a central repository and procedures to inform the Army about past and future lessons learned from the Army’s transformation, as we recommended. Appendix II contains the full text of the Department’s comments.
To identify and gain an understanding of the anticipated capabilities of the IBCT, we discussed planned IBCT capabilities with Army officials at Fort Lewis, Washington; I Corps; the Brigade Coordination Cell; 3rd Brigade/2nd Infantry Division; and the Armor and Infantry Schools and the Combined Arms Center at Fort Leavenworth, Kansas. We also obtained and reviewed various briefing documents, the IBCT Organizational and Operational Concept, the Center for Army Lessons Learned newsletter, test and evaluation reports, and the IBCT’s modified table of organization and equipment. To determine whether the CINCs believe the IBCTs’ planned combat capabilities will meet their requirements, we received briefings and discussed IBCT capabilities with commanders and staff at the U.S. Pacific Command and U.S. Army, Pacific, Honolulu, Hawaii; U.S. Forces Korea and 8th U.S. Army, Seoul, Korea; U.S. European Command, Stuttgart, Germany; U.S. Army Europe, Heidelberg, Germany; and U.S. Central Command, MacDill Air Force Base, Florida. We reviewed documents that these commands developed concerning their respective areas of responsibility and planning. To identify challenges in forming the IBCTs, we concentrated our efforts on the first brigade being formed at Fort Lewis because the second brigade was in the early stages of formation. We attended weekly transformation update meetings at Fort Lewis from April 2001 through January 2002 to gain a sense of the challenges being faced. We interviewed the Commanding General and Deputy Commanding General for I Corps and Fort Lewis, the Deputy Commanding General for Training and Readiness, the Deputy Commanding General for Transformation (TRADOC) at Fort Lewis, their staffs, representatives from the Brigade Coordination Cell, the IBCT Commander and his battalion commanders, and the Army Materiel Command’s Director of Transformation Support on the extent of issues and challenges that had arisen in forming the first IBCT. In addition, to gain the perspective of the Army’s schools for training the IBCTs, we interviewed Army representatives from the U.S. Army Infantry Center, Fort Benning, Georgia; the U.S. Army Armor Center, Fort Knox, Kentucky; and the Combined Arms Center, Fort Leavenworth, Kansas. We obtained and reviewed IBCT training doctrine and manuals and discussed the IBCTs with senior Army officials and their staffs to understand IBCT training issues. Based on the results of the Army’s weekly IBCT meetings and on our interviews and analysis of documentation, we were able to discuss issues regarding potential challenges in the core areas of manning, equipping, training, supporting, and deploying the initial IBCT. To determine whether the Army had an effective means for capturing lessons learned that could be applied to subsequent brigade formations, we interviewed I Corps and Fort Lewis representatives and the BCC historian; received briefings from and interviewed representatives of the Center for Army Lessons Learned, Fort Leavenworth, Kansas; and attended the Information Exchange Conference held at Fort Lewis. We obtained reports published by the Center for Army Lessons Learned and the Army’s Test and Evaluation Command with regard to fielding the IBCTs at Fort Lewis. In addition, we acquired the current history files from the I Corps and Fort Lewis historian as well as the regulations for recording the Army’s history and lessons learned. As a result, we identified the Army’s process for capturing lessons learned that may be applied to subsequent IBCT formations.
Our review was performed from April 2001 to March 2002 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense and the Director, Office of Management and Budget. We will also make copies available to appropriate congressional committees and to other interested parties on request. In addition, the report will be available at no cost on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-5140. Major contributors to this report were Reginald L. Furr, Jr.; Beverly G. Burke; Timothy A. Burke; Kevin Handley; M. Jane Hunt; Tim R. Schindler; Pat L. Seaton; and Leo B. Sullivan.
In 1999, the Army announced its plans to transform its forces during the next 30 years to enable them to deploy more rapidly and operate more effectively during all types of military conflicts, from small-scale contingencies to major wars. The Army's goal is to be able to deploy a brigade anywhere in the world within 96 hours, a division within 120 hours, and five divisions within 30 days. The first step is to form and equip six interim brigade combat teams by 2008. Created to fill a gap in military capability, the teams are intended to be a lethal and survivable deterrent force that can be rapidly deployed around the world. The commanders in chief envision different uses for the teams according to the unique requirements of their respective regions. However, they generally agree that the teams should provide them with a broader choice of capabilities to meet their operational needs. The Army faces many challenges in assembling its first team. For example, some planned combat capabilities will not be present when the team is certified for deployment next year. In addition, the interim armored vehicle delivery schedule has compressed the time available for training. Army officials believe that the organization at Fort Lewis that was created to help assemble the brigades has been effective in dealing with day-to-day challenges. The Army is chronicling lessons learned in forming the teams, but this information is not readily available in a central source. As a result, the Army may be unaware of some best practices or may repeat mistakes in forming later teams.
Farming is inherently risky because farmers are exposed to both production and price risks. Farm production levels can vary significantly from year to year, primarily because farmers operate at the mercy of nature and frequently are subjected to weather-related and other natural disasters. Farm operators can also experience wide swings in the prices they receive for the commodities they grow, depending on total domestic and international production and demand. Over the years, the federal government has played an active role in helping to mitigate the effects of risk on farm income. On the production side, the government has subsidized the federal multiple-peril crop insurance program, allowing covered farmers to receive an indemnity payment when production falls below a certain level. To help mitigate price risk, the government administered price and income support programs for farmers of major field crops such as wheat, feed grains, cotton, and rice. However, the Federal Agriculture Improvement and Reform Act of 1996, commonly known as the 1996 farm bill, terminated the previous income support programs and replaced them with fixed but declining 7-year annual payments. Because these payments are not tied to market prices, farmers now have to take greater responsibility for managing their risk. To help farmers manage their risk, the U.S. Department of Agriculture (USDA) has introduced a new risk management tool, revenue insurance. Unlike the traditional multiple-peril crop insurance program, which insures against losses in the level of crop production, revenue insurance plans insure against losses in revenue. The plans protect the farmer from the effects of either declines in crop prices or declines in crop yields. The guarantees are based on market prices and on the historical yields associated with the insured acreage. As it does for traditional crop insurance, USDA shares in the cost of these plans by (1) subsidizing the premiums farmers pay, (2) paying private insurance companies to sell the insurance and process claims, and (3) paying a large portion of the plans’ underwriting losses (the difference between premiums and claims). Since the 1930s, federally subsidized multiple-peril crop insurance has been a principal means of managing the risk associated with crop losses. The Federal Crop Insurance Corporation (FCIC) administers the crop insurance program. Over time, this program has grown from covering a few crops and areas to covering most crops and areas. In addition, the Congress has periodically appropriated funds for disaster assistance to farmers when farming areas have suffered widespread crop losses because of weather conditions, such as drought or flooding. Between 1980 and 1998, USDA expanded the availability of crop insurance from 30 to 67 crops and from about one-half of the nation’s counties to virtually all areas of the country. Participation, measured in terms of the percent of eligible acres insured, rose from about 10 percent in 1980 to about 40 percent in the early 1990s. Under the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994, the Congress required farmers wishing to participate in other USDA farm programs to purchase a minimum amount of crop insurance. This requirement helped increase participation to over 70 percent of eligible acres. As the crop insurance program was expanded, federal costs (in constant 1997 dollars) averaged over $1.1 billion annually during the 1990s.
As shown in table 1.1, the government’s costs for crop insurance totaled about $8.9 billion from 1990 through 1997. Several types of government costs are associated with the traditional crop insurance program. For every dollar of premium established, the government pays an average of 40 cents and the farmer pays 60 cents. The government’s portion of the premiums totaled $3.9 billion from 1990 through 1997. In addition, for every dollar of premium, the government pays the participating insurance companies another 27 cents for the administrative costs of selling and servicing the policies. These administrative expense reimbursements to the private insurance companies totaled $2.8 billion from 1990 through 1997. Furthermore, the government paid a portion of program losses (the difference between premiums and claims). Over the years, the established premiums have not been sufficient to pay the claims on the policies. Under the 1994 reform act, USDA is required to achieve a loss ratio of 1.10—that is, for every dollar in premiums taken in, the claims paid would be expected to average no more than $1.10. For 1981 through 1996, claims paid averaged $1.26 per $1 of premium, but recent premium rate increases by the Risk Management Agency are expected to lower the loss ratio to about 1.10. Under the government’s standard reinsurance agreements with the companies, the companies share a limited portion of any program losses, but the government absorbs the vast majority of them; these absorbed losses totaled $1.4 billion over the period. Finally, the government paid $747 million for FCIC’s own operating costs. In 1993, we reported on the high costs associated with crop insurance over the years, and we pointed out that the insurability problems faced by the program hindered its actuarial soundness. Unlike insurers in other insurance industries, such as property and casualty, crop insurers cannot minimize their risk of loss by pooling participants with different levels of risk in their insurance program. In those other industries, the losses for one insured are independent of the losses for another insured. For the agriculture sector, however, losses are generally not independent of each other. For example, weather conditions, such as widespread drought, can cause production losses for many of the farmers in the same insurance pool. Furthermore, as we pointed out in the 1993 report, the crop insurance program is subject to conditions known as adverse selection and moral hazard. Because FCIC does not have sufficient farm-level information to differentiate among farmers’ risks, it may charge similar premiums to both high-risk and low-risk farmers. Consequently, high-risk farmers are more likely to find premiums attractive and therefore participate in the program in greater numbers than do low-risk farmers—a situation referred to as adverse selection. The report also noted that FCIC lacks sufficient information about individual farmers to detect moral hazard—when an insured farmer’s actions increase the chance or extent of loss. For example, when insurance payments seem to offer a better financial return than marketing a partial crop, a farmer may reduce inputs, such as fertilizer or pesticides, thereby increasing the risk of a production loss. The federal government also used income and price support programs in an effort to protect farmers’ incomes. Prior to 1996, USDA administered programs known as deficiency payment programs for several major crops—wheat, feed grains, cotton, and rice.
These programs were designed to protect farmers’ incomes against declines in prices through a complicated array of pricing mechanisms. In return for participating in these programs, farmers agreed to limits on the number of acres they placed into production. Unlike the deficiency payment programs, which were not reauthorized by the 1996 farm bill, a number of price support programs, such as the marketing loan program, are still in place. Marketing loan programs are designed, among other things, to help farmers in periods of severely low prices. Under the 1996 farm act, farmers were encouraged to produce in response to market forces alone, rather than to the expectation of federal payments. As part of this new direction in policy, the 1996 act replaced the previous income support programs with “production flexibility contracts”—agreements between the federal government and participating farmers that provide for fixed but declining 7-year annual payments through 2002. These annual payments are not tied to market prices. Farmers who signed these agreements are not restricted in the type or amount of any crop they plant. USDA estimates that the production flexibility contracts will cost a total of $35.6 billion over the 7-year period. Many farmers also use crop insurance in combination with nongovernmental strategies to manage the risk to their income resulting from price fluctuations. A common strategy is forward contracting. With this technique, farmers contract to sell the crop well before it is actually harvested and thus are able to establish a pre-harvest selling price and guarantee an outlet for the crop. Additionally, some farmers use hedging—a process whereby the farmer directly uses the commodity futures markets to establish a pre-harvest price for the crop. The farmers using these techniques to manage their price risk generally continue to use traditional multiple-peril crop insurance to manage the risk of crop loss. As an alternative to buying crop insurance and separately forward contracting or hedging, three new government-supported revenue insurance plans—Crop Revenue Coverage, Revenue Assurance, and Income Protection—provide farmers with a single policy that protects against both production and price risk. Crop Revenue Coverage and Revenue Assurance were developed by private insurance companies that requested and received federal reinsurance for the plans, whereas FCIC developed Income Protection as a pilot project under the terms of the 1994 crop insurance reform act, which called for a risk protection plan based on the cost of production. Income Protection and Revenue Assurance are similar in that each plan pays indemnities when the income from crop production is less than the revenue guaranteed at planting. Crop Revenue Coverage adds an additional dimension that allows the farmer to receive a larger payment if market prices have increased in the intervening period. For all three plans, market prices are tied to the futures prices on the commodity exchanges, such as the Chicago Board of Trade. Premiums for Crop Revenue Coverage are established as surcharges to the traditional multiple-peril crop insurance rates, whereas Income Protection and Revenue Assurance use methods that establish new rates independent of the traditional rates. USDA shares in the cost of these new plans in a manner similar to the method used to support traditional multiple-peril crop insurance.
First, just as with traditional multiple-peril crop insurance, USDA subsidizes the premiums farmers pay. The subsidy, which averages 40 percent of premiums for multiple-peril crop insurance, is limited, in the case of the new revenue plans, to the same dollar amount that would apply to the comparable multiple-peril insurance policy. Second, just as with traditional multiple-peril crop insurance, USDA pays private insurance companies a reimbursement for administrative expenses to sell the revenue insurance policies and process claims. This administrative reimbursement is a preestablished percentage of the premiums paid by the farmers. In 1998, USDA will pay the companies 27 percent of premiums to sell and service the multiple-peril, Income Protection, and Revenue Assurance policies. Because the premiums are significantly higher for Crop Revenue Coverage policies, USDA has limited the administrative payment on these policies to 23.25 percent of premiums. Finally, just as with multiple-peril crop insurance, USDA pays a large portion of any underwriting losses that may result if premiums are not high enough to pay all claims arising under the revenue policies. For 1998, USDA increased the portion of these losses that the companies must absorb, but the government continues to absorb most of the losses. Conversely, if underwriting gains occur—when premiums are higher than claims—the insurance companies and the federal government share in the gains. In light of the rapid expansion of, and the government’s significant financial participation in, the new crop revenue plans, the Ranking Minority Member of the House Committee on Agriculture asked us to (1) identify the differences between the three new revenue insurance plans, (2) report on the plans’ sales and claims experience, and (3) analyze the methodologies used to set the plans’ premium rates. We identified the differences in the various revenue insurance plans by reviewing USDA’s documentation for each plan as provided by the plans’ developers and comparing the plans’ features and protection levels. We confirmed our understanding of the various features of each plan by interviewing the Administrator of USDA’s Risk Management Agency at USDA’s headquarters in Washington, D.C., and the Senior Actuary at the Risk Management Agency’s main field office in Kansas City, Missouri; and by interviewing the developers of the revenue plans at Kansas State University, Iowa State University, and Montana State University. To determine the sales and claims experience of the three revenue insurance plans and traditional multiple-peril crop insurance, we obtained USDA’s computer files for crop years 1996 and 1997—the first years in which revenue insurance policies were sold. We identified national sales and claims information for each plan and analyzed this information, controlling for the differences in availability because of location, crop, and level of protection. We also examined the characteristics of Crop Revenue Coverage policies by measuring average acres insured, variability of year-to-year crop yields, and average yields per insured policy unit and comparing this information with the characteristics of multiple-peril crop insurance policies. Because Income Protection’s and Revenue Assurance’s sales were limited, we could not analyze their risk characteristics. 
To analyze the methods used to set premium rates and to identify uncertainties pertaining to premium rates, we reviewed academic literature on setting insurance rates and agricultural economics literature on crop revenue insurance and other issues such as the correlation between local crop yields and national prices. We also interviewed officials at USDA’s Economic Research Service, Office of the Chief Economist, and Risk Management Agency; the academic consultants on the plans at Kansas State University, Iowa State University, and Montana State University; and agricultural economists at several other universities who have performed research on crop and/or revenue insurance issues. In order to examine each revenue insurance plan, we interviewed the developers of the plans and reviewed the documentation they had provided to USDA as well as additional information they provided to us. We also evaluated each plan in light of our economic analysis, our discussions with the experts in these fields, and our review of the pertinent insurance and agricultural economics literature. We discussed our analysis with the developers of the plans and several independent reviewers. We conducted our review from July 1997 through March 1998 in accordance with generally accepted government auditing standards. We used the same files USDA uses to manage the crop insurance program. These files provide the most comprehensive information on farmers who have purchased crop revenue insurance. The three government-subsidized revenue insurance plans—Income Protection, Revenue Assurance, and Crop Revenue Coverage—differ in the revenue guarantees they provide to the farmer and in their relative cost to the government. Two of the plans, Income Protection and Revenue Assurance, set the revenue level that is to be protected at the time that crops are being planted, while the third, Crop Revenue Coverage, determines the protected revenue at either planting or at harvest, depending on when prevailing crop prices are higher. In terms of potential government costs, Crop Revenue Coverage is likely to cost the government significantly more than the other two plans over time. The three government-subsidized revenue insurance plans—Income Protection, Revenue Assurance, and Crop Revenue Coverage—establish a revenue target, or guarantee, for farmers. But they differ in how that guarantee is determined. For both Income Protection and Revenue Assurance, the farmer’s revenue guarantee is established when crops are planted. To determine that guarantee, the insurer multiplies the farmer’s expected production by a price established at planting. If the farmer’s revenue at harvest is below that expected preseason income, the farmer receives an insurance payment. Farmers whose revenue is at or above the guaranteed level do not receive a payment. Total revenue from the crop is the determining characteristic, not the level of production or the price alone. No payment would be made if a price decline is sufficiently offset by an increase in production or if a loss in production is offset by a sufficient increase in price. In contrast, the calculation of the amount of revenue guaranteed under Crop Revenue Coverage is more complicated. Crop Revenue Coverage guarantees a minimum revenue at planting that is determined by multiplying the prevailing futures market price at planting by the farmer’s historical production per acre. 
At harvest, the revenue guarantee is revisited, and the final guarantee is determined by multiplying the farmer’s historical production by the price at planting or at harvest, whichever is higher. If the price has increased in the period between planting and harvest, the farmer receives a payment for any lost production at the higher harvest price. This upward price protection feature assures the farmer that any lost production will be replaced at the prevailing market price, thus facilitating forward contracting by the farmer. If, however, the harvest price is lower, the original guarantee remains in force. (For additional information on how the revenue payment is calculated, see app. I.) The revenue insurance plans also differ in several operational features. Although futures prices form the basis for the payments under all three plans, the plans adjust these prices differently to account for variations between local and national prices. In addition, the methods used to establish which parcels of land will be covered for insurance purposes vary from plan to plan. Finally, only one of the plans is available across the country. The insurance payment for all three plans is determined by subtracting the revenue realized at harvest from the revenue guarantee. The starting point for determining revenue is the futures price for a particular commodity on its commodity exchange. However, each plan adjusts those prices somewhat differently. The differences center on how the national price prevailing on a commodity exchange is adjusted for local conditions. Generally, prices in local markets are a few cents per bushel less than the national price on the board of trade. These local differences are generally greater in areas more distant from major market centers and decline nearer to the market centers. Income Protection and Crop Revenue Coverage do not adjust for this factor, while Revenue Assurance makes a county-by-county adjustment. Table 2.1 shows the revenue guarantee features of the three plans. As the table shows, Income Protection makes no adjustment for the difference in prices that occurs from county to county. Instead, the plan uses one national price for all policies in all counties. For corn, the price used to determine the revenue guarantee for all policies is the Chicago Board of Trade’s average corn futures price in February for the December contract. Similarly, Income Protection determines actual revenue for all policyholders by multiplying the farmer’s actual production by the Chicago Board of Trade’s average corn futures price in November for the December contract. In contrast, Revenue Assurance determines the revenue guarantee for each farmer using the Chicago Board of Trade’s February prices for the December corn contract, adjusted by a county-specific factor. Revenue Assurance establishes this adjustment on the basis of the historical relationship of local harvest prices in each county to the Chicago Board of Trade’s prices in the harvest month. To determine the value of the harvested crop, Revenue Assurance departs from the Chicago Board of Trade’s prices. Instead, it uses a price USDA establishes for other purposes in each county—referred to as the posted county price.
During 1996 and 1997, Crop Revenue Coverage calculated each farmer’s revenue guarantee using the higher of (1) 95 percent of the average corn futures price on the Chicago Board of Trade in February for the December contract or (2) 95 percent of the average corn price on the Chicago Board of Trade in November for the December contract. To determine the crop’s harvested value, Crop Revenue Coverage used 95 percent of the average corn price on the Chicago Board of Trade in November for the December contract. For 1998, farmers may choose to insure at either 95 or 100 percent of the futures price. The three risk management plans differ in the choices they offer farmers for combining the individual fields of their farm or farms for insurance purposes. These differences in the way farmers can insure the land they farm are important because revenue payments differ depending on the actual configuration. Four land configuration arrangements are available to farmers: (1) whole farm (combining coverage on all fields for all combinations of covered crops in the county in which the farmer has a share in the crops produced); (2) enterprise unit (combining each of the fields in which the farmer owns or has a share of the crop produced in the county, regardless of ownership arrangement); (3) basic unit (combining each of the fields of a crop under a single type of ownership arrangement); and (4) optional unit (essentially, insuring on a field-by-field basis). In general, the more a farmer’s land is consolidated, the less likely it is that the farmer will have a loss large enough to trigger an insurance payment. This is because a farmer’s production, for insurance purposes, is averaged across all the insured fields. Income Protection is available only on the basis of the enterprise unit. In contrast, for Revenue Assurance, farmers can choose to configure their farm with any type of unit. Initially, Revenue Assurance establishes the premium rate for those choosing the basic unit. If the farmer wants to further divide the basic unit into optional units, the policy imposes a surcharge. However, if the farmer elects to consolidate coverage on the basis of an enterprise unit, the policy offers a discount from the initial basic unit rate. The policy provides an additional discount for the farmer who chooses whole farm coverage. Finally, Crop Revenue Coverage allowed basic and optional coverage in 1996 and 1997 and received approval from FCIC to add enterprise coverage for 1998. In 1997, 61 percent of Crop Revenue Coverage policies were based on optional units. As table 2.2 shows, the three plans are not available for all crops in all areas, although Crop Revenue Coverage is rapidly expanding to cover more crops in more states. All three plans are relatively new, which accounts for their limited availability in some areas of the nation. Crop Revenue Coverage, since its introduction in 1996, has rapidly expanded to all major crops in the major growing areas. Income Protection, developed by USDA in 1996, has been expanded slightly but is available only in scattered counties around the nation for certain crops. Finally, Revenue Assurance, which became available in 1997, covers only corn and soybeans in Iowa. To illustrate the differences between traditional multiple-peril crop insurance and the three revenue insurance plans, we examined the premiums and insurance claim payments for a hypothetical Iowa corn farmer.
For this illustration, we assumed the farmer purchased crop insurance at the 75-percent coverage level and had established a record of normal production of 120 bushels per acre. We also used 1997 prices under various combinations of 30-percent price and production increases and declines. Of course, payment amounts at other combinations of production and prices would be different. As shown in table 2.3, premiums for Crop Revenue Coverage would be higher than for traditional multiple-peril crop insurance because Crop Revenue Coverage provides additional benefits. In contrast, for this example, premiums for Income Protection and Revenue Assurance would be lower than for traditional crop insurance. The table also shows that, in the event of normal production combined with a 30-percent decline in prices, no payment would be due under the traditional multiple-peril crop insurance policy, but each of the revenue policies would provide payments. In the event of 30-percent declines in both production and price, each type of policy would pay, but the amounts paid would vary. However, in the event of a 30-percent decline in production combined with a 30-percent increase in price, the traditional policy and Crop Revenue Coverage would result in claims payments, but no claim payment would result under the terms of Income Protection and Revenue Assurance. Appendix I describes how the premiums and payments shown in table 2.3 were calculated. Crop Revenue Coverage is likely to be more costly to the government than the other insurance plans because of its higher reimbursements to participating companies for administrative expenses and because of potentially higher total underwriting losses (the excess of claims payments over total premiums). Furthermore, the plan’s promise to base the revenue guarantee on the price at planting or at harvest, whichever is higher, exposes the government to higher claims payments in years when widespread crop losses are coupled with rapidly increasing prices. The government pays insurance companies a smaller fee per premium dollar to sell and service Crop Revenue Coverage than the other revenue plans or multiple-peril crop insurance. However, the total cost of administrative reimbursements for Crop Revenue Coverage is greater because the reimbursement rate is not low enough to offset the much higher premiums under this plan. That is, the government reimburses the companies at a rate of 23.25 cents for every dollar of premium, which is less than the rate of 27 cents per dollar of premium for the other plans; but because Crop Revenue Coverage’s premiums per acre average about 30 percent higher than the premiums for the other crop insurance plans, the effective cost to the government is actually higher for Crop Revenue Coverage than for the other insurance plans. For example, for insurance sales that generate $1 million of premiums, the government’s cost for reimbursing administrative expenses under Income Protection and Revenue Assurance is $270,000 (27 percent in administrative costs multiplied by $1 million in premiums). In contrast, the government’s equivalent cost for Crop Revenue Coverage for the same number of insured acres is $302,250 (23.25 percent in administrative costs multiplied by the higher premiums—$1.3 million), an increase of $32,250.
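The following short calculation, which is ours and not USDA's, reproduces this arithmetic; it uses only the two reimbursement rates and the 30-percent premium differential cited above.

```python
# Illustrative sketch (ours, for this report's example only): government
# administrative reimbursements for the same insured acres under the
# two reimbursement schedules.

BASE_RATE = 0.27          # per premium dollar: multiple-peril, Income
                          # Protection, and Revenue Assurance policies
CRC_RATE = 0.2325         # reduced rate for Crop Revenue Coverage
CRC_PREMIUM_FACTOR = 1.3  # CRC premiums average about 30 percent higher

def admin_reimbursements(base_premiums):
    """Reimbursements for the same acres: (other plans, Crop Revenue Coverage)."""
    other = BASE_RATE * base_premiums
    crc = CRC_RATE * CRC_PREMIUM_FACTOR * base_premiums
    return other, crc

other, crc = admin_reimbursements(1_000_000)
print(f"${other:,.0f} vs ${crc:,.0f}; difference ${crc - other:,.0f}")
# Prints: $270,000 vs $302,250; difference $32,250
```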
Assuming Crop Revenue Coverage premiums of $300 million for crop year 1998, the government’s administrative reimbursement cost for this plan will be over $7 million higher than for the other two revenue insurance plans or traditional multiple-peril crop insurance. The participating companies receive higher reimbursements but also incur some additional expenses, including higher processing and training costs and higher loss adjustment costs in the years when Crop Revenue Coverage makes payments while multiple-peril crop insurance does not. Crop Revenue Coverage’s higher volume of premiums also results in higher costs to the government than multiple-peril crop insurance, given equal rates of underwriting loss. Under current law, both multiple-peril crop insurance and the new revenue insurance plans are expected to operate over time with an underwriting loss of $1.10 paid in claims for every $1 in premium. In other words, the government expects, over time, to pay claims averaging $1.10 for every $1 in premium. Therefore, applying the same loss rate to Crop Revenue Coverage’s larger volume of premiums produces a higher loss in absolute dollars. In addition to generating potentially higher total losses, the claims experience under Crop Revenue Coverage is likely to have a more exaggerated, or magnified, impact in any given year because of the plan’s unique upward price protection feature. This feature, which gives the farmer an increased revenue guarantee when market prices rise between the time the farmer plants and harvests the crop, significantly raises the government’s exposure to large claims payments in years when widespread crop losses are coupled with rapidly increasing prices. The two other plans reduce the government’s exposure during such years. For example, in 1996, adverse weather conditions destroyed winter wheat in sections of the Great Plains and Midwest, contributing to an increase in prices from $3.65 per bushel at planting to $5.47 per bushel at harvest. If Crop Revenue Coverage had been available for winter wheat in 1996, FCIC would have had to pay an additional 43 percent, or $172 million more, in claims than it actually paid under traditional multiple-peril crop insurance. As shown in table 2.4, assuming Crop Revenue Coverage had been available and had protected 50 percent of the acres insured in 1996, FCIC would have paid an estimated $569.8 million in wheat claims instead of the $397.7 million it actually paid. Alternatively, because the price increase that occurred in 1996 more than offset the average production loss, the provisions of the Income Protection or Revenue Assurance plans would have resulted in claims payments that were much less than those actually paid under multiple-peril crop insurance—an estimated $198.8 million. This example involves one crop. Widespread droughts often affect a number of crops, in which case the government’s financial exposure under Crop Revenue Coverage would increase further. However, part of the potential underwriting loss for Crop Revenue Coverage is offset by the plan’s higher premiums. Additionally, under reinsurance agreements, underwriting losses are borne in part by the participating companies, but the majority of the losses are paid by USDA. Furthermore, during years with favorable claims experience, Crop Revenue Coverage would generate higher underwriting gains than either multiple-peril crop insurance or the other two revenue insurance plans.
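The payment patterns in table 2.3 follow from the plans' guarantee rules described earlier in this chapter. The sketch below is our simplification rather than any plan's actual rating method: the $2.50 planting price is hypothetical, lost production under multiple-peril coverage is valued at that price for comparability, and details such as Crop Revenue Coverage's 95-percent price factor and the plans' local price adjustments are ignored.

```python
# Illustrative sketch (our simplification, not the plans' rating rules):
# per-acre payments for the table 2.3 farmer under three scenarios.

APH_YIELD = 120.0   # farmer's normal production, bushels per acre
COVERAGE = 0.75     # 75-percent coverage level
P_PLANT = 2.50      # hypothetical planting price, dollars per bushel

def mpci(actual_yield, p_plant=P_PLANT):
    # Multiple-peril: pays for bushels lost below the yield guarantee.
    return max(0.0, COVERAGE * APH_YIELD - actual_yield) * p_plant

def income_protection(actual_yield, p_harvest, p_plant=P_PLANT):
    # Income Protection / Revenue Assurance: guarantee fixed at planting.
    guarantee = COVERAGE * APH_YIELD * p_plant
    return max(0.0, guarantee - actual_yield * p_harvest)

def crop_revenue_coverage(actual_yield, p_harvest, p_plant=P_PLANT):
    # Crop Revenue Coverage: guarantee uses the higher of the two prices.
    guarantee = COVERAGE * APH_YIELD * max(p_plant, p_harvest)
    return max(0.0, guarantee - actual_yield * p_harvest)

scenarios = [
    ("normal yield, price down 30%", 120.0, 0.7 * P_PLANT),
    ("yield down 30%, price down 30%", 84.0, 0.7 * P_PLANT),
    ("yield down 30%, price up 30%", 84.0, 1.3 * P_PLANT),
]
for label, yld, p1 in scenarios:
    print(f"{label}: MPCI ${mpci(yld):.2f}, IP/RA "
          f"${income_protection(yld, p1):.2f}, "
          f"CRC ${crop_revenue_coverage(yld, p1):.2f}")
```

For these three scenarios, the sketch reproduces the pattern table 2.3 reports: only the revenue plans pay when prices fall against normal production, all policies pay when both production and price fall, and Income Protection and Revenue Assurance pay nothing when a production loss is offset by a sufficient price increase.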
In their first 2 years, the crop revenue insurance plans, especially Crop Revenue Coverage, have already achieved a significant share of the crop insurance market, accounting for about one-third of crop insurance premiums in the areas where they were offered. In the initial years, the new plans’ claims payment experience was similar to the experience of traditional multiple-peril crop insurance. With respect to the characteristics of the farming operations covered by the plans, Crop Revenue Coverage policies written in 1997 insured higher acreage levels and were associated with operations having lower production variability, over time, than traditional multiple-peril crop insurance. Therefore, the Crop Revenue Coverage policies, on average, appear to be less risky. This lower level of risk may have occurred because initial marketing efforts were targeted to larger farmers in the most consistently productive farm areas. As such, the differences in risk may diminish over time as the marketing expands into the general farming community. Crop revenue insurance plans, as a group, had strong sales, claiming a significant portion of crop insurance sales in 1997, the first year that all three plans were available. Crop Revenue Coverage, the most widely available of the three revenue insurance plans, took away a considerable amount of business from multiple-peril crop insurance—obtaining a 32-percent share of the market—in the areas where it was sold. In contrast, neither Revenue Assurance nor Income Protection was able to attract many purchasers—obtaining 6-percent and 3-percent shares, respectively—in the areas where they were sold. By its second year, Crop Revenue Coverage had captured a significant portion of the crop insurance business from traditional multiple-peril crop insurance in areas where both were available. As shown in table 3.1, Crop Revenue Coverage in 1997 accounted for 32 percent of the premiums, 29 percent of the acres insured, and 25 percent of the policies in the areas where it was sold. According to a senior Risk Management Agency official, this plan has attracted many purchasers in part because the premiums for the plan, on a cost-per-acre basis, were relatively low in the areas where the plan was introduced, and in these locations, the premiums appeared reasonable for the potential additional benefits they provided. In the counties where Income Protection is available for purchase, few farmers have opted to buy it. As shown in table 3.2, Income Protection obtained from 3 to 5 percent of the total crop insurance market, depending on the measure used. In the 41 counties where both Income Protection and Crop Revenue Coverage were offered in 1997, the sales achieved by Income Protection appear to have come at the expense of Crop Revenue Coverage rather than multiple-peril crop insurance. In the one state where it was sold—Iowa—Revenue Assurance met with only moderate success. For crop year 1997, Iowa was the only state where farmers were able to choose between traditional multiple-peril crop insurance and all three revenue insurance plans. As shown in table 3.3, in terms of total premiums, Revenue Assurance achieved a 6-percent share of the Iowa corn insurance market and an 8-percent share of the Iowa soybean insurance market. In contrast, Crop Revenue Coverage achieved higher market penetration in Iowa—52 percent of the corn and 49 percent of the soybean market—than it did nationally.
Income Protection—available in six counties in Iowa—achieved less than 1 percent of the sales for both corn and soybeans. All types of crop insurance had relatively low levels of claims in 1997. The crop insurance industry discusses the extent of losses in terms of the claims paid per premium dollar collected. For 1981 through 1996, traditional multiple-peril crop insurance paid an average of $1.26 in claims per $1 of premium (including the government’s subsidy). However, in 1997, because of the relatively favorable growing conditions in the nation, the crop insurance program had a much lower level of claims—$0.49 per $1 of premium. Moreover, the revenue insurance plans had lower levels of claims payments than did multiple-peril crop insurance—ranging from $0.06 to $0.36 per $1 of premium, as shown in table 3.4. According to the Risk Management Agency, the lower claims experience could have occurred for several reasons, such as a concentration of sales in lower-risk areas, stable crop prices, or a combination of these and other factors. The generally low level of claims experienced for the revenue insurance plans also may be attributed in part to the fact that the new insurance products were generally purchased by larger, slightly lower-risk farmers. See appendix II for detailed sales and claims data by state and insurance plan. Crop Revenue Coverage policies written in 1997 insured a higher number of acres and were associated with operations having lower production variability over time, thus appearing to be less risky, on average, than traditional multiple-peril crop insurance. Crop insurance research has shown that policies with these characteristics tend, on average, to have a lower incidence of claims payments. The differences between Crop Revenue Coverage and traditional multiple-peril crop insurance may have occurred because initial marketing efforts were targeted to larger farmers in the most consistently productive farm areas. As such, the differences may diminish over time as marketing expands into the general farming community. While the two plans differ in these respects, we found that they were similar in other respects, such as the average yield per acre. Because Income Protection’s and Revenue Assurance’s sales were limited, we could not analyze their risk characteristics. In 1997, Crop Revenue Coverage insured more acres, on average, than did traditional multiple-peril crop insurance. Specifically, policies for traditional multiple-peril crop insurance insured, on average, about 132 acres in 1997, while Crop Revenue Coverage policies insured about 160 acres, or 21 percent more. According to a senior Risk Management Agency official, these differences may have occurred because crop insurance agents’ initial marketing efforts may have targeted larger farming operations, and the difference may decline over time as marketing expands into the general farming community. In 1997, Crop Revenue Coverage policies insured farming operations with slightly less variation in their production history over time, on average, than traditional crop insurance. Specifically, these policies had an average variation of 22 percent, compared with an average variation of 25 percent for traditional multiple-peril crop insurance. These percentages represent the average deviation of each insured unit’s actual yield per acre each year from the unit’s average yield over the period for which production history was provided.
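A minimal sketch of this variability measure follows, assuming it is the mean absolute deviation of a unit's annual yields from the unit's average yield, expressed as a percentage of that average; the sample yield history is invented for illustration.

```python
# Production variability as described above: average deviation of each
# year's yield from the unit's mean yield, as a share of the mean.
def yield_variability(yields):
    mean = sum(yields) / len(yields)
    mad = sum(abs(y - mean) for y in yields) / len(yields)
    return mad / mean  # e.g., 0.22 for a 22-percent average deviation

unit_yields = [118, 95, 130, 124, 87, 140, 126]  # bushels/acre, hypothetical
print(f"{yield_variability(unit_yields):.0%}")
```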
As we noted in 1993, farmers having a high variation in their production are more likely to experience a loss than farmers having low variation in their production and, thus, are riskier to insure. With a variation in production that is 3 percentage points lower, the holders of Crop Revenue Coverage policies are less likely to experience a loss. While Crop Revenue Coverage and traditional multiple-peril crop insurance differ in some respects, they are similar in others. For example, multiple-peril crop insurance and Crop Revenue Coverage policies generally were based on similar years of production history provided by the policyholders—an average of 7.3 years for insured units under multiple-peril insurance compared with 7.4 years under Crop Revenue Coverage. Similarly, multiple-peril crop insurance policies and Crop Revenue Coverage policies had levels of insured production per acre that exceeded the average yield for all farmers in the particular county by about the same percentage. The multiple-peril crop insurance policy units insured yields per acre that were 115 percent of the average yield per acre for all farmers in the particular county, while Crop Revenue Coverage policy units had insured yields that were 116 percent of their county’s average yield per acre. We identified shortcomings in the way premium rates are established for each of the revenue insurance plans. While favorable weather and stable crop prices generated a very favorable claims experience over the first 2 years that the plans were available to farmers, these shortcomings raise questions about whether the rates established for each plan are actuarially sound over the long term and are appropriate to the risk each farmer presents. Furthermore, while the plans were initially approved on a limited basis only, FCIC approved the substantial expansion of one of these plans—Crop Revenue Coverage—before the initial results of claims experience were available. Since this initial expansion, FCIC has made and proposed a number of changes to provide safeguards in its process for approving new plans. According to insurance principles, insurance companies need information on likely future losses in order to establish premium rates that would cover those losses. For crop revenue insurance, reliably projecting future losses requires an accurate depiction of the revenues that insured farmers are likely to generate. Premium rates can then be established on the basis of the probability that actual revenues will diverge from insured revenues in a given year. Such a depiction of revenues for farmers as a whole is commonly referred to as a revenue distribution. Data on individual farmers’ actual revenues are not available. However, a reasonable approximation of these revenues can be obtained by multiplying a farmer’s yields by crop prices. In this way, a simulated revenue distribution can be developed that provides a reasonable basis for establishing premium rates. Crop Revenue Coverage is problematic because it uses neither a revenue distribution nor another appropriate statistical technique that takes into account the relationship between prices and yields as a basis for estimating premiums and future claims payments. Instead, rate setting for this plan begins with the premium rate structure for traditional multiple-peril crop insurance and increases rates by introducing an additional charge to cover the risk of a price increase and another charge to cover the risk of revenue that is less than the guarantee.
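To make the correlation point concrete, the following sketch simulates a revenue distribution in the manner the report describes, multiplying simulated yields by simulated prices, and prices an illustrative revenue guarantee with and without a negative price-yield correlation. All distributions and parameter values here are illustrative assumptions, not any plan's actual rating model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
coverage, normal_yield, price0 = 0.75, 120.0, 2.45  # illustrative values
guarantee = coverage * normal_yield * price0        # per-acre revenue guarantee

def fair_premium(correlation):
    # Draw correlated standard normals, then map them to yield and price.
    z = rng.multivariate_normal([0, 0], [[1, correlation],
                                         [correlation, 1]], size=n)
    yields = np.clip(normal_yield + 30 * z[:, 0], 0, None)  # bushels/acre
    prices = price0 * np.exp(0.2 * z[:, 1] - 0.02)          # lognormal price
    revenue = yields * prices
    # Expected shortfall below the guarantee = actuarially fair premium.
    return np.mean(np.maximum(guarantee - revenue, 0))

print(fair_premium(0.0))    # prices and yields independent
print(fair_premium(-0.5))   # low yields tend to coincide with high prices
```

Under the negative correlation, low yields are partly offset by high prices, revenue is less variable, and the fair premium is smaller, which is why ignoring the interrelationship can misstate rates.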
By not recognizing the interrelationship between prices and yields, the premium adjustments may not be actuarially sound over the long term or appropriate to the risk each farmer presents. Thus, we are not able to determine whether premium rates for this plan are too high or too low. In contrast, the rate-setting approaches for Revenue Assurance and Income Protection are much less problematic because they are based on revenue distributions, although they use different approaches to develop these distributions. We also identified several shortcomings in these two plans. However, these shortcomings are less serious than Crop Revenue Coverage’s lack of a revenue distribution or other statistical technique that takes into account the interrelationship between prices and yields. Revenue Assurance has shortcomings in two respects. First, in constructing its revenue distribution, the plan uses only 10 years of yield data (1985-94), which is not a sufficient historical record to capture the fluctuations in yield over time. Furthermore, 3 of the 10 years had abnormal yields: 1988 and 1993 had abnormally low yields, and 1994 had abnormally high yields. Second, Revenue Assurance assumes that the interrelationship between crop prices and yields is the same in all production areas. This is not the case. That is, the link between yield declines and price increases or yield increases and price declines is much stronger in some areas than in others. By using the same estimate of the interrelationship for all areas, the resulting estimate of claims may be too high in some areas and too low in others. As a result, there is no assurance that the plan’s premiums are appropriate for all farmers and will actually cover all claims over time. With respect to Income Protection, the plan’s major shortcoming is that it bases its estimate of future price increases or decreases on the way that prices moved in the past. This method of developing estimates could be a problem because past price movements occurred in the context of past government programs; in the absence of those programs, price movements may be considerably more pronounced, according to some analysts. Instead, price volatility estimates based on commodity futures prices are more appropriate for forecasting expected claims payments because they reflect current expectations of the extent to which prices may increase or decrease between planting and harvest. The methods used in the three plans to set premium rates are described and evaluated in greater detail in appendixes III, IV, and V. Crop Revenue Coverage was initially approved for sale in December 1995 for two crops—corn and soybeans—in two states—Iowa and Nebraska. Given FCIC’s lack of experience with revenue insurance and the uncertainty surrounding the soundness of the premiums charged, restricting the initial sales to a limited area was prudent. However, in July 1996, 7 months after it initially approved Crop Revenue Coverage and before it knew the claims experience in these areas, FCIC’s board of directors approved the expansion of Crop Revenue Coverage to include wheat farmers in Kansas, Michigan, Nebraska, South Dakota, Texas, Washington State, and 19 counties in Montana. This expansion occurred under the board’s authority to approve privately developed insurance products. The board required that the companies add a 10-percent surcharge, referred to as a catastrophic load factor, to the rates initially established.
This surcharge was not based on the initial experiences in the original states but was a judgmental adjustment added in response to concerns about the adequacy of premium rates expressed by USDA and university economists. In January 1997, the board, acting again within its authority, expanded Crop Revenue Coverage to cover essentially all major crops in the major states where the crops are grown. It was clear at this time that Crop Revenue Coverage was more popular than had been initially expected. National producer organizations expressed strong interest in expanding the program to additional geographical areas and to additional crops. The board expanded Crop Revenue Coverage, although it was cautioned by USDA officials, USDA’s Office of General Counsel, and USDA’s Office of Inspector General about problems with the continued expansion of the plan. Specifically, the Administrator of the Risk Management Agency informed the board that no underwriting experience was available to evaluate Crop Revenue Coverage. He also noted that the amount of liability under the plan can increase between planting and harvest, thereby increasing crop insurance liability in a loss situation and potentially having a major impact on FCIC’s overall loss ratio. However, the Administrator also pointed out that an expanded program would have the advantage of giving farmers in most states an additional risk management tool. Furthermore, USDA’s Office of General Counsel advised the board to reject expansion because widespread expansion might expose FCIC to excessive risk in the absence of any data that could be used to determine whether the rates were actuarially appropriate. Finally, USDA’s Office of Inspector General cautioned FCIC several times that expansion was occurring without adequate controls in place. Income Protection and Revenue Assurance have not been significantly expanded since their introduction. To avoid problems with the introduction of future revenue insurance plans, USDA is developing new regulations that would require any new plan to undergo a preapproval review, one much more rigorous than the review undertaken for Crop Revenue Coverage, Revenue Assurance, and Income Protection, before the plan could be sold nationwide. The draft regulations require that a company proposing a new plan include a detailed description of the rating method used, simulations of the performance of the premiums under various scenarios, and the results of a review by a peer review panel or accredited actuary. The regulations also require that the requester provide detailed information concerning any plans for future expansion. Additionally, FCIC has made changes to the gain- and loss-sharing portions of the reinsurance arrangements with the companies that better protect the government’s interest with respect to the revenue insurance plans. For 1998, FCIC decreased the companies’ share of underwriting gains and increased the companies’ share of underwriting losses. With the government’s phasing out of income support for farmers, risk management tools are increasingly important. Of the available risk management tools, farmers are increasingly turning to the revenue insurance plans. Accordingly, it is important that the premium structures for the revenue policies be set in a fashion that will be appropriate to the risk each farmer presents and will protect the government from undue exposure to loss.
Despite very positive early underwriting experiences, our analysis indicates that the premium structures for the three revenue insurance plans have weaknesses in their underlying assumptions and methods that could result in their being actuarially unsound. Crop Revenue Coverage, the plan that has become the most popular, is the most problematic. While we identified some problems in the methods used to set premiums for all three plans, we found the most serious deficiencies in Crop Revenue Coverage, which did not base its rates on a revenue distribution or other appropriate statistical technique that takes into account the interrelationship between crop prices and yields. Apart from its rate-setting deficiencies, Crop Revenue Coverage is also more costly to the government than the other plans. Because Crop Revenue Coverage’s premiums are higher, the federal government pays higher reimbursement costs for administrative expenses and has higher underwriting losses over time. To be more certain that the revenue insurance plans are actuarially sound over the long term and are appropriate to the risk each farmer presents, we recommend that the Secretary of Agriculture direct the Administrator of the Risk Management Agency to address the shortcomings in the methods used to set premiums. Specifically, with respect to all three plans, the Secretary should direct the Risk Management Agency to reevaluate the methods and data used to set premium rates to ensure that each is based on the most actuarially sound foundation. With respect to Crop Revenue Coverage, the Risk Management Agency should base premium rates on a revenue distribution or another appropriate statistical technique that recognizes the interrelationship between farm-level yields and expected prices. In commenting on a draft of this report, USDA expressed concern with our recommendation that it reevaluate the data and methods used to set premiums for the three revenue insurance plans. Specifically, USDA noted that while it does not necessarily endorse or feel fully comfortable with all aspects of the rating models, the agency does not believe our report provides evidence that there are “fatal flaws” in the plans’ rating methods. Therefore, the Department believes that the plans’ continued use of these rating methods is appropriate. We disagree. While we do not state in this report, nor do we believe, that the plans contain “fatal flaws,” we believe that the shortcomings we identified in all three revenue insurance plans are serious enough to warrant a reevaluation of the methods and data used to set premium rates to ensure that each plan is based on the most actuarially sound foundation. This is especially the case for Crop Revenue Coverage, which does not base its rate structure upon a distribution of likely revenues from farming operations. Without a distribution of likely revenues or other appropriate statistical technique, the plan does not take into account the interrelationship between crop prices and yields, and many crop insurance experts agree that such an interrelationship must be considered. Thus, we stand by our recommendation that the Risk Management Agency needs to address the shortcomings in the rating methods. USDA also provided clarifying comments to the report that have been incorporated where appropriate. USDA’s comments and our responses are presented in detail in appendix VI. 
This appendix explains the methodology we used to calculate the premiums and payments for a hypothetical Iowa farmer under multiple-peril crop insurance, Crop Revenue Coverage, Income Protection, and Revenue Assurance. We assumed that the farmer would plant nonirrigated corn, would have a production history of 120 bushels per acre, and would choose to buy insurance at the 75-percent coverage level. This farmer is located in Adair County, Iowa—a county in which all three revenue insurance policies were available in 1997. The prices used in the example are those that were established by the Federal Crop Insurance Corporation (FCIC) for each plan for 1997. The examples of claims payments assume various combinations of 30-percent increases and decreases in prices and production levels. We chose these percentages to illustrate the operation of the various insurance plans. Other combinations of changes in prices or production levels would produce different results. To purchase traditional multiple-peril crop insurance, our hypothetical Iowa corn farmer chose basic unit coverage and insured at 100 percent of the crop price available for 1997 ($2.45). Given our assumptions, the farmer would have paid $11.20 per acre for traditional multiple-peril crop insurance. For Crop Revenue Coverage, our hypothetical farmer also selected basic unit coverage. The projected crop price for Crop Revenue Coverage in 1997 was $2.59 per bushel for corn. On the basis of our assumptions, we determined that the farmer choosing Crop Revenue Coverage would have paid $16.50 per acre in 1997. For Revenue Assurance, with a projected price of $2.38 per bushel for corn, this same farmer would have paid $8.40 per acre. The Income Protection price we used for our estimate was $2.73 per bushel for corn, and we determined that the farmer would have paid premiums of $5.90 per acre in 1997. In the event of normal production combined with a 30-percent decline in price, no payment would be due under the traditional multiple-peril policy, but each of the revenue insurance policies would provide payments. No payment would be due under the traditional multiple-peril policy because, by definition, it only pays when the farmer’s production falls below the guarantee, which, in the case of the 75-percent coverage level, would be 75 percent of 120 bushels, or 90 bushels. If the farmer purchased Crop Revenue Coverage, the revenue guarantee would be the 75-percent coverage level multiplied by the normal production of 120 bushels, and the resulting production multiplied by the higher of the projected price ($2.59 per bushel in 1997) or the harvest price ($1.81 if the price declined 30 percent). The guarantee under these conditions would be $233.10 (.75 x 120 x $2.59 = $233.10). The guarantee is then compared with the value of the farmer’s harvested crop, determined by multiplying the actual production by the harvest price (120 x $1.81 = $217.20). Thus, in the case of normal production combined with a 30-percent decline in price, the farmer who obtained Crop Revenue Coverage would receive a payment of $15.90 per acre ($233.10 - $217.20 = $15.90). If, instead, the farmer had purchased an Income Protection policy, the revenue guarantee would be determined by multiplying the coverage level (.75) by the normal production (120 bushels), and multiplying the resulting production by the projected price ($2.73 per bushel in 1997). The per-acre guarantee under these conditions would be $245.70 (.75 x 120 x $2.73 = $245.70).
The policy bases the payment on the difference between this guarantee and the $229.20 per-acre value of the farmer’s crop—determined by multiplying the actual production (120 bushels per acre) by the harvest price ($1.91 if the price declined 30 percent). Thus, in the case of normal production combined with a 30-percent decline in price, the per-acre payment for the farmer who purchased Income Protection would be $16.50 ($245.70 - $229.20 = $16.50). If the farmer had purchased a Revenue Assurance policy instead, the revenue guarantee would be determined by multiplying the coverage level (.75) by the normal production (120 bushels), and multiplying the resulting production by the projected county price. The price varies by county, depending on the extent to which the price in the county has tended to be higher or lower than the price on the national commodity market. For Adair County in 1997 for corn, the projected county price was $2.38 ($2.73 per bushel national price in 1997 minus $0.35 county adjustment = $2.38). The per-acre guarantee under these conditions would be $214.20 (.75 x 120 x $2.38 = $214.20). The policy bases the payment on the difference between this guarantee and the per-acre value of the farmer’s crop ($200.40)—determined by multiplying the actual production (120 bushels per acre) by the harvest price ($1.67 if the price declined 30 percent). Thus, in the case of normal production combined with a 30-percent decline in price, the per-acre payment for the farmer who purchased Revenue Assurance would be $13.80 ($214.20 - $200.40 = $13.80). In the event of both a 30-percent decline in production and a 30-percent decline in price, each type of policy would pay, but the amounts paid would vary. The traditional policy pays on the basis of a decline in production, while the revenue policies pay on the basis of a decline in gross revenue. The traditional multiple-peril crop insurance policy pays when the farmer’s production falls below the guarantee, which, in the case of the 75-percent coverage level, would be 75 percent of 120 bushels, or 90 bushels. If the farmer purchasing this policy experienced a 30-percent reduction in production, production would average 84 bushels per acre (70 percent of 120). Thus, the farmer would be paid for a reduction of 6 bushels per acre (90 - 84 = 6). The actual price prevailing at harvest does not affect the payment under the traditional policy. Assuming the farmer had selected the 100-percent price option, the payment would be made at $2.45 per bushel (the price election announced by the U.S. Department of Agriculture prior to the 1997 crop insurance sales period), although national prices had declined to $1.72 in this example. Thus, in the case of a 30-percent reduction in production combined with a 30-percent decline in price, the farmer who obtained traditional multiple-peril crop insurance would receive a payment of $14.70 per acre (6 bushels x $2.45 = $14.70). If the same farmer had purchased a Crop Revenue Coverage policy instead, the revenue guarantee would be the 75-percent coverage level multiplied by the normal production of 120 bushels, and the resulting production multiplied by the higher of the projected price ($2.59 per bushel in 1997) or the harvest price ($1.81 if prices declined 30 percent). The guarantee under these conditions would be $233.10 (.75 x 120 x $2.59 = $233.10).
The guarantee is then compared with the value of the farmer’s harvested crop, determined by multiplying the actual production by the harvest price (84 bushels x $1.81 = $152.04). Thus, in the case of a 30-percent reduction in production combined with a 30-percent decline in price, the farmer who obtained Crop Revenue Coverage would receive a payment of $81.06 per acre ($233.10 - $152.04 = $81.06). If the same farmer had purchased an Income Protection policy instead, the revenue guarantee would be determined by multiplying the coverage level (.75) by the normal production (120 bushels), and multiplying the resulting production by the projected price ($2.73 per bushel in 1997). The per-acre guarantee under these conditions would be $245.70 (.75 x 120 x $2.73 = $245.70). The policy bases the payment on the difference between this guarantee and the per-acre value of the farmer’s crop ($160.44)—determined by multiplying the actual production (84 bushels per acre) by the harvest price ($1.91 if the price declined 30 percent). Thus, in the case of a 30-percent reduction in production combined with a 30-percent decline in price, the per-acre payment for the farmer who purchased Income Protection would be $85.26 ($245.70 - $160.44 = $85.26). If the same farmer had purchased a Revenue Assurance policy instead, the revenue guarantee would be determined by multiplying the coverage level (.75) by the normal production (120 bushels), and multiplying the resulting production by the projected county price. The price varies by county, depending on the extent to which prices in the county have tended to be higher or lower than the prices on the national commodity market. For Adair County in 1997 for corn, the projected county price was $2.38 ($2.73 per bushel national price in 1997 minus $0.35 county adjustment = $2.38). The per-acre guarantee under these conditions would be $214.20 (.75 x 120 x $2.38 = $214.20). The policy bases the payment on the difference between this guarantee and the $140.28 per-acre value of the farmer’s crop—determined by multiplying the actual production (84 bushels per acre) by the harvest price ($1.67 if prices declined 30 percent). Thus, in the case of a 30-percent reduction in production combined with a 30-percent decline in price, the per-acre payment for the farmer who purchased Revenue Assurance would be $73.92 ($214.20 - $140.28 = $73.92). In the event of a decline in production combined with an increase in price, the traditional policy and the Crop Revenue Coverage policy would result in payments, but no payment would result under the terms of the Income Protection and Revenue Assurance policies. Because the harvest price has no effect on the payment under the traditional crop insurance policy, the claim payment for a farmer with a 30-percent decline in production in combination with a 30-percent increase in price would be the same as the payment under constant or decreasing prices ($14.70 as calculated in the previous section). If the same farmer had purchased Crop Revenue Coverage instead, the revenue guarantee would be the 75-percent coverage level multiplied by the normal production of 120 bushels, and the resulting production would be multiplied by the higher of the projected price ($2.59 per bushel in 1997) or the harvest price ($3.37 if the price increased 30 percent). The guarantee under these conditions would be $303.30 (.75 x 120 x $3.37 = $303.30).
The guarantee is then compared with the value of the farmer’s harvested crop, determined by multiplying the actual production by the harvest price (84 bushels x $3.37 = $283.08). Thus, in the case of a 30-percent reduction in production combined with a 30-percent increase in price, the farmer who obtained Crop Revenue Coverage would receive a per-acre payment of $20.22 ($303.30 - $283.08 = $20.22). If the same farmer had purchased an Income Protection policy instead, no payment would be due because the value of the harvested crop would exceed the revenue guarantee. The revenue guarantee would be determined by multiplying the coverage level (.75) by the normal production (120 bushels), and multiplying the resulting production by the projected price ($2.73 per bushel in 1997). The per-acre guarantee under these conditions would be $245.70 (.75 x 120 x $2.73 = $245.70). No payment would be due because this guarantee is less than the per-acre value of the farmer’s crop ($298.20)—determined by multiplying the actual production (84 bushels per acre) by the harvest price ($3.55 if prices increased 30 percent). Thus, in the case of a 30-percent reduction in production combined with a 30-percent increase in price, no insurance payment would be made to the farmer who purchased Income Protection ($245.70 - $298.20 = –$52.50—thus, no payment is due). Similarly, if the same farmer had instead purchased a Revenue Assurance policy, no insurance payment would be due because the value of the harvested crop would exceed the revenue guarantee. The revenue guarantee would be determined by multiplying the coverage level (.75) by the normal production (120 bushels), and multiplying the resulting production by the projected county price. The price varies by county, depending on the extent to which the price in the county has tended to be higher or lower than prices on the national commodity market. For Adair County in 1997 for corn, the projected county price was $2.38 ($2.73 per bushel national price in 1997 minus $0.35 county adjustment = $2.38). The per-acre guarantee under these conditions would be $214.20 (.75 x 120 x $2.38 = $214.20). No payment would be required because the guarantee is less than the $259.56 per-acre value of the farmer’s crop—determined by multiplying the actual production (84 bushels per acre) by the harvest price ($3.09 if prices increased 30 percent). Thus, in the case of a 30-percent reduction in production combined with a 30-percent increase in price, no insurance payment would be made to the farmer who purchased Revenue Assurance ($214.20 - $259.56 = –$45.36—thus, no payment is due). The tables in this appendix show crop insurance results for 1997 for traditional multiple-peril crop insurance (MPCI), Income Protection (IP), Crop Revenue Coverage (CRC), and Revenue Assurance (RA). Table II.1 shows sales and claims payments experience by state and by insurance plan. Table II.2 combines all states to show sales and claims payments experience by insurance plan only. Table II.1: Crop Insurance Experience by State and by Insurance Plan, 1997 (policies in force, acres insured, and dollars in thousands). [Table data not reproduced here.] Note: MPCI includes the group risk plan. Totals exclude special plans that cover peanuts, tobacco, fruit trees, and various minor crops.
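The per-acre payment rules illustrated in appendix I can also be restated compactly in code. This is a minimal sketch: the function names and structure are ours, while the coverage level, production history, and 1997 prices are the report's illustration values.

```python
# Per-acre payment rules from the appendix I illustration.
COVERAGE, APH = 0.75, 120  # coverage level; normal production (bushels/acre)

def mpci_payment(actual_yield, price_election=2.45):
    # Pays on bushels lost below the yield guarantee, at the price election.
    return max(COVERAGE * APH - actual_yield, 0) * price_election

def crc_payment(actual_yield, harvest_price, projected_price=2.59):
    # Guarantee uses the higher of the projected or the harvest price.
    guarantee = COVERAGE * APH * max(projected_price, harvest_price)
    return max(guarantee - actual_yield * harvest_price, 0)

def ip_payment(actual_yield, harvest_price, projected_price=2.73):
    guarantee = COVERAGE * APH * projected_price
    return max(guarantee - actual_yield * harvest_price, 0)

def ra_payment(actual_yield, harvest_price, county_price=2.38):
    guarantee = COVERAGE * APH * county_price
    return max(guarantee - actual_yield * harvest_price, 0)

# Normal production, 30-percent price decline (plan-specific harvest prices):
print(round(crc_payment(120, 1.81), 2),   # 15.9
      round(ip_payment(120, 1.91), 2),    # 16.5
      round(ra_payment(120, 1.67), 2))    # 13.8
# 30-percent production decline, 30-percent price increase:
print(round(mpci_payment(84), 2),         # 14.7
      round(crc_payment(84, 3.37), 2),    # 20.22
      round(ip_payment(84, 3.55), 2))     # 0.0 (no payment due)
```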
The Crop Revenue Coverage plan was developed by a private insurance company in the early 1990s. The plan is designed to guarantee farmers (1) a certain level of income and (2) the replacement value of the difference between insured yields and actual yields if actual yields are below the insured level. Crop Revenue Coverage’s premiums are based on three components: “yield risk,” “upward price risk,” and “revenue risk.” Premiums calculated for each of the components are added together to generate the total premium for each policy. This appendix defines each component and explains how it is developed. The first section describes the calculation of the yield risk component of the premium, which is based on the multiple-peril crop insurance program. The second section describes the calculation of the upward price risk component, which refers to the expected payout by the insurer as a result of a yield loss and a price increase between planting (insurance sales period) and harvest. The third section shows how the revenue risk component is developed, which is the risk that, if prices are lower at harvest than at planting, actual revenue is less than guaranteed revenue. The fourth section demonstrates how the three components are summed to form a base premium. In these calculations, yields and prices are treated as if they are independent of one another. Finally, we present our analysis of the method used to set premiums for Crop Revenue Coverage. For Crop Revenue Coverage, yield risk relates to situations in which the actual yield is lower than the insured yield and the price at harvest is not higher than the price guaranteed at planting. Through yield risk coverage, the insured farmer is eligible for a payment equivalent to the difference between the insured yield and the actual yield, multiplied by the planting price. The portion of the premium related to yield risk is derived from the premium rate schedules for multiple-peril crop insurance. The yield risk accounts for two-thirds of the expected payout by the insurer. The yield risk premium is the product of the multiple-peril crop insurance base rate, the farmer’s actual production history (APH), the coverage level, and the planting price, or: Yield risk premium = MPCI Base Rate x APH x Coverage Level x Planting Price. Equation 1 estimates the portion of the premium that relates to yield risk: (1) PR = R x Y x P, where PR is the calculated premium, R is the multiple-peril crop insurance base rate, Y is the insured yield, P is the planting price, and the product R x Y approximates EL, the expected yield loss. The premium is not exactly equivalent to the product of the planting price and expected losses because expected losses for each farm can only be approximated. Multiple-peril crop insurance base rates are derived from historical losses relative to historical premiums for various yield and coverage levels. The relevant market price is equal to 95 percent of the average closing price of the harvest period’s futures contract during the planting period. The expected yield loss equals the multiple-peril crop insurance base rate multiplied by the yield guarantee to the farmer. Upward price risk is a component developed especially for Crop Revenue Coverage. It refers to the risk of a higher price at harvest than at planting when the actual yield is lower than the insured yield. Under the upward price risk component, the insured farmer is eligible for a payment equal to the difference between the insured and actual yields multiplied by the harvest price.
The total upward price risk equals the product of the multiple-peril crop insurance base rate, the farmer’s APH, the coverage level, and the upward price factor (which is the product of the upward price rate times the maximum liability for that crop), or: Upward Price Risk = MPCI Base Rate x APH x Coverage Level x Upward Price Factor. Equations 2 through 10 are used to estimate the upward price factor, that is, the risk of prices increasing between planting and harvest when the farmer has a loss in yield. Equation 2 estimates a premium rate for a yield loss by dividing expected crop losses by the yield guarantee: (2) R = EL / Y, where R is the insurance premium rate, EL is expected loss, and Y is the yield guarantee. Equation 3 integrates the price distribution above the planting price in order to estimate the expected loss from an upward price change: (3) EL = ∫ (P - P0) f(P) dP, integrated from P0 up to the maximum compensated price, where EL is the expected loss in dollars, P0 is the planting price, P is the actual price, and f(P) is the probability density function for price changes. The function is constrained by the maximum price difference reimbursable for each insured crop. In order to facilitate the estimation of expected losses, Crop Revenue Coverage uses the polynomial function for the integration of a normally distributed probability distribution from Abramowitz and Stegun (Equations 4 and 5) along with a procedure developed by Botts and Boles (Equations 6 through 9). The Botts and Boles procedure estimates the mean of a truncated normal distribution, one in which a portion is cut off and isolated for analysis. The truncated distribution is bounded by the maximum compensated price change and the mean of price changes for the entire normal distribution. This is the portion of the price distribution that reflects prices above the planting price. Equation 4 estimates the probability of a loss (or in this case the probability of an upward price change) using the polynomial function for integration of a normal distribution: (4) PL = Z x (a1 x T + a2 x T^2 + a3 x T^3), where PL is the probability that the insurer would be required to pay insured farmers under the “upward price risk” provision of Crop Revenue Coverage, and a1, a2, and a3 are constants from Abramowitz and Stegun. The variables Z and T are estimated in Equations 5 and 6. Equation 5 estimates the value of T, which measures the area under a normal curve: (5) T = 1 / (1 + b x (P0 - EP) / SD), where b is a parameter of the price distribution from Abramowitz and Stegun. Equation 6 estimates Z, which is the height (measured parallel to the Y axis) of the ordinate of the truncated distribution: (6) Z = (1 / √(2π)) x exp(-((P0 - EP) / SD)^2 / 2), where EP is the expected or mean price change of the entire distribution, P0 is the planting price, and SD is the standard deviation of price changes for the entire normal distribution. Equation 7 estimates M, the mean of the truncated normal distribution, in this case the mean of the distribution of upward price changes: (7) M = EP + (SD x Z) / PL, where EP is the mean price for the entire normal distribution (untruncated), Z is as defined above, PL is the probability of a price change above the guaranteed price, and SD is the standard deviation of price changes for the entire distribution. Equation 8 estimates expected losses: (8) EL = (M - P0) x PL, where M is the mean of price changes for the truncated normal distribution, PL is the probability of a loss, and P0 is the planting price.
Substituting Equation 7 into Equation 8 gives Equation 9, which expresses expected loss per bushel: (9) EL = (EP - P0) x PL + SD x Z. Equation 10 (as in Equation 2 for a yield loss) expresses the premium rate per bushel for an upward price change as the result of dividing expected losses per bushel by the planting price: (10) R = EL / P0. The premium rate calculated above, however, must be adjusted to reflect Crop Revenue Coverage regulations, which require payment for price increases under conditions of actual yield losses only. In order to account for this feature of the program, a conditional probability, that is, the probability of a price increase given a yield loss, must be calculated. In order to calculate a premium rate for this factor (R adjusted for the probability of a price increase, given a yield loss), the unadjusted R (as in Equation 10) is multiplied by the multiple-peril crop insurance base rate for yield loss. Revenue risk refers to the risk of harvest revenue that is lower than the revenue guaranteed at planting. Guaranteed revenue is the product of the insured yield and the planting price. Under the revenue risk component, as long as harvest revenue is lower than guaranteed revenue, the insured farmer is eligible for a payment. When the harvest price is lower than the planting price, harvest revenue can be lower than guaranteed revenue when yield is at or above the insured yield or when yield is lower than the insured level. The revenue risk factor is the product of the revenue rate, the farmer’s actual production history, the coverage level, and the downward price factor (which is the downward price rate times the maximum liability for that crop), or: Revenue Risk = Revenue Rate x APH x Coverage Level x Downward Price Factor. In order to calculate the revenue risk, Crop Revenue Coverage estimates two factors: the downward price factor and the revenue rate. The downward price factor is calculated using the same method as the upward price factor, but here the risk evaluated is that prices will be lower at harvest than at planting. The revenue rate is derived from the area under the yield curve below the yield guarantee, given a price decline. The revenue rate must cover the risk, when price declines, of harvest revenue that is less than the planting revenue guarantee. The revenue rate does not cover the risk of a yield loss, because the yield risk factor compensates for that by paying the insured farmer the product of the yield loss and the planting price. However, the revenue rate must cover the risk of the guaranteed revenue being higher than the sum of market revenue and payments under the yield component. For a given price decline, the largest such payout under the revenue rate would occur at the yield guarantee, when no payments are made under the yield risk component. Alternatively, the greatest payment under the yield risk component would occur at zero production, when no payment is made under the revenue risk component. The Crop Revenue Coverage base rate is calculated in six steps. First, a mean yield and standard deviation are calculated by county, by crop, and by farming practice using data on APH and multiple-peril crop insurance base rates. Second, using these data, a yield curve is generated. Third, using the polynomial function for the integration of a normally distributed density function (Abramowitz and Stegun), the area under the curve below the yield guarantee is calculated to obtain the probability of collecting indemnities, given a price decline.
Fourth, the expected yield loss is calculated using the Botts and Boles method. Fifth, this expected loss is subtracted from the yield guarantee because this part of the yield loss is already covered by the multiple-peril crop insurance or “yield risk” portion of the Crop Revenue Coverage premium. Sixth, the expected yield is divided by the mean yield and multiplied by the probability of collecting indemnities in any given year, given a price decline. In steps 1 and 2 above, the yield curves are generated by using the mean and standard deviations of yield that are derived from the Risk Management Agency’s published APH and base rate data. In the third step, Equations 11, 12, and 13 calculate the area underneath the yield curve between 0 and the yield guarantee, or the probability, P, of an indemnity being paid, given a price decline: (11) Z = (1 / √(2π)) x exp(-((y - a) / SD)^2 / 2), (12) T = 1 / (1 + b x (y - a) / SD), and (13) P = Z x (a1 x T + a2 x T^2 + a3 x T^3), where a is the mean yield of the distribution, y is the guaranteed yield, and SD is the standard deviation of yields. In the fourth step, Equation 14, the expected yield loss, EL, is calculated using the Botts and Boles method: (14) EL = (y - M) x P, where M = a - (SD x Z) / P is the mean of the truncated yield distribution below the guarantee. In the fifth step, Equation 15, the expected yield, EY, is calculated by subtracting the expected loss, EL, from the yield guarantee: (15) EY = y - EL. In the sixth step, Equation 16, the revenue rate is obtained by multiplying the ratio of the expected yield to the mean yield by the probability, P, of the farmer collecting an indemnity from a price decline: (16) Revenue Rate = (EY / a) x P. There is no provision in this rate for the possibility that yields could be above the mean while prices are declining, triggering an indemnity. The total Crop Revenue Coverage base premium, before subsidy, is the sum of the following three products: Yield risk premium = MPCI Base Rate x APH x Coverage Level x Planting Price, Upward price risk premium = MPCI Base Rate x APH x Coverage Level x Upward Price Factor, and Revenue risk premium = Revenue Rate x APH x Coverage Level x Downward Price Factor. Crop Revenue Coverage differs significantly in its rate-setting method from the two other insurance plans. Unlike the methods used for Income Protection and Revenue Assurance, the method used to establish premiums for Crop Revenue Coverage is not based on a revenue distribution or another appropriate statistical technique. Instead, Crop Revenue Coverage establishes rates by adding together yield, upward price, and revenue risk factors. The yield risk component is based on rates established under traditional multiple-peril crop insurance. The upward price risk component is used to estimate losses to the insurer in the case of a price increase, given a yield loss. The revenue risk component is used to estimate losses to the insurer from harvest revenue that is lower than the revenue guaranteed in the planting period. Using this additive procedure, the private insurance company developer assumed that price and yield are independent of each other and derived them separately. However, the price-yield correlation is needed to help establish premium rates that are neither so high that they discourage participation nor so low that they fail to cover losses. This correlation is greatest in concentrated production areas, such as the midwestern corn belt, and declines as corn production moves farther from these central areas. Analysts disagree about the impact of omitting the correlation between price and yield.
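For concreteness, the truncated-normal calculation behind the upward price factor (Equations 3 through 10) can be sketched numerically. The sketch below substitutes exact normal-distribution functions for the Abramowitz and Stegun polynomial approximation, omits the cap at the maximum compensated price change, and uses invented input values; it is an illustration of the logic, not the plan's actual implementation.

```python
import math

def upward_price_rate(planting_price, expected_price, sd):
    """Illustrative version of Equations 3-10: expected loss from a price
    increase above the planting price, assuming normally distributed
    price changes."""
    x = (planting_price - expected_price) / sd
    prob_up = 1 - 0.5 * (1 + math.erf(x / math.sqrt(2)))  # P(price > planting)
    z = math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)   # ordinate height, Z
    m = expected_price + sd * z / prob_up                 # truncated mean (Eq. 7)
    expected_loss = (m - planting_price) * prob_up        # EL (Eq. 8)
    return expected_loss / planting_price                 # rate, R (Eq. 10)

# Hypothetical inputs: $2.59 planting price, no expected drift, $0.31 sd.
print(round(upward_price_rate(2.59, 2.59, 0.31), 4))  # about 0.048
# Per the text, the plan then multiplies this rate by the MPCI base rate
# so that it applies only when a yield loss also occurs.
```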
Some have suggested that omitting this correlation may not be as serious a shortcoming as might be expected. Although the price-yield relationship is an important component of revenue distributions, especially for major crop production areas, Crop Revenue Coverage premiums, on average, may still be appropriate to cover losses over time, according to these analysts. This is because, although the rate for price increases (upward price risk) may be too low and the rate for price decreases (revenue rate) may be too high, the two may offset each other. However, other analysts point out that there is no evidence that the failure to incorporate the price-yield correlation has a neutral effect on premiums. They say that government outlays in years of very low yields could be extensive because the plan understates the probability of a yield loss when prices increase. In response to the Iowa Farm Bureau’s proposal that federal deficiency payments be replaced with a federally subsidized insurance product, Revenue Assurance was developed to provide a payment to insured farmers when farm revenues fall below a predetermined trigger level. The payment is the difference between the trigger, or guaranteed, revenue and the actual revenue. In order to develop premiums that will likely cover future losses over time, insurers need to accurately depict a revenue distribution, or use another appropriate statistical technique, to reflect receipts at the farm level. Three primary steps are essential to determining the revenue distribution—developing the price distribution, developing the yield distribution, and estimating the price-yield correlation. The first section of this appendix describes how the price distribution, using futures prices adjusted for local differentials, is calculated for Revenue Assurance. The second section describes how the yield distribution is estimated. Certain parameters are imposed on the price distribution and on the yield distribution. The third section shows how the price and yield distributions are combined to form a revenue distribution that incorporates a price-yield relationship. The fourth section shows how expected losses are used to calculate premiums. Finally, we present our analysis of the methodology used to set premium rates for Revenue Assurance. Current prices, which have the advantage of reflecting current market conditions, are used for developing price distributions for Revenue Assurance. The premiums are based on planting-period prices of harvest-period futures contracts, adjusted for local conditions. Following an analysis of the responsiveness of cash prices to changes in futures prices, the difference between futures and cash prices for each county was found to be constant over time. Equation 1 uses the current futures price and price volatility to estimate a lognormal price distribution, F(P); the two parameters of the lognormal price distribution are set from these current market data. The current price used is the average of the planting period price of the harvest period futures contract. The price volatility used is calculated by applying the Black options pricing formula to the price of the planting period put option on the harvest period futures contract. Revenue Assurance assumes that crop yields follow a beta distribution. The beta distribution exhibits three major characteristics: First, it can exhibit negative or positive skewness; second, it has finite minimum and maximum values; and third, it can take on a wide variety of shapes.
Equation 2 describes the beta distribution of yields, y, as: (2) f(y) = [G(p+q) / (G(p) x G(q))] x (y - ymin)^(p-1) x (ymax - y)^(q-1) / (ymax - ymin)^(p+q-1), where p, q, ymax, and ymin are the four parameters, and G(p+q), G(p), and G(q) refer to the gamma function of (p+q), p, and q, respectively, which is directly related to the beta distribution. Equations 3 and 4 estimate the values of p and q using the method of moments technique, choosing p and q so that the mean and variance of the beta distribution match m, the mean of yield, and s, the standard deviation of yield, for each county; ymax is the maximum yield, ymin is the minimum yield, and the estimate of q in Equation 4 uses the value of p from Equation 3. The mean yield, m, is derived from a discrete range of the farmer’s expected yields. The maximum and minimum yields determine the degree and direction of skewness and of kurtosis. Using the Johnson and Tenenbein approach, Revenue Assurance estimates a revenue distribution by joining the lognormal price distribution and the beta yield distribution. A continuous bivariate revenue distribution is constructed by taking random draws of variables from the specified marginal distributions for price and yield. The variables already reflect the dependence measure, r, Spearman’s rank correlation coefficient, to account for the yield-price correlation. The needed variables to form a revenue distribution, price and yield, x and y, are generated through the following procedure. Capital letters represent random variables, and lowercase letters represent drawn values of these random variables. In Equation 5, A and B are assumed to have a common standard normal density function with mean 0 and standard deviation of 1: (5) A, B ~ N(0, 1). Equation 6 defines r as a, the value of the variable drawn from a standard normal distribution: (6) r = a. Equation 7 defines s as the linear combination of the values of a and b weighted by c, which reflects the yield-price correlation: (7) s = c x a + (1 - c) x b, where a and b are identically and independently distributed random variables with a common density function and c is a weight reflecting the relationship between the two random variables, in this case price and yield. Equations 8 and 9 define w and z, the values of the cumulative density functions of R and S, respectively: (8) w = F(r) and (9) z = F(s), where F(.) is the cumulative density function for a standard normal variate. Finally, Equations 10 and 11 result in the variables price, x, and yield, y: (10) x = FX^-1(w) and (11) y = FY^-1(z), where FX(.) and FY(.) are the known marginal cumulative density functions for price and for yield, and FX^-1(.) and FY^-1(.) are the corresponding inverse functions. After the correlated price and yield observations are drawn from the inverse marginal distributions, they are multiplied together to generate thousands of revenue observations. In this way, a revenue distribution is generated. Revenue Assurance premiums are derived from an average of expected losses. The expected loss for a hypothetical policy is derived by taking the difference between the guaranteed revenue and market revenue as reflected in the revenue distribution developed above. If the guaranteed level is higher than the revenue realized, the difference is the amount of indemnity owed. Potential indemnities associated with each guaranteed revenue are totaled. The losses are averaged across all policies to develop premium rates. To develop premium rate tables for similar production levels, every permutation of a discrete range of prices, yields, and coverage levels corresponding to average expected losses is simulated.
These data are used to estimate premium rates by developing a translog equation that links expected losses with (1) expected farm and county yields, (2) yield variability, (3) price volatility, (4) coverage levels, and (5) the cross-products and squares of these variables. Although the Revenue Assurance model has the advantage of being based on current prices and a revenue distribution that incorporates the price-yield interrelationship, its assumptions about the relevant distributions and its application of a key statistical technique raise questions about the adequacy of the plan’s premium rates. The Revenue Assurance method uses prices that may be more appropriate than Income Protection’s or Crop Revenue Coverage’s for calculating future revenues. That is, Revenue Assurance uses preplanting-period prices of harvest-period futures contracts as the expected prices and derives the variance of prices from current options contracts on the relevant futures contract. These current prices and variances are more likely to reflect future market conditions than historical prices because they reflect traders’ expectations of prices in the future. However, the developers use too few years of yield data to estimate yield variability. Furthermore, yields in 3 separate years during the period 1985 through 1994 reflect events that are likely to occur much less frequently than every 10 years. Exceedingly low yields were observed in 1988 and 1993, and very high yields were observed in 1994. By limiting the basis for yield analysis to the 1985-94 period, the model would forecast these unusual yields more frequently than the historical record would indicate. Revenue Assurance uses a parametric statistical method that requires that the underlying distribution function be normal or of some other specified form. If properly applied, this method generates efficient estimates that have smaller variances than those of nonparametric methods. However, the assumptions Revenue Assurance makes about yield distributions may not reflect actual yield data at the farm level. According to several analysts, there is no consensus about the correct functional form of yield distributions. Furthermore, estimates of the yield distribution are very sensitive to the assumed minimum and maximum values for yield. In addition, the Johnson and Tenenbein statistical technique imposes a constraint that is not appropriate. Specifically, a constant value for the price-yield correlation for all farmers in all years, which is required for the proper application of the technique, does not reflect actual experience. In areas farther from the heaviest concentration of production, the interrelationship between prices and yields is weaker than in the heart of the production area. Furthermore, in catastrophic years, the correlation between prices and yields is usually stronger than the average value over time. Because it is not appropriate to assume a constant price-yield correlation, it is difficult to have confidence that rates based on such a revenue distribution would be actuarially sound over the long term and appropriate to the risk each farmer presents. In response to a mandate under the Crop Insurance Reform Act of 1994, USDA developed Income Protection, an insurance plan designed to guarantee a certain level of income from crop production. Premiums for Income Protection are based on revenue distributions that show expected losses and payouts at different levels of guaranteed income.
Three primary steps occur in developing the Income Protection rating methodology—the construction of yield distributions, the construction of price distributions, and the construction and simulation of the revenue distributions on the basis of the results of the first two constructions. The first section of this appendix describes how the components of the simulated yield distributions are calculated using regional, county, and farm-level yield data. The second section describes how the components of the price distribution are calculated by estimating an equation relating prices to the yields already estimated. The third section shows how the price and yield observations developed from the distributions are combined to construct revenue distributions. No statistical restrictions are imposed on the yield, price, or revenue distributions. The fourth section shows how average indemnities, and thus rates, are calculated. Finally, we present our analysis of the methodology used to set premium rates for Income Protection.

The yield distributions for Income Protection are derived from data on three major sources of yield variability—trends over time, regional events, and individual farm production characteristics. Trends over time are represented by 50 years of regional yield data. Yield data for years when actual yields were vastly different from expected yields are included and weighted relative to the 50 years of data used. Regional events are represented by regional yield data adjusted for differences in county yield. Regional data are also used to capture price-yield interactions, or correlations. For information on the yields of individual farms, APH records are used for farms for which actual yield data are available for 6 or more years. Additional yield data provided by farmers supplement historical records.

The regional data are the acre-weighted averages of county yields provided by USDA's National Agricultural Statistics Service (NASS) for all counties that the Federal Crop Insurance Corporation (FCIC) has specified as risk-rating regions. The county yields are NASS county yields per planted acre. The pooled farm data consist of the most recent APH data reported by farmers and recorded in FCIC's files on yield history. For estimating rates, data are used from farms that report 6 or more years of actual yields. To determine the Income Protection premium for a farmer, the predicted yield for the farmer's county is adjusted by the difference between the farmer's yield, as reflected in the yield data provided by the farmer, and the county average yield.

Equation 1 estimates regional yields:

(1) \( R_t = a_R + g(t) + e_{R,t} \)

where R is the regional yield, t is time, a_R is the region's yield intercept, g(t) is the region's estimated yield trend over time, and e_R is the regional residual yield variation. The same yield trend is imposed on all counties in a risk-rating region. The errors, or remaining variability in yields after the trend has been accounted for, are used to construct the revenue equation. Equation 2 shows the method used to test for heteroskedasticity, that is, whether the variability of regional yields has changed over time:

(2) \( |e_{R,t}| = b_0 + b_1 t + u_t \)

The results indicated that the variability had changed over time, and a scaling process was applied to the errors to correct for the heteroskedasticity. Equation 3 shows one method used to correct for heteroskedasticity. Here, the predicted values of the absolute yield errors from equation 2 are used to scale the original yield errors from the regional equation to 1997 units. The estimated values of these errors make up the yield distribution from which observations are drawn to develop revenue functions:

(3) \( \tilde{e}_{R,t} = e_{R,t} \cdot \widehat{|e_{R}|}_{1997} \,/\, \widehat{|e_{R}|}_{t} \)
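The detrending and rescaling steps in equations 1 through 3 can be sketched as follows; the yield series is simulated for illustration, a linear g(t) is assumed, and equation 2 is taken in the form reconstructed above (absolute errors regressed on time):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative regional yield series (bushels per acre), 1948-1997: 50 years of data.
years = np.arange(1948, 1998)
t = years - years.min()
regional_yield = 30 + 0.9 * t + rng.normal(0, 3 + 0.1 * t)

# Equation 1: fit R_t = a_R + g(t) + e_t, with g(t) assumed linear here.
a_R, slope = np.polynomial.polynomial.polyfit(t, regional_yield, 1)
errors = regional_yield - (a_R + slope * t)

# Equation 2: regress the absolute errors on time to test whether
# yield variability has changed over the 50 years.
b0, b1 = np.polynomial.polynomial.polyfit(t, np.abs(errors), 1)

# Equation 3: scale each error to 1997 units using the predicted absolute errors.
predicted_abs = b0 + b1 * t
scaled_errors = errors * predicted_abs[-1] / predicted_abs
```

The scaled errors form the empirical pool from which the rating simulation described later draws its regional yield shocks.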
After the regional trend, g(t), is estimated, a county-specific intercept, a_C, is estimated to account for county-specific differences in productivity. Equation 4 calculates the intercepts as the simple averages of the differences in yields between each county and the average yield for the region:

(4) \( a_C = \frac{1}{T}\sum_{t=1}^{T}\,(C_t - R_t) \)

where C_t is the county yield in year t and T is the number of years in the data set; e_C is the error term remaining in the county series after the intercept and the regional trend are accounted for. All farms in a county are used to calculate yield variations if at least 6 years of yield data are provided by 50 or more farms in the county; if fewer than 50 farms provided yield data for at least 6 years, yields for all farms in the region are used. Equation 5 is used if the function of the yield trend is not linear; the county intercept, a_C, is then calculated by using g(t) from the regional equation to detrend the county data and averaging the detrended values over T, the number of years in the data set:

(5) \( a_C = \frac{1}{T}\sum_{t=1}^{T}\,\big(C_t - g(t)\big) \)

Equation 6 is used to construct a county-adjusted regional yield series for each county in a risk-rating region to maintain a consistent rating process across regions:

(6) \( RC_t = a_C + g(t) + e_{R,t} \)

where RC is the county-adjusted regional yield, a_C is the county-specific intercept, g(t) is the regional trend function, and e_R is the regional residual as estimated above. The intercept, trend value, and error term are summed to construct the county-adjusted regional yield.

In order to determine the yield variability attributable to the farm yield only, it is necessary to isolate yield variability at the county-adjusted regional level. (These two sources of variation are reconstituted during the premium estimation process.) Isolating the variability in this manner allows the county-adjusted regional data set, which is longer than the farm data set, to be used to estimate the severity and frequency of large regional events. Equation 7 shows the construction of the yield variability attributable to the farm level only:

(7) \( d_t = y_t - RC_t \)

where d_t is the deviation from the county-adjusted regional yield for each farm in time t, y_t is the farm yield, and RC_t is the county-adjusted regional yield in time t. Equation 8 shows the construction of the farm's average yield variability attributable to the farm level only:

(8) \( \bar{d} = \bar{y} - \overline{RC} \)

where d-bar is the average deviation from the county-adjusted regional yield for each farm. The deviation is calculated by subtracting the average county-adjusted regional yield, RC-bar, from the average farm yield, y-bar. Equation 9 shows the remaining variability after accounting for variability at the farm and county-adjusted regional levels:

(9) \( e_{f,t} = (y_t - \bar{y}) - (RC_t - \overline{RC}) \)

The variability, or statistical errors, remaining is expressed as the difference between the farm's deviation from its average yield and the county-adjusted regional yield's deviation from its average for the same period of time. If a given county has 50 or more farms with 6 or more years of data, the residuals from the county's farms are used. However, if there are fewer than 50 farms with 6 or more years of data, residuals from all farms in the risk-rating region are used.
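The county adjustment and farm-level decomposition in equations 6 through 9 might be sketched as follows; the county intercept, trend values, regional errors, and farm yields are all hypothetical stand-ins for the estimated components:

```python
import numpy as np

rng = np.random.default_rng(2)

# Equation 6: county-adjusted regional yields = county intercept + trend + regional errors.
T = 50
t = np.arange(T)
g_t = 30 + 0.9 * t                 # regional trend g(t), linear for illustration
a_C = -4.0                         # hypothetical county-specific intercept (equation 4)
e_R = rng.normal(0, 5, T)          # stand-in for the scaled regional errors
RC = a_C + g_t + e_R

# One farm's hypothetical APH history covering the most recent 6 years.
recent = slice(-6, None)
y = RC[recent] + rng.normal(0, 5, 6)

d = y - RC[recent]                             # equation 7: per-year farm deviations
d_bar = y.mean() - RC[recent].mean()           # equation 8: the farm's average deviation
e_farm = (y - y.mean()) - (RC[recent] - RC[recent].mean())  # equation 9: remaining farm variability
```

Because the pool of county-adjusted regional observations is longer than any single farm's record, large regional events can be estimated from the full 50 years of data while farm-specific variability comes from the shorter APH series.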
For the major field crops, price distributions are based on monthly average prices from planting to harvest over a 37-year period. Prices for commodity futures at planting and harvest are used to develop price distributions for the major field crops. Monthly averages of the futures price contracts for the 1960-96 period are constructed for each insured crop. The planting period price, P_p, is defined as the average over a 30-day period ending 2 weeks before the crop insurance sign-up for that crop and location, while the harvest period price, P_h, is defined analogously over the harvest period. Equation 10 relates the change in price between planting and harvest to the proportional deviation of the county-adjusted regional yield from its forecast:

(10) \( \frac{P_{h,t} - P_{p,t}}{P_{p,t}} = \beta\left(\frac{RC_t - \widehat{RC}_t}{\widehat{RC}_t}\right) + e_{P,t} \)

where β measures the relationship between price and yield, RC is the county-adjusted regional yield, and RC-hat is the forecasted county-adjusted regional yield for year t. The term inside the parenthesis adjusts for the lower number of price observations relative to yield observations in the calculation of revenue under Income Protection. This term is constructed in order to generate a zero-mean set of proportional regional yield deviations for the subset of yield data used in the expression. The equation is also used to estimate the price-yield correlation and the remaining statistical errors, which were not accounted for by the variation in the county prices. The error term from this equation is used in a later step to obtain a consistent estimate of revenue.

In order to construct the revenue distribution, errors from the three estimated equations are drawn randomly: e_R from the yield trend equation (1), e_f from the remaining farm variability equation (9), and e_P from the price-yield equation (10). Equation 11 represents the construction of a simulated county-adjusted regional yield:

(11) \( RC^{s} = a_C + g(t) + e_{R}^{s} \)

Equation 12 represents the construction of a simulated farm yield:

(12) \( y^{s} = RC^{s} + \bar{d} + e_{f}^{s} \)

Equation 13 represents the construction of a simulated price realization, applying the price-yield relationship estimated in equation 10 to the simulated yield deviation:

(13) \( P^{s} = P_{p}\left[1 + \beta\left(\frac{RC^{s} - \widehat{RC}}{\widehat{RC}}\right) + e_{P}^{s}\right] \)

Equation 14 represents the construction of a simulated revenue realization:

(14) \( REV^{s} = P^{s} \times y^{s} \)

If REV^s, the realized revenue in a simulation, is less than the guaranteed revenue, a payment, or indemnity, of the difference is assumed to be made, and the amount is recorded. The above process is repeated 10,000 times, and a running total of the payouts is recorded for each of the possible indemnity levels. The average indemnity (total indemnities divided by 10,000) is used as an estimate of the actuarially neutral premium. Using this method, rates were developed for discrete combinations of farm and regional average yields for use in insurance rate tables.

The premium rates offered in Income Protection are developed through a nonparametric statistical model that constructs a revenue distribution on the basis of actual price and yield data. This model does not make any assumptions about the shape of the actual revenue distribution or about price and yield distributions; such assumptions could bias the estimates of expected losses. As part of the rate-setting method, this model takes into account all of the variability in the price and yield data, as well as changes in the price-yield correlation; therefore, it is not necessary to estimate these factors separately. The advantage of this approach is that the shape of the revenue distribution is generated by the actual crop data. Therefore, the error of incorrectly imposing a shape on the revenue distribution is avoided. However, if the underlying distribution is known, a nonparametric method may have the inherent disadvantage of producing less efficient estimates than a parametric method. While Income Protection appropriately relies on an integrated statistical model to estimate probable losses, it does not consider how revenue may change in response to the new farm policy. That is, Income Protection relies on historical crop prices to estimate rates, which, as previously discussed, may not reliably predict future crop prices.
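A minimal sketch of the 10,000-draw simulation loop described above, assuming the reconstructed forms of equations 10 through 14; every numeric value below is a hypothetical stand-in for the estimated components, not a figure from the Income Protection filing:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 10_000  # number of simulated revenue realizations

# Illustrative inputs standing in for the estimated components.
a_C, g_now = -4.0, 75.0            # county intercept and current-year trend value
d_bar = 3.0                        # farm's average deviation (equation 8)
e_R = rng.normal(0, 8, 50)         # pool of scaled regional errors (equations 1-3)
e_f = rng.normal(0, 5, 300)        # pool of farm-level residuals (equation 9)
e_P = rng.normal(0, 0.10, 37)      # pool of price-equation residuals (equation 10)
P_plant, beta = 2.50, -0.8         # planting price and price-yield coefficient
RC_hat = a_C + g_now               # forecasted county-adjusted regional yield

# Draw errors with replacement from the three empirical pools (no distributional assumptions).
RC_s = a_C + g_now + rng.choice(e_R, N)                                   # equation 11
y_s = RC_s + d_bar + rng.choice(e_f, N)                                   # equation 12
P_s = P_plant * (1 + beta * (RC_s - RC_hat) / RC_hat + rng.choice(e_P, N))  # equation 13
rev_s = P_s * y_s                                                         # equation 14

# The average indemnity at a given guarantee estimates the actuarially neutral premium.
guarantee = 0.75 * P_plant * (RC_hat + d_bar)   # hypothetical 75-percent coverage level
premium = np.maximum(guarantee - rev_s, 0).mean()
print(f"estimated actuarially neutral premium: ${premium:.2f} per acre")
```

Because the errors are resampled from the empirical pools rather than from fitted parametric distributions, the simulated revenue distribution inherits its shape directly from the historical data, which is the nonparametric property the plan's developers relied on.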
Developers of the Income Protection plan believe that because of the effect of previous farm programs, historical data sets may underestimate the variances in future farm prices. This would mean that premium rates would be too low to accommodate future price fluctuations and therefore future losses. In order to account for this effect, a 20-percent loading factor was added to the premium.

The following are GAO's comments on the U.S. Department of Agriculture's letter dated April 6, 1998.

1. We disagree. While we do not state in this report, nor do we believe, that the plans contain "fatal flaws," we believe that the shortcomings we identified in all three revenue insurance plans are serious enough to warrant a reevaluation of the methods and data used to set premium rates to ensure that each plan is based on the most actuarially sound foundation. In particular, as we reported, the rating method for Crop Revenue Coverage is especially problematic because it does not take into account the relationship between crop prices and yields.

2. Contrary to the agency's assertion, we do not assume that the premium rates for revenue coverage (without replacement coverage) are always lower than the premium rates charged for yield coverage. We agree with the agency that the rates for these revenue plans can be higher than those for yield coverage when both yields and prices decline. For this reason, throughout the report, we say that a decline in yield is "often" accompanied by an increase in prices.

3. In using the term "actuarial soundness," we mean that the premiums established for each plan are sufficient over the long term to cover the indemnities paid, and that individual premiums are appropriate to the risk each farmer presents. We have revised the report to clarify our use of this term.

4. We have removed the word "mechanical" from the report.

5. We disagree. Estimates of future price volatility based on historical prices and estimates of price volatility based on current market expectations are not equally appropriate. Crop Revenue Coverage and Income Protection base their estimates of future price increases or decreases on the way that prices moved in the past, when certain farm programs were in place that set a price floor. This situation has changed. Under current policy, when prices are tied to market conditions, we continue to believe that the market's expectation of price volatility is the best barometer of intra-year price changes.

6. In the executive summary, we have modified the language to reflect the partially offsetting effects of Crop Revenue Coverage's higher premiums. Our discussion in chapter 2 already reflected this point. Nevertheless, when crop prices are higher at harvest than at planting, claims payments for Crop Revenue Coverage will exceed those paid for multiple-peril crop insurance.

7. We have modified our report to reflect the fact that in recent years the agency has improved its expected loss ratio for traditional multiple-peril crop insurance to achieve the current legislatively mandated 1.10 loss ratio.

8. See comment 1. In addition, we believe that as shortcomings in the methods used to establish premium rates are identified, the Risk Management Agency should take action to correct the deficiencies to the extent possible.

9. We agree that the agency must continually evaluate all rate-making methodologies.
However, when this evaluation reveals shortcomings, as we point out in this report, the evaluations should be translated into actions to ensure that each plan is based on the most actuarially sound foundation.

10. We have modified our report to reflect the agency's authority to approve expansion of Crop Revenue Coverage.

11. The Risk Management Agency's senior actuary informed us that the premium rates for Crop Revenue Coverage average about 30 percent higher than comparable premium rates for traditional multiple-peril crop insurance. Because administrative expense reimbursements are based on a fixed percentage of premiums, higher premiums for Crop Revenue Coverage will result in higher administrative costs to the government. A judgment on whether the reimbursement is adequate to cover expenses was beyond the scope of our work.

Robert C. Summers, Assistant Director
Thomas M. Cook, Evaluator-in-Charge
Barbara J. El Osta
Donald L. Ficklin
Mary C. Kenney
Robert R. Seely, Jr.
Carol Herrnstadt Shulman

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Orders in person:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by faxing (202) 512-6061, or by using TDD (202) 512-2537.

Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed various issues pertaining to the Department of Agriculture's new crop revenue insurance plans. GAO noted that: (1) the three government-subsidized revenue insurance plans differ in the revenue guarantees they provide to farmers and in their relative cost to the government; (2) two of the plans, Revenue Assurance and Income Protection, set the revenue level that is to be protected at the time that crops are being planted, while the third, Crop Revenue Coverage, determines the protected revenue at either planting or at harvest, depending on when crop prices are higher; (3) in terms of potential government costs, Crop Revenue Coverage is likely to cost the government significantly more than the other two plans because of its higher reimbursement for administrative expenses and because of potentially higher total underwriting losses; (4) furthermore, the plan's promise to base the revenue guarantee on the price at planting or the price at harvest, whichever is higher, exposes the government to higher claims payments in the years when widespread crop losses are coupled with rapidly increasing prices; (5) in their first two years of availability to farmers, the crop revenue insurance plans, especially Crop Revenue Coverage, achieved a significant share of the crop insurance market, accounting for about one-third of the total crop insurance sales in the areas where they were offered; (6) in terms of the claims payments for 1997, all types of crop insurance experienced much lower than average levels of claims as a result of favorable growing conditions in most of the country; (7) moreover, primarily because revenue insurance plans were often marketed in lower-risk areas, they experienced lower levels of claims payments than did multiple-peril crop insurance; (8) GAO identified shortcomings in each revenue insurance plan's approach to establishing premium rates; (9) Crop Revenue Coverage is especially problematic because its rate structure does not take into account the interrelationship between crop prices and yields--an essential component of actuarially sound rate setting; (10) while good weather and stable crop prices generated very favorable claims experience over the first 2 years of the plans' availability, GAO has doubts about whether the rates established for each plan are actuarially sound over the long term and are appropriate to the risk each farmer presents; and (11) furthermore, while the plans were initially approved on a limited basis only, the Federal Crop Insurance Corporation, acting within its authority, approved the substantial expansion of one of these plans--Crop Revenue Coverage--before initial results were available.
The Forest Service, an agency in the U.S. Department of Agriculture (USDA), manages 155 national forests covering about 192 million acres of land, or about 9 percent of the nation's land surface, under the leadership of the Chief of the Forest Service, who reports to the Under Secretary of Agriculture for Natural Resources and Environment. National forests are managed under the principles of multiple use and sustained yield to meet the diverse needs of the American people. Under the multiple-use principle, the Forest Service is required to plan for six renewable surface uses—outdoor recreation, rangeland, timber, watersheds and water flows, wilderness, and wildlife and fish. Under the sustained-yield principle, the agency is required to manage its lands to provide high levels of these uses to current users while sustaining undiminished the lands' ability to produce these uses for future generations. It implements these principles using a planning mechanism mandated by the National Forest Management Act, which requires each forest or group of small forests to develop a plan for all uses. This plan must be revised at least every 15 years. This plan, together with the individual projects undertaken to implement it, must comply with various environmental laws establishing standards or procedures designed to protect individual resources, such as threatened and endangered species and water and air quality. In 1992, the Forest Service adopted a management approach for sustaining multiple forest uses called ecosystem management. This management approach recognizes that protecting individual resources under the various environmental laws, as well as ensuring the long-term ability of the land to produce goods and services, requires sustaining the functioning of ecosystems. Ecosystems comprise interdependent biological components (plants and animals, including humans) that interact with their physical environment (soil, water, and air) to form distinct ecological units that span both federal and nonfederal lands. Through these interactions, the components of ecosystems tend to become arranged in distinctive kinds of biological structures, such as different types of forest tree stands. These different ecosystem structures, in turn, are capable of providing different kinds and levels of resources for human use, including timber or water. Natural disturbances, such as fires, floods, windstorms, or droughts, can temporarily affect ecosystem structures. However, these structures are generally resilient over time, recovering and persisting because they have evolved to survive the particular patterns of disturbance common to a given geographical area. Human technology, however, can create rapid, intense, or large-scale disruptions in ecosystem structures. A disruption, such as the elimination of an important biological component, can sometimes alter an ecosystem structure beyond its ability to recover quickly or at all, making the ecosystem unstable or unsustainable and ultimately transforming it into a different kind of ecosystem with different kinds of biological structures. Such a changed ecosystem will provide different kinds or levels of uses from those that humans previously enjoyed and expected. In 1997, the Forest Service identified, as a mission-related strategic goal, achieving healthy and sustainable ecosystems through conserving and restoring ecosystem structures.
A specific objective under this broad goal was restoring or protecting the ecological conditions of forested ecosystems to maintain their components and their capacity for self-renewal. In recent years, several analyses of conditions on national forests of the interior West by agency and outside experts have cited evidence of increased levels of insect and disease infestations; changes in the composition of tree and other forest plant species, including invasion by nonnative plants; increases in the density of tree stands and undergrowth; and increases in the number of small trees. These tree stand conditions have sometimes been referred to collectively as "forest health" problems. At the same time, the term "forest health" has been applied to concerns over declining species, habitat, and watershed conditions on national forests, and some environmental groups have argued that forest health should incorporate these concerns. Numerous administrative appeals and judicial actions have been filed by these groups out of concern that efforts to improve the health of tree stands—which would be implemented, in part, through timber harvesting—may exacerbate problems affecting species, habitat, or watersheds. The Forest Service has also noted a lack of scientific consensus on, or community awareness and acceptance of, the actions needed to address forest health problems, the size of the areas needing to be addressed, and the time frames for taking action. Thus, despite the widespread use of the term in recent years, there is little agreement on a definition of forest health, a standard for measuring it, the appropriate areas and time frames for addressing it, and the actions needed to achieve it. Many Forest Service staff and others feel that, because of its vagueness and subjectivity, the concept is often difficult to use effectively. Forest Service and outside scientists believe that a useful method of assessing a forest's health and functioning is to compare the current conditions of its components and structures to the range of conditions they have exhibited in the past. This range—within which conditions have varied over time in response to disturbance patterns common to a given area—is referred to by scientists as their historical range of variability. Examining the historical range of variability of a forest's tree stands is believed to be an especially useful starting point for analyzing the forest's overall health and functioning because (1) tree stands are the defining biological structures of forested versus other kinds of ecosystems and (2) the conditions of these structures greatly determine the capacity of a forest not only to produce timber, but also to maintain soils, watershed conditions, and wildlife and fish habitats. The historical range of variability of a forest's tree stands is identified by examining historical and biological evidence—such as early pioneers' reports, old photographs, tree rings, and soil layers—to discover what biological components and structures have characterized the forested ecosystem at different times in its natural history. About 60 percent of all national forests and about 70 percent of their total acreage are located in the dry, inland portion of the western United States (hereafter referred to as the "interior West").
This region of the country, depicted in figure 1.1, generally extends north and south from the Canadian to the Mexican border and east and west from the Black Hills in South Dakota to the Cascade mountain range in Washington and Oregon and to the southwestern deserts and the Coastal range in California. Distinct ecological processes—driven largely by climate and topography—shaped the forests of the interior West, producing tree stands that differed in composition and structure from those in other regions of the country. Historically, frequent, low-intensity wildfires played a major role in determining the dispersion and succession of tree stands in the interior West. A lack of rainfall across the interior West generally also slows the decomposition of dead and downed trees and woody material there. The most common type of forested lands on national forests of the interior West are at warm, dry, lower elevations and are generally dominated by ponderosa pine. These are known as "frequent fire interval" forests because, before pioneers settled in these areas, fire historically occurred in them about every 5 to 30 years. Because frequent fires kept these forests clear of undergrowth, fuels seldom accumulated, and the fires were generally of low intensity, largely consuming grasses and undergrowth and not igniting the highly combustible crowns, or tops, of large trees. Figure 1.2 shows the widespread distribution of these "frequent fire interval" forests. In contrast, fire historically occurred only about every 40 to 200 years in the cooler, moister forests at higher elevations, such as those around Yellowstone National Park, which are generally dominated by lodgepole pine. These forests historically developed more dense stands, and fires there generally killed nearly all of the trees. Finally, because the national forests of the interior West are attractive for recreation and aesthetic enjoyment, population has grown rapidly along their boundaries in recent years, creating an area termed the "wildland/urban interface." Figure 1.3 shows the location of areas in the interior West with recent high population growth in relation to the region's national forests. As figure 1.3 shows, areas with higher population growth rates in the interior West over the period are generally concentrated close to national forests. In response to a request from the Chairman, Subcommittee on Forests and Forest Health, we examined (1) the extent and seriousness of problems related to the health of national forests in the interior West, (2) the status of efforts by the Department of Agriculture's Forest Service to address the most serious of these problems, and (3) barriers to successfully addressing these problems and options for overcoming them. As agreed with the requester, to examine the extent and seriousness of problems related to the health of national forests in the interior West, we interviewed and obtained documents from agency officials at Forest Service headquarters, six regional offices with administrative responsibility for national forests located in the interior West, nine selected forests within these regions, and selected agency field research and analysis units. Our selection of agency field units was based on a judgmental sample, and the results may not always be representative of other agency units.
The forests we visited included the Idaho Panhandle National Forest in Idaho, the Arapaho and Roosevelt National Forests in Colorado, the Lincoln National Forest in New Mexico, the Boise National Forest in Idaho, the Plumas National Forest in California, the Shasta-Trinity National Forest in California, the Tahoe National Forest in California, the Deschutes National Forest in Oregon, and the Umatilla National Forest in Oregon and Washington. At these forests, we visited numerous field locations in several ranger districts. We also visited the Tahoe Basin Management Unit, a unit that surrounds Lake Tahoe, straddling the California/Nevada border, and is managed separately. At many locations, we also interviewed and obtained documents from representatives of national and local industry and environmental organizations; other federal agencies; state, local, and tribal governments; and academic and professional forestry policy analysis and technical experts. We also interviewed and obtained documents from representatives of American Forests; the Pinchot Institute for Conservation; the Society of American Foresters; the American Forest and Paper Association; the Western Governors' Association; the Wilderness Society; the Sierra Club; Oregon State University; Colorado State University; the Universities of Arizona, Colorado, Idaho, Montana, and Northern Arizona; and the Ecological Society of America. We also examined numerous statutes, hearing records, regulations, and agency directives related to forest health issues, as well as legislative proposals, prior GAO reports, and studies by the Congressional Research Service. In our field visits, we sometimes also made visual inspections of, and queried agency officials about, forest conditions, their causes, and their significance, as well as obtained views on these issues from local outside parties active in forest issues. To examine the status of the Forest Service's efforts to address the most serious problems related to forest health, we interviewed agency officials and outside parties, reviewed related agency program and budget data, and consulted numerous agency and outside studies of agency activities. To obtain a better understanding of what was involved in some of these activities, we also visited several field sites where such activities were either under way or had recently been completed. We also reviewed agency technical models and planning documents to assess the adequacy of prospective agency efforts and strategies and consulted with other parties to obtain their views on these subjects. As also agreed with the requester's office, our review generally covered agency activities since 1993 and was focused on the role of tree stand conditions in forest health. To examine barriers to successfully addressing problems related to forest health and options for overcoming them, we reviewed numerous recent and ongoing draft studies by executive branch, agency headquarters and field unit, legislative, and outside task forces and commissions, as well as academic and professional journals, and we interviewed and obtained documents from agency officials and outside parties. With respect to estimates of costs for addressing these conditions, we reviewed agency data, estimates from the Congressional Research Service, and documents related to the agency's fiscal year 1998, 1999, and 2000 budgets, as well as annual performance plan data prepared by the agency in conformance with the Government Performance and Results Act of 1993.
During the course of our review, we periodically met with agency headquarters staff and discussed information we had obtained through our work. Although we did not independently verify the accuracy of the data the agency provided to us on acreage, conditions, activities, and costs, we did compare these data with numerous outside analyses and estimates, as well as discussed factors affecting the data's accuracy with agency field and headquarters personnel. We found that those other sources generally corroborated the data the agency provided to us, and in no instances did any inconsistencies significantly affect or materially qualify any findings or conclusions that were based on the agency's data. Our review was conducted from October 1997 through March 1999 in accordance with generally accepted government auditing standards. According to the Forest Service, about 39 million acres of tree stands on national forests of the interior West are at high risk of catastrophic fire, largely because the agency's decades-old policy of suppressing historically occurring, periodic, small wildfires has led to unprecedented accumulations of flammable materials. As a result, wildfires have increased in number and size over the last decade and are increasingly difficult and costly to fight. While these conditions threaten the sustainability of forest resources, they also increasingly threaten human health, lives, property, and infrastructure in nearby communities. The window of opportunity for taking corrective action is estimated to be only about 10 to 25 years before widespread, unstoppable wildfires with severe immediate and long-term consequences occur on an unprecedented scale. According to the Forest Service, large areas of national forests in the interior West are not healthy. A key symptom of their poor health is denser tree stands—i.e., stands with many more small trees, undergrowth, and accumulated dead materials on the ground than were found in the past. Additionally, the proportion of less fire-tolerant species in these tree stands has increased, as has the incidence of some disease and insect infestations. Increased stand densities are often related to these changes in tree species, as is the increased incidence of insects and diseases. According to the Forest Service, a significant symptom of poor health on national forests in the interior West is the much greater density of stands now than in the past. For example, officials in the Lincoln National Forest told us that high stand density conditions exist on an estimated 79,712 acres—or 35 percent—of its mixed conifer forest; 19,099 acres—or 22 percent—of its ponderosa pine forest; and 576,622 acres—or 55 percent—of its pinyon-juniper forest. The proportion of stands with densely growing, small and medium-sized trees on the Idaho Panhandle National Forest is reported by the agency to be about 50 percent above average historical levels. An estimated 35 to 50 percent of the 700,000 acres of mixed conifer and ponderosa pine on the Deschutes National Forest have more trees per acre than normal and are at risk according to agency officials. A 1994 study of scientifically selected sites in Arizona indicated that the estimated density of trees on 70 sites in the Coconino National Forest had greatly increased (from 23 per acre in 1867 to 276 in 1990), as it had on 46 sites in the Kaibab National Forest (from 56 trees per acre in 1881 to 851 in 1990).
By another measure, the estimated total cross-sectional area of trees, measured at 4.5 feet above the ground surface, had grown from about 25 square feet per acre to about 150 square feet on the first forest and from about 50 square feet per acre to over 150 square feet on the other forest over the same time periods. Figures 2.1 and 2.2 are photographs taken from the same spot on the Bitterroot National Forest in 1909 and 1989. They illustrate the dramatic change over the intervening 80 years from the historically more common, open, large tree structure of such forest stands to the more recent, typically denser structural conditions dominated by smaller trees. A second major symptom of health problems on national forests in the interior West that we visited was a change in the historical composition of tree species, often to a greater proportion of trees of less fire-tolerant species. For example, the historically prominent western larch species has been lost and replaced by other species of trees on 211,000 acres—or 69 percent of its historical acreage—on the Idaho Panhandle National Forest. Likewise, the ponderosa pine has been replaced by other species on 76,000 acres—or 67 percent of its historical acreage—on this forest. In many parts of Oregon's Deschutes National Forest, ponderosa pine has also been replaced by Douglas fir and mixed conifers over the last few decades. A third major symptom of health problems on national forests in the interior West is the increase in some insect and disease infestations. For example, on the Lincoln National Forest in New Mexico, round-headed pine beetles have infested 49,495 acres—or 57 percent—of the forest's ponderosa pine, while the western spruce budworm has infested 120,000 acres of its Englemann and blue spruce and Douglas and white fir. In addition, dwarf mistletoe disease has infested 55,563 acres—or 64 percent—of its ponderosa pine, and 113,875 acres—or about 50 percent—of its Douglas fir. The Douglas fir tussock moth damaged 250,000 acres on the Boise National Forest in Idaho, killing millions of trees. The Douglas-fir beetle and the fir engraver beetle killed many more trees in this same forest, and dwarf mistletoe is estimated to infest 119,012 acres—or 33 percent—of the Douglas fir; 78,636 acres—or 10 percent—of the ponderosa pine; and 43,376 acres—or 50 percent—of the lodgepole pine. Various defoliating insects infest about 20 percent of the Deschutes National Forest's mixed conifer and ponderosa pine forest, and dwarf mistletoe disease infects about 40 percent of its mixed conifer and ponderosa pine. Root disease also affects about 20 percent of this forest and, according to Forest Service officials, it is a major problem on the Idaho Panhandle National Forest, as it is elsewhere in the interior West. In addition to these three symptoms of poor forest health, national forests in the interior West are facing invasions of nonnative plants and diseases that outcompete and displace native vegetation in many areas. For example, in the Lincoln National Forest, 12 aggressive nonnative plant species have been identified as occupying approximately 5,200 acres across two ranger districts. Forest officials saw such plants spread by 30 percent in the early 1990s and expect this trend to increase. Various noxious plants, such as knapweeds and thistles, were estimated in 1996 to cover at least 5,000 acres of the forests and grasslands of the Arapaho/Roosevelt National Forest, and are expected to nearly triple their coverage by the year 2000.
On the Deschutes National Forest, native shrubs and plants associated with dominant tree species are being displaced by invasive nonnative noxious plants at a rate that forest officials estimate is tripling every year. Similarly, nonnative diseases, to which many native tree species have thus far evolved little resistance, have spread. For example, white pine blister rust, a disease accidentally introduced from Europe in 1910, primarily caused the loss of 656,000 acres—or 90 percent—of the western white pine forests on the Idaho Panhandle National Forest and 7,900 acres—or 64 percent—of the whitebark pine forests. The disease has also been found at every surveyed plot on the Boise National Forest, where the incidence of infection in tree stands varied and was as high as nearly 70 percent. This same disease was detected on the Lincoln National Forest in New Mexico in 1990. As early as the mid-19th century, European American settlers' activities began to affect the interior West's ecology, introducing changes that gradually weakened the health of the region's national forests. These changes occurred in response to several factors that have generally excluded fire from these forests, preventing it from playing its historical role of limiting the forests' density, clearing undergrowth and downed material, and influencing species composition. These factors include (1) extensive livestock grazing and changes in land use first introduced by European American settlers in the late 1800s, which not only eliminated much of the grass that historically carried fire through the forests' undergrowth but also ended Native Americans' practice of setting such fires for hunting game and other purposes; (2) past timber-harvesting methods that selectively removed the larger, more valuable, and more accessible trees or removed all of the trees from a timber-harvesting site at one time (clear-cutting), allowing other species to increase; and (3) invasions by nonnative plants, insects, and diseases. However, while these factors generally laid the groundwork for and set in motion significant changes in these forests' ecologies, according to several studies, the primary factor currently contributing to unhealthy forests in the region has been the Forest Service's decades-old policy of suppressing fire on the national forests. Fire suppression was first practiced to protect early settlements from the risk of uncontrollable wildfires. Later, it was used as an agricultural technique to increase the number of trees available for timber harvesting. But without frequent fires, vegetation accumulated so that many stands have become denser, and less fire-tolerant tree species have become more prevalent. As the forests' density and composition have changed, stands have become more susceptible to drought and to the incidence of insects and disease, including native ones that have historically played an important role in the evolution—particularly in the decomposition and succession cycles—of forest tree stands. Native insects and diseases sustain the health of forest stands so long as their levels remain within their historical ranges of variability. But contiguous areas of dense stands provide opportunities for insects and diseases to exceed their historical ranges and spread across large areas. In addition, invasions by nonnative plants and diseases have sometimes exacerbated problems arising from the other causes.
Current tree stand conditions and the continuing absence of historically occurring frequent wildfires threaten various national forest resources in the interior West. For example, according to a 1998 analysis by the Department of the Interior’s Fish and Wildlife Service, of the 146 threatened, endangered, or rare plant species found in the coterminous states for which there is conclusive information on fire effects, 135 species benefit from wildfire or are found in fire-adapted ecosystems. Furthermore, according to a 1994 Northern Arizona University study, increases in density and changes in species composition alter soil moisture, as well as the availability of nutrients and water for plants and animals, watershed functioning and stream flow, and water quality, affecting both terrestrial and aquatic species. Experts have also expressed concern about the possibility that such changes will accelerate mortality among the remaining older ponderosa pines and other trees. The Forest Service estimates that 39 million acres of national forestlands in the interior West are at high risk of catastrophic wildfire because of denser stands and related conditions. As a result, the number and size of large, intense fires have grown over the last decade, resulting in higher fire suppression and preparedness costs and greater damage. Such fires, which are increasingly unstoppable, threaten not only the sustainability of national forest resources, but also human health, lives, property, and infrastructure in nearby communities. Experts have estimated that a window of only 10 to 25 years is available for taking effective action before widespread, long-term damage from such fires occurs. In the currently denser stands of the national forests in the interior West, where many smaller dead and dying trees now often form fuel “ladders” to the crowns of larger trees—and where such stands are often continuous rather than separated by stands that have recently been thinned by fire—wildfires have increasingly become large, intense, and catastrophic. Our analysis of the Forest Service’s data shows that the agency was highly effective in suppressing fires on the national forests for about 75 years after 1910, reducing substantially the number of national forest acres burned annually, over 90 percent of which have been in the interior West. However, figure 2.3 shows that recently the agency’s efforts have been less effective. As figure 2.3 shows, over the last decade, the number of acres of national forestlands burned by wildfires has begun to increase, reversing the trend of the previous three-quarters of a century. This is because excessive accumulated fuels have made fires larger and more intense, as shown in figure 2.4. As shown in figure 2.4, since 1984, the average annual number of fires on national forests that burn 1,000 acres or more has increased from 25 to 80, and the number of total acres burned (including acres on nearby lands) by these fires has more than quadrupled, from 164,000 to 765,000. Since 1990, 91 percent of these large fires and 96 percent of the acres they burned were in the interior West. In 1995, the Forest Service estimated that 39 million acres, or about one-third of all lands it manages in the interior West—more than ever known before and more than in all other regions of the country combined—are now at high risk of large, uncontrollable, catastrophic wildfire. 
According to agency officials, virtually all of these lands are located in the lower-elevation, frequent-fire forests of the interior West that have historically been dominated by ponderosa pine. These forests are particularly susceptible to such fires because, as stated in a 1995 internal agency report, far more cycles of fire (up to 10) were suppressed in these forests than in the higher-elevation, lodgepole-pine-dominated forests—where generally only one or no fire cycle was suppressed. Figure 2.5 shows locations in the interior West identified by experts outside the Forest Service where the risks of fire have been rated medium or high. Areas currently at medium risk are included because fuels can further accumulate on them so that, over time, they may become high-risk areas. Compared with other forest fires, catastrophic wildfires burn many more acres, destroy much more timber and wildlife habitat, and subject exposed soils to substantial erosion during subsequent rains, damaging water quality. As a result, catastrophic wildfires compromise the forests’ ability to sustain timber, outdoor recreation, clean water, and other uses. These increasing numbers of larger, more intense fires also pose hazards to human health, safety, and property. For example, 14 firefighters lost their lives in the 1994 South Canyon Fire in Colorado, which—because of its size and intensity—was able to rapidly surround them. Although investigation reports of this fire did not identify fuel levels as a causal factor in the fatalities, they cited highly flammable and hazardous fuels as a contributing factor. This fire did not originate in a frequent-fire ponderosa stand, but in a stand of a different species, indicating that catastrophic wildfire hazards are not limited to stands dominated by ponderosa. The hazards to human health, life, and property are especially acute along the national forests’ boundaries, where population has grown rapidly in recent years—an area termed the “wildland/urban interface.” Because smoke from such fires contains substantial amounts of fine particulate matter and other hazardous pollutants, the fires can pose significant health risks to people living in this interface. Such fires also threaten infrastructure vital to nearby human communities. For example, the 1996 Buffalo Creek fire, which burned several thousand acres and threatened private property in the wildland/urban interface southwest of Denver, left forest soils subject to extreme erosion. Subsequent repeated rainstorms washed what ordinarily would have been several years’ worth of sediment into a reservoir that supplies Denver with water. As a result, the Denver Water Board has estimated that it will incur several million dollars in ongoing expenses for dredging the reservoir and treating the water—an amount several times greater than the cost of fighting the fire. The growing number of large wildfires and acres burned—coupled with the increasing complexity of suppression in the wildland/urban interface—has greatly increased the Forest Service’s costs of fighting fires, as shown in figure 2.6. As figure 2.6 indicates, from fiscal year 1986 through fiscal year 1994, the 10-year rolling average of annual costs for fighting fires grew from $134 million to $335 million in constant 1994 dollars, a 150-percent increase. Since 1990, 95 percent of these costs were incurred in the interior West. 
Moreover, as shown in figure 2.7, the costs associated with preparedness, including the costs of keeping equipment and personnel ready to fight fires, have also been increasing. As figure 2.7 indicates, for the 6 fiscal years from 1992 through 1997, fire preparedness costs increased by 72 percent, from $189 million to $326 million. However, even though expenditures for both suppression and preparedness have increased in recent years, the agency's fiscal year 2000 budget proposal calls for maintaining the current funding levels for both. Given the growing threats of catastrophic wildfire, the agency's budget proposal notes that maintaining the current funding level for preparedness will result in increased risks of injury and loss of life to both the public and firefighters.

"Uncontrollable wildfire should be seen as a failure of land management and public policy, not as an unpredictable act of nature. The size, intensity, destructiveness and cost of . . . wildfires . . . is no accident. It is an outcome of our attitudes and priorities. . . . The fire situation will become worse rather than better unless there are changes in land management priority at all levels."

In the last decade, the Forest Service has undertaken several actions to better understand and reduce the threat of catastrophic wildfires on national forests in the interior West. The Congress has been increasingly supportive of these efforts. Nonetheless, the agency may not be able to achieve its announced goal of adequately resolving the problem by the end of fiscal year 2015. Our analysis of the agency's plans and data indicates that as many as 10 million acres may remain at high risk at that time because the agency will need to divide its planned efforts and resources between reducing accumulated fuels on high-risk areas in the interior West and maintaining current low-risk conditions on other national forestlands. In recent years, the Forest Service has taken steps to address the increasing threat of catastrophic wildfires on national forests. For instance, in 1990, the agency, along with other federal and state agencies, initiated a forest health monitoring program to better identify tree stand conditions, including outbreaks of insects and diseases and dead trees. In 1995, it announced its intention to refocus its fire management program on reducing accumulated fuels. Specifically, in a 1995 report, the agency recommended increasing the number of acres on which accumulated fuels are reduced annually from about 570,000 to about 3 million by fiscal year 2005. In 1997, the Chief of the Forest Service said it was the agency's intention to implement this recommendation, and the agency plans to continue reducing fuels on 3 million acres per year through fiscal year 2015. By that time, the agency believes that it will have adequately reduced the current high risks to national forestlands of uncontrollable, highly destructive wildfires. To implement its increased emphasis on reducing accumulated fuels, the Forest Service restructured and redefined its fiscal year 1998 budget for wildland fire management to better ensure that funds are available for these activities.
In fiscal year 1998, it announced that the funds appropriated for reducing fuels would be allocated to (1) protect high-risk wildland/urban interfaces, with special emphasis on areas subject to frequent fires; (2) reduce accumulated fuels within and adjacent to wilderness areas; and (3) lower the expected long-term costs of suppressing wildfires by restoring and maintaining fire-adapted ecosystems. In addition, the Forest Service has identified reducing accumulated fuels on the national forests as a key measure of its performance in accomplishing its high-priority, long-term strategic goal of restoring and protecting forested ecosystems. In the past 5 years, the Forest Service—either alone or with the Department of the Interior and other federal agencies—has issued several reports (1) addressing the health of forests in the interior West as well as in other regions of the country, including the health effects of fire suppression and (2) proposing management approaches to more efficiently and effectively reduce accumulated fuels. The agency has also (1) revised its wildland fire management policy to more clearly spell out its responsibilities and reimbursable costs so that nonfederal parties can understand the consequences of not working with the agency to reduce the risk of wildfire on their adjacent lands and (2) proposed a number of demonstration projects in collaboration with willing nonfederal partners to demonstrate the role of mechanical methods (including timber harvesting) of removing materials to reduce accumulated fuels. The Congress has supported the Forest Service's efforts to reduce accumulated fuels by, among other things, increasing the funding for this activity. In addition, in acting on the agency's fiscal year 1998 budget, the House and Senate appropriations committees approved the Forest Service's budget restructuring to better ensure that funds are available for reducing accumulated fuels. The committees also earmarked $8 million in fiscal year 1998 for the agency and the Department of the Interior to begin a multiyear program, called the Joint Fire Science Program, to gather consistent information on accumulated fuels and ways to reduce them. In January 1998, the agencies issued a plan for conducting this program. This plan called for the Forest Service and Interior to conduct and sponsor research and analysis projects aimed at better understanding (1) the location and extent of problems with accumulated fuels, (2) the effects on other resources of different approaches to reducing these fuels, (3) the relative cost-effectiveness of these different approaches, and (4) the importance of compatible interagency approaches to monitoring and reporting efforts to reduce fuels. Recently, the initial projects under this multiyear program were authorized and begun. Additionally, the Congress, in its fiscal year 1999 appropriation to the Forest Service, approved the agency's request to conduct "stewardship contracting demonstration projects" in collaboration with willing nonfederal partners. These projects are intended to demonstrate the role of mechanical methods (including timber harvesting) of removing materials to reduce accumulated fuels. The Congress also authorized the Forest Service, in implementing these demonstration projects, to experiment with alternative contracting procedures.
Although the Forest Service, with the active support of the Congress, is taking steps to address the growing risks of catastrophic wildfires on the national forests, it may not be able to adequately resolve the problem by the end of fiscal year 2015. In particular, the agency's current plans may significantly underestimate the number of acres on which fuels must be reduced annually to adequately reduce fire hazards. Our analysis of the agency's initial plans and data indicates that as many as about 10 million acres in the interior West may still have excessive fuel levels and still be at high risk of uncontrollable, catastrophic wildfire at the end of fiscal year 2015. This shortfall may occur largely because the Forest Service's actual allocation of the funds appropriated for reducing accumulated fuels is not linked to its own criteria for allocating those funds. The current and planned allocations largely emphasize maintaining satisfactory conditions on lands outside the frequent-fire forests of the interior West that currently have low levels of accumulated fuels so that conditions on them do not also become hazardous. To maintain satisfactory conditions on these other forests, the Forest Service will need to continue reducing fuels on them, at a rate of about 1 million acres per year. Thus, the agency's plans to reduce fuels nationally on 3 million acres per year will provide for only about 2 million acres on national forests in the interior West. This level of accomplishment will likely fall short of the levels needed to meet the agency's goals for the interior West's frequent-fire forests. Moreover, despite budget allocation criteria emphasizing the restoration of high-risk interface areas within the interior West's frequent-fire forest ecosystems, such restoration activities will be limited by incomplete information. As the agency noted in February 1999, it has not yet mapped these interface areas with the precision needed to identify and design individual high-priority fuel reduction projects. Additionally, despite earlier plans to steadily increase its fuel reduction efforts, the agency is now intending to scale back the work, according to its fiscal year 2000 budget proposal. Initially, it planned to increase its efforts nationwide from about 1.5 million acres in fiscal year 1999 to 1.8 million acres in fiscal year 2000, building toward 3 million acres per year by fiscal year 2005. However, in its recently proposed fiscal year 2000 budget, it called for reducing fuels on only 1.3 million acres, or on fewer acres than planned for the current fiscal year. However, it should be noted that the Forest Service could very likely substantially reduce fire hazards without reducing fuels on all 39 million acres currently at high risk of catastrophic fire. For example, it might be able to construct fuelbreaks—i.e., areas where excessive fuels have been removed in strategic locations to isolate areas that still have excessive fuels—and thus limit the spread of large fires. But the Forest Service has not yet developed a general strategy for selectively reducing fuels, nor for implementing any alternative strategic approach that would allow it to systematically assign priorities to areas and thus safely decide not to reduce fuels on some lower-priority areas. Until it develops such a strategy, it has no basis for eliminating any current high-risk areas from its fuel reduction efforts, nor can it adequately evaluate the relative effectiveness or efficiency of its current efforts.
The Forest Service stated in 1996 that its forest planning efforts did not adequately consider historical fire disturbance cycles. The purpose of the Joint Fire Science Program is to obtain information critical to planning and undertaking effective agency actions. However, an agency official involved in implementing the program said that 10 years will be needed to complete it and that, as it is completed, national forests will use its findings to amend or revise current individual forest plans. Efforts to revise forest plans can take several years. Progress to date in gathering data under the program has been slow. In September 1998, the agency said that under the Joint Fire Science Plan, it would complete an initial mapping of the locations and levels of existing hazardous conditions on national forests before the end of the year. However, in February 1999, the agency said that the results of initial efforts to map these conditions still needed additional review and that, even when the initial mapping was completed, the data would not yet be precise enough to provide a basis for ranking and designing site-specific fuel reduction projects. Although the Forest Service is experimenting with using this type of mapping information in conjunction with other, more local analyses to rank and design individual fuel reduction projects in the Idaho Panhandle area, it has not yet developed a consistent, agencywide mapping approach. The recently approved stewardship contracting demonstration projects—for testing new partnership and contracting procedures for reducing fuels—are in the initial selection and analysis stage. Critical to the usefulness of these demonstration projects will be the Forest Service's development, at their outset, of a common framework for systematically evaluating their effectiveness. Such a framework is necessary for the agency to gather and summarize consistent information on the projects' implementation, results, and lessons learned so that the lessons can be applied more generally to the agency's future fuel reduction efforts. However, no common evaluation framework has been developed yet, even though many of the demonstration projects are soon to be implemented. Without adequate data, the Forest Service has not been able to develop a cohesive strategy for addressing numerous policy, programmatic, and budgetary factors that present significant barriers to the accomplishment of its fuel reduction goals. These factors include (1) difficulties in reconciling needed actions with other legislatively mandated stewardship objectives to protect resources, (2) program incentives that tend to focus on areas that may not present the greatest wildfire hazards, (3) statutorily defined contracting mechanisms that do not facilitate the removal of many hazardous fuels, and (4) costs for reducing fuels on high-risk areas that may be as high as $12 billion between now and the end of fiscal year 2015. The agency has not systematically identified the steps or activities to be undertaken in order to overcome these barriers, nor has it developed a schedule for accomplishing them. Methods to reduce fuels can be difficult to reconcile with agencies' other responsibilities. In dense tree stands, fires are difficult to control and may escape. In addition, controlled burning on a scale consistent with that of historically frequent fires is difficult to use without violating air quality standards established under the Clean Air Act.
However, mechanically removing fuels (through commercial timber harvesting, among other means) can also adversely affect wildlife habitat and water quality in many areas and, in any event, areas with commercially valuable timber are often not those where the greatest wildfire hazards exist. In addition, the agency's fuel reduction program rewards managers for the number of acres on which they reduce fuels, without taking into account the relative hazards on those acres; it does not reward managers for reducing fuels on the most hazardous acres. Finally, the agency's statutorily defined contracting mechanisms were primarily designed for removing high-value timber, not excess accumulated fuels that are generally low in value and can be costly to remove. As a result, the cost to the Forest Service for reducing fuels on the 39 million acres at high risk may be about $12 billion between now and the end of fiscal year 2015, or an average of about $725 million annually, and these costly activities will have to be repeated in the future. Activities for reducing accumulated fuels can sometimes be difficult to reconcile with other legislatively mandated stewardship objectives, including meeting clean water quality standards and protecting threatened and endangered species. According to an agency official, in the past, the Forest Service sometimes used chemicals (herbicides) to kill undergrowth, which could then be burned. Combining these two methods was often less costly than mechanically removing the undergrowth. The agency has, however, largely stopped using herbicides because of concerns about their adverse effects on water quality and human health. Additionally, because large ponderosa pine trees were selectively harvested and fire was suppressed in the Deschutes National Forest in Oregon, ponderosa stands have largely been replaced by abnormally dense stands of Douglas fir. However, many of the Douglas fir stands cannot be removed because they now provide habitat for the threatened northern spotted owl, whose naturally occurring habitat on the western side of the Cascade mountain range has been significantly reduced by timber harvesting. Many agency and outside experts believe that, ultimately, avoiding catastrophic wildfires and restoring forest health in the interior West will require reintroducing fire through burning under controlled conditions to reduce fuels. However, the use of controlled fire in the interior West has two limitations. First, winter snows limit the time available for burning, and dry summer weather creates a high risk that, given massive levels of accumulated fuels, controlled fires will escape and become uncontrollable, catastrophic wildfires. Second, several officials and experts we spoke with believe that emissions from controlled fires on the scale that is needed to adequately reduce fuels would violate federal air quality standards under the Clean Air Act. Hence, in their view, the act would not permit the desired level of burning either immediately or possibly even in the long term. The Forest Service and the Environmental Protection Agency, which administers the Clean Air Act, are currently conducting a 3-year experiment to better determine the impact of emissions from controlled fires. For these reasons, many experts agree that fuels must be reduced in most areas of the interior West, at least initially, by mechanical means, including commercial timber harvesting, in conjunction with controlled burning.
The Forest Service currently uses its timber sales management program to reduce accumulated fuels. However, the use of timber harvesting to reduce fuels has been limited by concerns about its adverse effects on other stewardship objectives. Specifically, in fiscal year 1997, timber harvesting was used to reduce fuels on only about 95,000 acres, or fewer than 5 percent of the acres on which fuels will need to be reduced annually to achieve the agency’s long-term goal. Forest Service officials told us that it was not likely that commercial timber harvesting could be increased enough to adequately reduce fuels on the vast acreage needing such reductions. Moreover, mechanical removals under both the timber sales management program and the fuel reduction program funded by appropriations currently involve incentives that tend to focus efforts on areas that may not present the greatest fire hazards. For example, under its fuel reduction program, the Forest Service’s lone performance indicator measures the number of acres treated. Agency field staff told us that funding for forests often depends on their ability to contribute to the agency’s acreage targets. As a result, forest staff often focus on areas where the costs of reducing fuels are low so that they can reduce fuels on more acres, rather than on those areas with the highest fire hazards, including especially the wildland/urban interfaces. These high-hazard areas often have significantly higher per-acre costs because of limitations on the use of less expensive controlled fires as a tool to reduce the accumulated fuels. Although the Forest Service is considering making changes to its current performance indicator, it has not yet done so. Timber harvesting may make useful contributions to reducing accumulated fuels in many circumstances. However, reducing fuels with the funds allocated for timber sales management may also provide an incentive for forests to focus on less critical areas. The Forest Service stresses that its timber sales management program is increasingly being used for efforts to improve forest health, including efforts to prevent catastrophic fires. The agency relies on timber production to fund many of its programs and activities, and all three of its budget allocation criteria for timber activities relate solely to the volume of timber produced or offered. As a result, as forest officials told us, they tend to (1) focus on areas with high-value commercial timber rather than on areas with high fire hazards or (2) include more large, commercially valuable trees in a timber sale than are necessary to reduce the accumulated fuels. Similarly, an interagency team that reviewed the implementation of the Emergency Salvage Timber Sale Program observed that some Forest Service personnel focused more on harvesting timber than on protecting forested ecosystems. This tendency of some agency personnel was further documented in a 1999 report by the Department of Agriculture’s Office of Inspector General. Most of the trees that need to be removed to reduce accumulated fuels are small in diameter and have little or no commercial value. For example, to return experimental forest plots near Flagstaff, Arizona, to historical conditions, 37 tons per acre of nonmarketable trees and vegetation had to be disposed of by being placed in a pit and burned. 
However, the agency’s largely statutorily defined contracting procedures were not designed to (1) facilitate the systematic removal of large volumes of low-value material over a number of years, (2) readily combine funds for conducting timber sales with funds for reducing accumulated fuels, or (3) allow contractors to retain this low-value material to partially offset the costs of its removal. More specifically, the agency’s two principal contracting procedures for removing materials from national forests are (1) competitively bid timber sale contracts under which the party removing the material purchases it at fair market value and expects to sell it for a profit and (2) service contracts, funded by appropriations, which do not involve selling the material, but merely paying a contractor for removing it. The National Forest Management Act of 1976 generally does not allow materials worth more than $10,000 to be removed from national forests under service contracts; instead, such materials must generally be removed under competitively bid timber sale contracts. However, low-value materials are unattractive to timber purchasers. As a result, the value of this contracting procedure for reducing low-value fuels is quite limited. While the materials to be removed may not be valuable enough for contractors to make a profit by purchasing them, the materials often have some lesser value. If purchasers could keep this material, they could apply its lesser value to offset at least part of their costs for removing it. They could then charge the Forest Service less for removal, saving the government money while reducing fuels on more acres for any given level of appropriated funding. However, the agency generally does not have the authority to trade goods (in the form of low-value forest materials) for a service (such as removing them). Because of these restrictions, in 1998, Agriculture’s Office of General Counsel determined that only 6 of 23 projects proposed by the Forest Service to demonstrate, among other things, the role of timber harvesting in reducing accumulated fuels, could proceed under the agency’s existing statutory authority. The remaining projects would, among other things, have involved removing material of greater total value than is allowed under service contracts or letting contractors keep some material in exchange for removing it. In the Fiscal Year 1999 Omnibus Consolidated and Emergency Supplemental Appropriations Act, the Congress authorized the Forest Service, through fiscal year 2002, to enter into 28 individual demonstration project contracts under which (1) the value of the material removed may be used by the contractor to offset the costs of removal, and (2) there is no limitation on the value of the material to be removed. However, the more general authority temporarily granted to the agency in the early 1990s to enter into “land stewardship contracts”—under which contractors were allowed to retain material they removed in exchange for achieving desired conditions on the national forests—has not been renewed. Because the materials removed through fuel reduction efforts often have low or no value, the revenue they generate will not cover the costs of their removal. Consequently, agency officials and outside analysts agree that reducing accumulated fuels in the interior West is likely to require hundreds of millions of dollars a year in appropriated funds. 
Our preliminary analysis of the Forest Service's fuel reduction costs—which, according to the agency's data, average about $320 per acre for the combination of burning and mechanical removal that is necessary in the interior West—indicates that as much as $12 billion, or about $725 million a year, may be needed to treat the 39 million acres at high risk of uncontrollable wildfire by the end of fiscal year 2015. These costs might be less if the agency reduced current hazards on the 39 million acres selectively, in accordance with a systematic strategy and set of priorities. For fiscal year 1999, the agency requested and received $65 million to reduce accumulated fuels—or less than one-tenth of the annual level that may be needed to accomplish its goal. At that time, it projected that it would increase its request to $102 million for fiscal year 2000, in keeping with its announced intention to increase its fuel reduction efforts through fiscal year 2015. However, in its recently released fiscal year 2000 budget request, the agency instead asked for the same $65 million it received for fiscal year 1999. The agency stated that, because fuels have already been reduced on the least costly areas, this funding level will provide for even fewer acres than it did in the previous year. Moreover, our analysis of the costs to reduce fuels on national forest acres identified as being at high risk examined only the “first-time” costs of reducing fuels on them. Fuels will have to be reduced periodically in order to maintain forest health. For example, in 1998, the Wenatchee National Forest in Washington stated that it would have to begin reducing fuels on areas treated only 10 to 15 years ago because undergrowth had accumulated in the interim, posing new fire hazards. Forest Service officials we spoke with agreed with a 1997 observation by the Secretary of the Interior that substantial efforts to reduce fuels will have to be repeated three to five times or more on these lands over many decades, although the later repetitions may be less costly. We have previously noted that the Forest Service lacks accountability in implementing its ecosystem management approach to ensure sustainable multiple uses of the national forests. Specifically, we noted that (1) its goals and objectives under this approach are not linked to performance measures to ensure their accomplishment and (2) it lacks a goal or schedule for achieving accountability for its performance. This observation applies equally to the agency's efforts to address the threat that catastrophic wildfires pose to sustainable multiple uses. For instance, as noted in this report, the incentive implicit in its current performance measure for fuel reduction tends not to focus activities on the most hazardous areas. Thus, the agency has no meaningful performance measure and goal related to reducing catastrophic wildfire hazards. Such a meaningful performance measure and goal are critical if the agency is to develop a cohesive strategy for reducing accumulated fuels and be held accountable for accomplishing this strategy. According to Forest Service officials, the agency has not established such a meaningful performance measure and goal for reducing fuels because it lacks sufficient data on the location of acres in national forests at high risk of catastrophic fire, as well as on the cost-effectiveness and effects on other resources of methods for reducing fuels. Our observations at the forests we visited confirmed this lack of data.
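The cost estimate cited at the opening of this discussion follows from simple arithmetic. A minimal sketch, assuming the annual average is taken over the 17 fiscal years from 1999 through 2015:

```python
# Rough reproduction of the cost figures cited above. The acreage and
# per-acre cost come from this report; treating the period as the
# 17 fiscal years from 1999 through 2015 is an assumption.
ACRES = 39e6             # acres at high risk of uncontrollable wildfire
COST_PER_ACRE = 320      # average cost of combined burning and mechanical removal
YEARS = 2015 - 1999 + 1  # 17 fiscal years

total = ACRES * COST_PER_ACRE
print(f"first-time treatment cost: ${total / 1e9:.1f} billion")        # ~$12.5 billion
print(f"average annual cost:       ${total / YEARS / 1e6:.0f} million")  # ~$734 million
```

The result, about $12.5 billion and $734 million a year, is consistent with the approximately $12 billion and $725 million figures cited above.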
Forest officials could only estimate or tell us in general terms how many acres they believed were at such risk, but could not identify particular high-risk locations or high-priority areas with any significant precision. Agency officials believe that having such data, which the Joint Fire Science Program is intended to identify, will better enable them both to develop a meaningful performance goal and measure and to better reconcile different fuel reduction approaches with other stewardship objectives. Similarly, they believe that data from the stewardship contracting demonstration projects will help them identify changes in statutorily defined contracting procedures that would better facilitate the accomplishment of fuel reduction goals. However, the agency has not systematically identified a cohesive set of activities or steps that it will undertake to obtain needed data, better reconcile objectives, or identify desirable changes in contracting procedures. Nor has it outlined a schedule for accomplishing these tasks. We believe that the threats and costs associated with increasingly uncontrollable, catastrophic wildfires, together with the urgent need for action to avoid them, make them the most serious immediate problem related to the health of national forests in the interior West. We also believe that the activities planned by the Forest Service may not be sufficient and may not be completed during the estimated 10- to 25-year “window of opportunity” remaining for effective action before damage from uncontrollable wildfires becomes widespread. The tinderbox that is now the interior West likely cannot wait that long for a cohesive strategy to be implemented. Simply allowing nature to take its inevitable course may cost more—not only for fire suppression, but also in human lives and damage to natural resources, human health, property, and infrastructure—than would undertaking strategic actions now. The increasing number of uncontrollable and often catastrophic wildfires in the interior West, as well as the significant costs to reduce growing hazards to natural resources and human health, safety, property, and infrastructure, present difficult policy decisions for the Forest Service and the Congress: Does the agency request, and does the Congress appropriate, the hundreds of millions of dollars a year that may be required to fund an aggressive fuel reduction program? If enough is not appropriated, what priorities should be established? How can the need to reintroduce fire into frequent-fire forests, and to mechanically remove fuels, best be reconciled with air quality standards and other stewardship objectives? What incentives and changes in statutorily defined contracting procedures are needed to facilitate the mechanical removal of low-value materials? Such decisions should be based on a sound strategy that, in turn, depends in large part on data being gathered under the Forest Service and Interior's Joint Fire Science Program and the Forest Service's stewardship contracting demonstration projects. With these data, the agency will be able to establish more meaningful performance measures, priorities, and goals for reducing fuels. It will also be better able to (1) reconcile different fuel reduction approaches with its other stewardship objectives, (2) identify changes in incentives and statutorily defined contracting procedures that will better facilitate the accomplishment of fuel reduction goals, and (3) determine the associated costs of different options for doing so.
All of these elements will be essential in the more cohesive agency strategy needed to address the problem of catastrophic wildfires now threatening the sustainability of multiple national forest uses and the security of human life, health, property, and infrastructure in communities near those forests. However, because of concerns about the agency's accountability, we believe that the credibility of its efforts to devise such a strategy hinges on the establishment of a clearly understood schedule for expeditiously developing and implementing this strategy. We recommend that the Secretary of Agriculture direct the Chief of the Forest Service to develop, and formally communicate to the Congress, a cohesive strategy for reducing and maintaining accumulated fuels on national forests of the interior West at acceptable levels. We further recommend that this strategy include (1) specific steps for (a) acquiring the data needed to establish meaningful performance measures and goals for reducing fuels, (b) identifying ways of better reconciling different fuel reduction approaches with other stewardship objectives, and (c) identifying changes in incentives and statutorily defined contracting procedures that would better facilitate the accomplishment of fuel reduction goals; (2) a schedule indicating dates for completing each of these steps; and (3) estimates of the potential and likely overall and annual costs of accomplishing this strategy based on different options identified in the strategy as being available for doing so. The following are GAO's comments on the Forest Service's letter dated March 22, 1999. 1. Our report notes that there is a lack of consensus on what constitutes forest health. We have added language in our report to incorporate the agency's observation that greater community awareness and acceptance of needed actions are important elements in implementing a successful fuel reduction strategy. Moreover, we believe that the agency, through improving the cohesiveness of its strategy, may provide communities and those concerned about forest health with a clearer basis for both reaching consensus on and accepting needed actions. 2. We do not presume that there is a broad scientific consensus surrounding appropriate methods or techniques for dealing with fuel build-up or agreement on the size of the areas where, and the time frames when, such methods or techniques should be applied. Our report recognizes that the agency is currently pursuing better answers to these questions through the Joint Fire Science Program and other efforts, and we have added clarifying language in our report to incorporate the agency's observation. 3. We agree that the other forest management activities, identified by the Forest Service as contributing to overall forest health and as having an impact on acres at risk of wildfire, should not be overlooked and can be important elements in the agency's more cohesive strategy. For instance, our report notes important interrelationships that the agency must consider when balancing fuel reduction goals with other stewardship objectives, such as preserving air and water quality. 4. We agree that expanding the Forest Service's fuel reduction program over the next few decades could significantly reduce the risk of high-intensity fire and allow for the successful suppression of wildland fire in areas where fuels have been reduced.
However, as noted in our report, the agency's planned expansion of this program is not on schedule, and its fiscal year 2000 budget request, compared with its fiscal year 1999 appropriation, will provide for reducing fuel on fewer acres, rather than on more, as originally planned. We believe this change demonstrates the need for the agency to better identify estimates of potential and likely costs to accomplish a more cohesive strategy as recommended in our report. 5. We did not evaluate the relationship between specific funding levels for the Forest Service's initial responses to wildfires and the resulting likelihood of acreage lost to catastrophic wildfire. However, our report notes that the agency's fiscal year 2000 budget request will only maintain the current funding level for preparedness, not increase the funding for it. According to the agency, maintaining the current funding level will increase the risks of injuries and loss of life to the public and firefighters next year. We believe this statement further supports our recommendation that the agency needs to better identify estimates of potential and likely costs to accomplish a more cohesive strategy. 6. Our report notes that fuel reduction is not required on every national forest acre currently at high risk of catastrophic wildfire and that blocks where fuels have been reduced, called fire breaks or fuel breaks, may prevent fires from reaching high intensity or large size. However, we also note that the Forest Service has not yet developed a general strategy for constructing such fire breaks, nor for implementing any alternative strategic approach that would allow it to systematically assign priorities to areas and thus safely avoid reducing fuels on some of them. Until the agency develops such a strategy, it has no basis for eliminating any current high-risk areas from its fuel reduction efforts, nor can it adequately evaluate the relative effectiveness or efficiency of its current efforts. 7. We agree that some of the acres at high risk will burn in the interior West, thereby reducing fuels on them and lowering the total number of acres remaining at high risk. However, as we point out in our report, in many areas fuels will have to be reduced repeatedly. Moreover, as our report points out, the concern about catastrophic wildfires is not just how many acres they burn, but where those acres are located. In particular, future catastrophic wildfires that (1) burn many acres in the wildland/urban interface, taking lives and damaging human health, property, or infrastructure; (2) destroy critical terrestrial or aquatic habitat; or (3) needlessly destroy timber available for harvest should be considered as part of the problem rather than as contributions to reducing it. 8. We agree with the agency that it is important to maintain current satisfactory conditions in regions other than the frequent-fire forests of the interior West, including the Forest Service's Southern Region, so that conditions in these areas do not also become hazardous to resources or people, as many areas in the interior West are now. We also do not question the level of funding for fuel reduction efforts in these other regions. Our report states, instead, that the acres in these other regions on which it plans to maintain the current lower fuel levels must be taken into account when determining the adequacy of the agency's plans to reduce fuels on a total of 3 million acres nationally each year.
9. We do not disagree that the Joint Fire Science Program's projects are currently planned to be completed in 3 to 5 years. Instead, our report notes an agency official's estimate of how long they may actually take. In our view, the program's experience to date with mapping fire risks suggests that tasks under this program may, in fact, take longer than currently planned. This task, which was originally scheduled for completion in November 1998, is now, according to the agency's comments on our draft report, not projected to be completed until September 2000. Finally, we note that the plan adopted in 1998 for carrying out the program provides for members of its governing board to serve for 10 years. 10. We did not assess the extent to which the increase in the acreage burned in the interior West over the last few years can be partly attributed to more flexible suppression strategies. Nor do we question whether such strategies may be an important element in the agency's overall strategy to reduce fuels. However, regardless of the reasons for the increases in the acreage burned, substantially more acres are now burning unintentionally, with increasing costs and threats to resources and people. The agency has on several occasions concurred that this is a serious problem. For instance, as we note in our report, the agency has stated in its fiscal year 2000 budget request that the risks of injuries and loss of life to the public and firefighters will increase next year. Finally, we agree that these more flexible suppression strategies are not acreage driven, but hazard based. However, as we point out in our report, current incentives in the agency's main fuel reduction program are acreage driven, not hazard based, and incentives in its timber program are largely driven by commercial rather than safety considerations. Our report urges the development of a more cohesive fuel reduction strategy that addresses ways to better integrate these incentives around hazard reduction. 11. The Forest Service is correct in pointing out that the level of fuels was not specifically identified as a cause of the fatalities in the investigative reports on this fire and that the predominant vegetation type was not, in this case, long-needle pine. However, according to the investigative reports we reviewed, this was a very large, intense fire that spread to the canopy (i.e., crowns of the trees), and highly flammable and hazardous fuels were a significant contributor to the fatalities. While our report notes that long-needle pines such as ponderosa are a predominant forest type at lower elevations in the interior West, the example serves to point out that catastrophic wildfire hazards on national forests of the interior West are not limited to this forest type. Our report considers all wildfire hazards in the region and is not limited to fire hazards associated with any specific type of tree stand or vegetation. Our purpose in citing this example was simply to demonstrate that large, intense fires occurring on the interior Western national forests can be life threatening, irrespective of all of their causes and sources. We have added language to the report to reflect the agency's comment about the fire and clarify the scope of our report. Major contributors to this report included Ryan T. Coles, Susan L. Conlon, Charles S. Cotton, Elizabeth R. Eisenstadt, Lynne L. Goldfarb, Brent L. Hutchison, Chester M. Joy, Hugo W. Wolter, Jr., and Doreen Stolzenberg Feldman.
Pursuant to a congressional request, GAO provided information on: (1) the extent and seriousness of forest-health-related problems in national forests in the interior West; (2) the status of efforts by the Forest Service to address the most serious of these problems; and (3) barriers to successfully addressing these problems and options for overcoming them. GAO noted that: (1) the most extensive and serious problem related to the health of national forests in the interior West is the overaccumulation of vegetation, which has caused an increasing number of large, intense, uncontrollable, and catastrophically destructive wildfires; (2) according to the Forest Service, 39 million acres in national forests in the interior West are at high risk of catastrophic wildfire; (3) past management practices, especially the Forest Service's decades-old policy of putting out wildfires in the national forests, disrupted the historical occurrence of frequent low-intensity fires, which had periodically removed flammable undergrowth without significantly damaging larger trees; (4) because this normal cycle of fire was disrupted, vegetation has accumulated, creating high levels of fuels for catastrophic wildfires and transforming much of the region into a tinderbox; (5) the number of large wildfires, and of acres burned by them, has increased over the last decade, as have the costs of attempting to put them out; (6) these fires not only compromise the forests' ability to provide timber, outdoor recreation, clean water, and other resources, but they also pose increasingly grave risks to human health, safety, property, and infrastructure, especially along the boundaries of forests, where population has grown significantly in recent years; (7) during the 1990s, the Forest Service began to address the unintended consequences of its policy of putting out wildfires; (8) in 1997, it announced its goal to improve forest health by resolving the problems of uncontrollable, catastrophic wildfires in national forests by the end of fiscal year 2015; (9) to accomplish this goal, it has: (a) initiated a program to monitor forest health; (b) refocused its wildland fire management program to increase the number of acres on which it reduces the accumulated vegetation that forms excessive fuels; and (c) restructured its budget to better ensure that funds are available for reducing these fuels; (10) Congress has supported the Forest Service's efforts by increasing the funds for reducing fuels and authorizing a multiyear program to better assess problems and solutions; (11) the Forest Service has not yet developed a cohesive strategy for addressing several factors that present significant barriers to improving the health of the national forests by reducing fuels; and (12) many acres of national forests in the interior West may remain at high risk of uncontrollable wildfire at the end of fiscal year 2015.
Medicare payment policy has moved from retrospective, cost-and-charge-based reimbursements to prospective systems and fee schedules designed to contain cost growth. The August 1997 passage of BBA dramatically changed the existing paradigm, setting Medicare on a course toward a more competitive and consumer-driven model. HCFA, the agency charged with administering the program, must accomplish this transition while continuing to oversee the processing of about 900 million claims annually. BBA contained over 350 separate Medicare and Medicaid mandates, the majority of which apply to the Medicare program. The Medicare mandates are of widely varying complexity. Some, such as the Medicare+Choice expansion of beneficiary health plan options and the implementation of PPSs for SNFs, home health agencies, and hospital outpatient services, are extraordinarily complex and have considerable budgetary and payment control implications. Others, such as updating the conversion factor for anesthesia payments, are relatively minor. Although most implementation deadlines are near term—over half had 1997 or 1998 deadlines—several are not scheduled to be implemented until 2002. Overall, BBA required HCFA to implement about 240 unique Medicare changes. Since August 1997, about three-quarters of the mandates with a July 1998 deadline have been implemented. HCFA's recent publication of the Medicare+Choice and SNF PPS regulations exemplifies the progress the agency has made in implementing key mandates. The remaining 25 percent missed the BBA implementation deadline, including establishment of a quality-of-care medical review process for SNFs and a required study of an alternative payment system for certain hospitals. It is clear that HCFA will continue to miss implementation deadlines as it attempts to balance the resource demands generated by BBA provisions with other competing objectives. In particular, the need to modernize its multiple automated claims processing and other information systems, a task complicated by the Year-2000 computer challenges, is competing with other ongoing responsibilities. HCFA has proposed that the Department of Health and Human Services seek legislative relief by delaying implementation of certain BBA provisions—those requiring major computer system changes that also coincide with Year-2000 computer renovations. According to HCFA's computer contractor, simultaneously pursuing both BBA implementation and Year-2000 system changes risks the failure of both activities and threatens HCFA's highest priority—uninterrupted claims payments. The contractor advised HCFA to seek relief from competing requirements, which could allow the agency to focus instead on Year-2000 computer system renovations. The BBA provisions to be delayed by the computer renovations include updates to the October 1999 inpatient hospital PPS rate and the January 2000 physician fee schedule, the hospital outpatient PPS, limits on outpatient therapy services, and billing changes for SNFs. The appendix lists other BBA mandates that are being postponed. Among them is SNF consolidated billing, under which all services provided to a SNF resident would be billed by the SNF under the new PPS rates, which cover both services previously billed by the SNF and by certain outside providers. Without this provision, it may be more difficult to adequately monitor whether bills for SNF residents are being submitted appropriately. BBA establishes a new Medicare+Choice program, which will significantly expand the health care options that can be marketed to Medicare beneficiaries beginning in the fall of 1998.
In addition to traditional Medicare and HMOs, beneficiaries will be able to enroll in preferred provider organizations, provider-sponsored organizations, and private fee-for-service plans. Medical savings accounts will also be available to a limited number of beneficiaries under a demonstration program. The goal is a voluntary transformation of Medicare via the introduction of new plan options. Capitalizing on changes in the delivery of health care, these new options are intended to create a market in which different types of health plans compete to enroll and serve Medicare beneficiaries. Recognizing that consumer information is an essential component of a competitive market, BBA mandated a national information campaign with the objective of promoting informed plan choice. From the beneficiary's viewpoint, information on available plans needs to be (1) accurate, (2) comparable, (3) comprehensible, and (4) readily accessible. Informed beneficiary choice will be critical since BBA phases out the beneficiary's right to disenroll from a plan on a monthly basis and moves toward the private sector practice of annual reconsideration of plan choice. The mandated campaign includes comparative data on the available health plan choices. This publicity campaign will support what is to become an annual event each November—an open enrollment period in which beneficiaries may review the options and switch to a different health plan. As in the past, health plans will continue to provide beneficiaries with marketing information that includes a detailed description of covered services. In fact, HCFA comparative summaries will refer beneficiaries to health plans for more detailed information. HCFA is taking a cautious approach and testing the key components of its planned information campaign. This caution is probably warranted by the important role played by information in creating a more competitive Medicare market and by the agency's inexperience in this type of endeavor. In March 1998, the agency introduced a database on the Internet called “Medicare Compare,” which includes summary information on health plans' benefits and out-of-pocket costs. A toll-free telephone number will be piloted in five states—Arizona, Florida, Ohio, Oregon, and Washington—and gradually phased in nationally during 1999. Because of some concerns about its readability, HCFA has also decided to pilot a new beneficiary handbook in the same five states instead of mailing it to all beneficiaries this year. The handbook, a reference tool with about 36 pages, will describe the Medicare program in detail, providing comparative information on both Medicare+Choice plans as well as the traditional fee-for-service option. For beneficiaries in all other states, HCFA will send out a five- to six-page educational pamphlet that explains the Medicare+Choice options but contains no comparative information. This schedule will allow HCFA to gather and incorporate feedback on the effectiveness of and beneficiary satisfaction with the different elements of the information campaign into its plans for the 1999 open enrollment period. We have previously reported on beneficiaries' lack of comparative information on Medicare HMOs. Among other things, we recommended that HCFA produce plan comparison charts and require plans to use standard formats and terminology in benefit descriptions. In developing comparative information for Medicare Compare, HCFA attempted to use information submitted by health plans as part of the contracting process.
Like beneficiaries, HCFA had difficulty reconciling information from different HMOs because it was not standardized across plans. HCFA's Center for Beneficiary Services, the new unit responsible for providing information to Medicare enrollees, has been forced to recontact HMOs and clarify benefit descriptions. Recognizing that standardized contract information would reduce the administrative burden on both health plans and the different HCFA offices that use the data, the agency has accelerated the schedule for requiring standard formats and language in contract benefit descriptions. Although originally targeted for 2001, the new timetable calls for contract standardization beginning with submissions due in the spring of 1999. If available on schedule, standardized contracts should facilitate the production of comparative information for the introduction of the annual open enrollment period in November 1999. In reviewing plans' marketing materials in Tampa, Florida, for example, we found that some benefit summaries did not disclose that the use of nonformulary drugs may result in substantially higher out-of-pocket costs. Only five of eight Tampa plans mention mammograms in their benefit summaries—even though all plans covered mammograms. Most plans listed mammograms under the “preventive service” benefit category. One plan, however, included them under hospital outpatient services. Consistent presentation is important because beneficiaries may rely on plans' benefit summaries when comparing coverage and out-of-pocket cost information. Federal employees and retirees can readily compare benefits among health plans in the Federal Employees Health Benefits Program because the Office of Personnel Management requires that plan brochures follow a common format and use standard terminology. It is encouraging that HCFA wants to accelerate a similar requirement for Medicare+Choice plans. In the fall of 1999, HCFA expects to require health plans to use standard formats and terminology to describe covered services in the summary-of-benefits portion of the marketing materials. Comparative data on quality and performance are a key component of the information campaign mandated by BBA and an essential underpinning of quality-based competition. Recognizing that the measurement and reporting of such comparative data is a “work in progress,” the act directed broad distribution of such information as it becomes available. Categories of information specifically mentioned by BBA include beneficiary health outcomes and satisfaction, the extent to which health plans comply with Medicare requirements, and plan disenrollment rates. While disenrollment rates could be prepared for publication in a matter of months, other types of quality-related information have accuracy or reliability problems or are still being developed. For example, the reliability of the Health Plan Employer Data and Information Set (HEDIS) data that Medicare plans have begun reporting has been undermined by immature health plan information systems and ambiguities in the HEDIS measurement specifications. Though committed to making the HEDIS information available as quickly as possible, HCFA emphasized that its premature release would be unfair to both plans and beneficiaries. Finally, efforts have been under way for some time to develop measures that actually demonstrate the quality of the care delivered—often referred to as “outcome” measures. The current HEDIS measures, however, look at how frequently a health plan delivers specific services, such as immunizations, not at outcomes. The development and dissemination of reliable health outcome measures is a much more complicated task and remains a longer-term goal. Before passage of BBA, HCFA had funded a survey to measure and report beneficiaries' satisfaction with their HMOs.
For example, Medicare enrollees were asked how easy it was to gain access to appropriate care and how well their physicians communicated with them about their health status and treatment options. HCFA plans to make the survey results available on its Medicare Compare Internet site this fall and to include the data in mailings to beneficiaries during the fall 1999 information campaign. We believe that the usefulness of HCFA's initial satisfaction survey for identifying poor-performing plans is limited because it surveyed only those individuals satisfied enough with their plan to remain enrolled for at least 12 months. HCFA is planning a survey of those who disenrolled, which could help distinguish among the potential causes of high disenrollment rates in some plans, such as quality and access issues or beneficiary dissatisfaction with the benefit package. Among Medicare HMOs in Houston, Texas, for example, the highest disenrollment rate was nearly 56 percent, while the lowest was 8 percent. The large range in disenrollment rates among HMOs suggests that this single variable could be a powerful tool in alerting beneficiaries about potentially significant differences among plans and the need to seek additional information before making a plan choice. Questions have been raised by health plan representatives and others about the estimated cost of the information campaign. The campaign is to be financed primarily from user fees—that is, an assessment on participating health plans. We are conducting a review of HCFA's information campaign plans at your request and that of the Senate Committee on Finance. Our work began recently, and since then HCFA has modified its plans significantly, affecting the estimated costs of different components. While we cannot yet make an overall assessment, it is clear that the operation of the toll-free number is the most expensive component and, because of a lack of prior experience, is the most difficult cost to estimate. The cost of the toll-free number comprises 44 percent of the total information campaign budget. HCFA projects fiscal year 1998 costs of $50.2 million to support setup as well as operations during fiscal year 1999. All but $4 million will come from user fees collected from existing Medicare HMOs. For fiscal year 2000, operations costs are projected to grow to $68 million. Given this investment, it is important that the toll-free number meet beneficiaries' reasonable needs or expectations. However, until HCFA actually gains experience with the toll-free number, it has no firm basis to judge either the duration of the calls or the type of information beneficiaries will find useful. The phased implementation of the toll-free numbers should give HCFA a better idea of what beneficiaries want and may necessitate adjustments to current plans. Ultimately, the design of this and other aspects of the information campaign should be driven less by cost and more by how effective they are in meeting beneficiary needs and contributing to the intended transformation of the Medicare program. Consequently, we will be looking at (1) whether the estimated cost of the planned activities is appropriate and efficient in the near term, and (2) whether, over the longer term, the impact and effectiveness of these activities might be increased. On July 1, 1998, HCFA began phasing in a Medicare PPS for SNFs, as directed by BBA. Under the new system, facilities receive a payment for each day of care provided to a Medicare-eligible beneficiary (known as the per diem rate).
This rate is based on the average daily cost of providing all Medicare-covered SNF services, as reflected in facilities' 1995 costs. Since not all patients require the same amount of care, the per diem rate is “case-mix” adjusted to take into account the nature of each patient's condition and expected care needs. Previously, SNFs were paid the reasonable costs they incurred in providing Medicare-allowed services. There were limits on the costs that were reimbursed for the routine portion of care, that is, general nursing, room and board, and administrative overhead. Payments for capital costs and ancillary services, such as rehabilitation therapy, however, were virtually unlimited. Cost-based reimbursement is one of the main reasons the SNF benefit has grown faster than most components of the Medicare program. Because providing more services generally triggered higher payments, facilities have had no incentive to restrict services to those necessary or to improve their efficiency. Under the new system, facilities that can care for beneficiaries for less than the case-mix adjusted payment will benefit financially. Those with costs higher than the per diem amount will be at risk for the difference between costs and payments. The PPS for hospitals is credited with controlling outlays for inpatient hospital care. Similarly, the Congressional Budget Office (CBO) estimates that over 5 years the SNF PPS could save $9.5 billion compared with what Medicare would have paid for covered services. Although HCFA met the deadline for issuing the implementing regulations for the new SNF per diem payment system, features of the system and inadequate data used to establish rates could compromise the anticipated savings. As noted in previous testimony, design choices and data reliability are key to implementing a successful payment methodology. We are concerned that the system's design preserves the opportunity for providers to increase their compensation by supplying potentially unnecessary services. Furthermore, the per diem rates were computed using data that overstate the reasonable cost of providing care and may not appropriately reflect the differences in costs for patients with different care needs. In addition, as a part of the system, HCFA's regulation appears to have initiated an automatic eligibility process—that is, a new means of determining eligibility for the Medicare SNF benefit—that could expand the number of beneficiaries who will be covered and the length of covered stays. The planned oversight is insufficient, increasing the potential for these aspects of the regulations to compromise expected savings. Immediate modifications to the regulations and efforts to refine the system and monitor its performance could ameliorate our concerns. The system relies largely on the amount of services a patient is expected to use, particularly rehabilitation therapy (physical, occupational, or speech therapy), to assign patients to the different case-mix groups. Categorizing patients on the basis of expected service use conflicts with a major objective of a PPS—to break the direct link between providing services and receiving additional payment. A SNF has incentives to reduce the costs of the patients in each case-mix group. Because the groups are largely defined by the services the patient is to receive, a facility could do this by providing the minimum level of services that characterize patients in that group (see table 1). This would reduce the average cost for the SNF's patients in that case-mix group, but not lower Medicare payments for these patients.
For patients needing close to the maximum amount of therapy services in a case-mix group, facilities could maximize their payments relative to their costs by adding more therapy so that the beneficiary was categorized in the next higher group. An increase in daily therapy from 140 to 144 minutes, for example, would change the case-mix category of a patient with moderate assistance needs from the “very high” to the “ultra high” group, resulting in a per diem payment that was about $60 higher. By thus manipulating the minutes of therapy provided to its rehabilitation patients, a facility could lower the costs associated with each case-mix category and increase its Medicare payments. Rather than improve efficiency and patient care, this might only raise Medicare outlays. Patients could instead be classified on the basis of the care they need, using methods that are less susceptible to manipulation by a SNF. Nevertheless, being able to classify patients appropriately is critical to ensuring that Medicare can control its SNF payments and that SNFs are adequately compensated for their mix of patients. We are also concerned that the data underlying the SNF rates overstate the reasonable costs of providing services and may not appropriately reflect costs for patients with different care needs. The rates to be paid SNFs are computed in two steps. First, a base rate reflecting the average per diem costs of all Medicare SNF patients is calculated from 1995 Medicare SNF cost report data. This base rate may be too high, because the reported costs are not adequately adjusted to remove unnecessary or excessive costs. Second, a set of adjustors for the 44 case-mix groups is computed using information on the costs of services used by about 4,000 patients. This sample may simply be too small to reliably estimate these adjustors. Most of the cost data used to set the SNF prospective per diem rates were not audited. At most, 10 percent of the base year—1995—cost reports underwent a focused audit in which a portion of the SNFs' expenses were reviewed. Of particular concern are therapy costs, which are likely inflated because there have been no limits on cost-based payments. HCFA staff report that Medicare has been paying up to $300 per therapy session. These high therapy costs were incorporated in the PPS base rates. Even if additional audits were to uncover significant inappropriate costs, HCFA maintains that it has no authority to adjust the base rates after the July 1, 1998, implementation of the new payment system. The adjustors for each category of patients are based on data from two 1-day studies of the amount of nursing and therapy care received by fewer than 4,000 patients in 154 SNFs in 12 states. Almost all Medicare patients will be in 26 of the 44 case-mix groups. For about one-third of these 26 groups, the adjustors are based on fewer than 50 patients. Given the variation in treatment patterns among SNFs, such a small sample may not be adequate to estimate the average resource costs for each group. As a result, the case-mix adjusted rates may not vary appropriately to account for the services facilities are expected to provide—rates will be too high for some types of patients and too low for others. Medicare's SNF benefit is for enrollees who need daily skilled care on an inpatient basis following a minimum 3-day hospitalization. Before implementation of the prospective per diem system, SNFs were required to certify that each beneficiary met these criteria. With the new payment system, the method for establishing eligibility for coverage will also change.
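The therapy-minute threshold effect described above can be made concrete with a small sketch. The 144-minute boundary and the roughly $60 per diem difference between the “very high” and “ultra high” groups come from this statement; the other group boundaries and dollar amounts are hypothetical placeholders.

```python
# Illustrative sketch of the therapy-minute threshold effect described
# above. The 144-minute boundary and the ~$60 per diem difference come
# from the testimony; other boundaries and amounts are hypothetical.
CASE_MIX_GROUPS = [
    # (minimum daily therapy minutes, group label, assumed per diem)
    (144, "ultra high", 360.0),
    (100, "very high",  300.0),  # about $60 below "ultra high", per the text
    (45,  "high",       255.0),  # hypothetical
    (0,   "medium/low", 210.0),  # hypothetical
]

def classify(minutes_per_day):
    """Assign a rehabilitation patient to a case-mix group by daily therapy minutes."""
    for floor, label, per_diem in CASE_MIX_GROUPS:
        if minutes_per_day >= floor:
            return label, per_diem

# Four extra minutes of daily therapy move the same patient up a group:
print(classify(140))  # ('very high', 300.0)
print(classify(144))  # ('ultra high', 360.0)
```

Because the facility itself controls the minutes of therapy delivered, it effectively controls the input that determines its own payment rate, which is the incentive problem described above.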
Facilities will assign each patient to one of the case-mix groups on the basis of an assessment of the patient’s condition and expected service use, and the facility will certify that each patient is appropriately classified. Beneficiaries in the top 26 of the 44 case-mix groups will automatically be deemed eligible for SNF coverage. If facilities do not continue to assess whether beneficiaries meet Medicare’s coverage criteria, “deeming” could represent a considerable new cost to the program. Some individuals who are in one of these 26 deemed categories may only require custodial or intermittent skilled care, but HCFA’s regulations appear to indicate that they could still receive Medicare coverage. Medical review nurses who work with HCFA payment contractors indicated in interviews that some patients included in the 26 groups would not necessarily need daily skilled care. This may be particularly true at a later point in the SNF stay, since SNF coverage can only begin after a 3-day hospitalization. Individuals with certain forms of paralysis or multiple sclerosis who need extensive personal assistance may also need daily skilled care immediately following a hospital stay for pneumonia, for example. After a certain period, however, their need for daily skilled care may end, but their Medicare coverage will continue because of deeming. Similarly, certain patients with minor skin ulcers will be deemed eligible for Medicare coverage, whereas previously only those with more serious ulcers believed to require daily care were covered. Thus, more people could be eligible and Medicare could be responsible for longer stays unless HCFA is clear that Medicare coverage criteria have not been changed. Deeming eligibility would not be a problem if all patients in a case-mix group met Medicare’s coverage criteria. To redefine the patient groups in this way would require additional research and analysis. However, an immediate improvement would be for HCFA to clarify that Medicare will only pay for those patients that the facility certifies meet Medicare SNF coverage criteria. Whether a SNF patient is eligible for Medicare coverage and how much will be paid are based on a facility’s assessment of its patients. Yet, HCFA has no plans to monitor those assessments to ensure they are appropriate and accurate. In contrast, when Texas implemented a similar reimbursement system for Medicaid, the state instituted on-site reviews to monitor the accuracy of patient assessments and to determine the need for training assessors. In 1989, the first year of its system’s operation, Texas found widespread over-assessment. Through continued on-site monitoring, the error rate has dropped from about 40 percent, but it still remains at about 20 percent. The current plans for collecting patient assessment information actually discourage rather than facilitate oversight. A SNF will transmit assessment data on all its patients, not just those eligible for Medicare coverage, to a state agency that will subsequently send copies to HCFA. However, the claim identifying the patient’s category for Medicare payment is sent to the HCFA claims contractor that pays the bill. At the time it is processing the bill, the claims contractor will not have access to data that would allow confirmation that the patient’s classification matches the assessment. To some extent, the implementation of the SNF prospective per diem system reduces the opportunities for fraud in the form of duplicate billings or billing for services not provided. 
Since a SNF is paid a fixed per diem rate for most services, it would be fraudulent to bill separately for services included in the SNF per diem. Yet, the new system opens opportunities to mischaracterize patients or to assign them to an inappropriate case-mix category. Also, as was the case with the former system, methods to ensure that beneficiaries actually receive required services could be strengthened. As with the implementation of any major payment policy change, HCFA should increase its vigilance to ensure that fraudulent practices discovered in nursing homes, similar to problems noted in our prior work, do not resurface.

Given the sheer size of HCFA’s BBA workload alone, implementation delays were probably inevitable. And now, HCFA has been advised by its contractor that its highest priority—uninterrupted claims processing through the timely completion of Year-2000 computer renovations—may be jeopardized by some BBA mandates that also require computer system changes. Though HCFA is implementing what will become an annual information campaign associated with Medicare+Choice, it has little experience in planning and coordinating such an undertaking. The ability of the campaign to provide accurate, comparable, comprehensive, and readily accessible information will help to determine the success of the hoped-for voluntary movement of Medicare beneficiaries into less costly, more efficient health care delivery systems. While BBA computer system-related delays may jeopardize some anticipated program savings, slower Medicare expenditure growth is also at risk because of weaknesses in the implementation of other mandates. HCFA could take short-term steps to correct deficiencies in the new SNF PPS. However, longer-term research is needed to implement a payment system that fully realizes the almost $10 billion in savings projected by CBO. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or Members of the Subcommittee may have.

[Appendix excerpt: selected BBA mandates (continued)]
- Collection of non-inpatient encounter data from plans
- SHMO: Plan for integration of part C and SHMO
- Medicare subvention: Project for military retirees
- Reporting and verification of provider identification numbers (employer identification numbers and Social Security numbers)
- Maintaining savings from temporary reductions in capital payments for PPS hospitals
- SNF consolidated billing for part B services
- Payment update for hospice services
- Update to conversion factor 1/1/99
- Implementation of resource-based practice expense RVUs
- Implementation of resource-based malpractice RVUs
- Prospective payment fee schedule for ambulance services
- Application of $1,500 annual limit to outpatient rehabilitation therapy services
GAO discussed the Health Care Financing Administration's (HCFA) implementation of Medicare provisions contained in the Balanced Budget Act of 1997 (BBA), focusing on: (1) an overview of how HCFA's implementation has progressed since GAO's earlier testimony; (2) the efforts to inform Medicare beneficiaries about the expanded health plan choices available to them in 1999, commonly referred to as the information campaign; and (3) the prospective payment system (PPS) for skilled nursing facilities (SNF), which began a 3-year phase-in in July 1998. GAO noted that: (1) HCFA is making progress in meeting the legislatively established implementation schedules; (2) since the passage of BBA in August 1997, almost three-fourths of the mandates with a July 1998 deadline have been implemented; (3) however, HCFA officials have acknowledged that many remaining BBA mandates will not be implemented on time; (4) HCFA maintains that these delays will have a minimal impact on anticipated Medicare program savings; (5) given the concurrent competition for limited resources and the differing importance and complexity of the many BBA mandates, the success or failure of HCFA's implementation efforts should not be judged solely on meeting deadlines; (6) rather, any assessment should consider whether the agency is meeting congressional objectives while taking a reasoned management approach to identifying critical BBA tasks, keeping them on track, and integrating them with other agency priorities; (7) complying with the BBA mandate to conduct an information campaign that provides beneficiaries with the tools to make informed health plan choices poses significant challenges for HCFA and participating health plans; (8) in implementing the Medicare+Choice program, HCFA must now assemble the necessary comparative information about these options and find an effective means to disseminate it to beneficiaries; (9) a parallel goal of the information campaign is to give beneficiaries information about the quality and performance of participating health plans to promote quality-based competition among plans; (10) HCFA has accelerated its goals for obtaining standardized information from plans, and GAO believes health plan disenrollment rates provide an acceptable short-term substitute measure of plan performance; (11) the campaign is to be financed primarily from user fees; (12) HCFA has met the July 1, 1998, implementation date for phasing in a new payment system for SNFs; (13) GAO is concerned, however, that payment system design flaws and inadequate underlying data used to establish payment rates may compromise the system's ability to meet the twin objectives of slowing spending growth while promoting the delivery of appropriate beneficiary care; (14) in the short term, the new payment system could be improved if HCFA clearly stated that SNFs are responsible for ensuring that the claims they submit are for beneficiaries who meet Medicare coverage criteria; and (15) in the longer term, further research to improve the patient grouping methodology and new methods to monitor the accuracy of patient assessments could substantially improve the performance of the new payment system.
The United States and China have cooperated for over 35 years on science and technology initiatives. In 1979, the two countries signed a bilateral science and technology agreement that has served as an umbrella agreement for subsequent bilateral environment and energy initiatives. In 2008, the countries established the Ten Year Framework for Cooperation on Energy and Environment. This framework was intended to facilitate the exchange of information and best practices to develop solutions to the environment and energy challenges both countries face. The framework includes some action plans related to clean energy, such as plans for clean, efficient, and secure electricity; clean and efficient transportation; and energy efficiency.

According to staff from think tanks and business associations and other individuals knowledgeable about U.S.-China clean energy cooperation that we interviewed, U.S. cooperation with China on clean energy could yield benefits such as building trust between the countries, helping both countries advance their efforts to meet environmental challenges, and creating opportunities for U.S. businesses in China. According to these individuals, the sharing of any IP through this cooperation is a potential risk due to possible IP theft.

In November 2014, the two countries’ presidents issued a U.S.-China Joint Announcement on Climate Change, which included targets for the United States to reduce greenhouse gas emissions and for China to reach peak carbon dioxide emissions around 2030 and to increase the share of non-fossil fuels in its energy consumption. The announcement also emphasized the countries’ commitment to a successful climate agreement at the United Nations Climate Change Conference in Paris in 2015, and the countries’ presidents reaffirmed this commitment in a U.S.-China Joint Presidential Statement on Climate Change in September 2015. In December 2015, more than 190 member states under the United Nations Framework Convention on Climate Change came together to adopt the Paris Agreement, which aims to hold the increase in the global average temperature to well below 2 degrees Celsius above pre-industrial levels through countries setting their own nonbinding targets for emissions reductions. In Paris, some of the countries that adopted the Paris Agreement also committed to initiatives to substantially increase public and private investment in climate change mitigation and adaptation activities. For example, through the Mission Innovation initiative, 20 countries, including the United States and China, will seek to double their governmental clean energy research and development investment over 5 years to accelerate clean energy innovation and make it widely affordable.

In addition to its bilateral cooperation with China, the United States cooperates bilaterally and multilaterally with other countries on initiatives related to climate change and clean energy. For example, the United States has a Clean Energy Dialogue with Canada to encourage the development of clean energy technologies to reduce greenhouse gases and combat climate change. Also, in 2009, the United States launched the Partnership to Advance Clean Energy with India, which is working to accelerate inclusive, low-carbon growth by supporting research and deployment of clean energy technologies.
Both the United States and China, along with 21 other countries and the European Commission, participate in the Clean Energy Ministerial, a high-level global forum to promote policies and programs that advance clean energy technology, to share lessons learned and best practices, and to encourage the transition to a global clean energy economy. The Clean Energy Ministerial is focused on improving energy efficiency worldwide, enhancing clean energy supply, and expanding clean energy access.

As U.S. cooperation with China on science and technology has expanded over time, China’s protection of IP rights has been a persistent concern. Although some IP issues have been addressed through dialogues, such as the U.S.-China Joint Commission on Commerce and Trade, according to a 2016 report by the U.S. Trade Representative, the uncertain IP environment is a leading concern for businesses operating in China. According to the report, the theft of trade secrets remains a particular concern, and conditions are unlikely to improve as long as those committing such theft continue to operate with relative impunity. The report also identified concerns about reports that Chinese government policies may have negative impacts on U.S. investors and their IP rights, including that Chinese regulations, rules, and other measures appear to require foreign companies to transfer or license their IP rights to domestic Chinese entities in order to do business in China. U.S. clean energy companies may face particular IP concerns with regard to doing business in China. China’s 5-year plan for economic and social development initiatives for 2016–2020 includes developing its environmental technology industry as a focus area. The U.S. Trade Representative has expressed concern that China’s innovation-related and other industrial policies may have negative impacts on U.S. exports or IP in particular industries by encouraging actions that pressure foreign IP rights holders to transfer those rights to domestic Chinese entities.

U.S. agencies obligated about $97 million for clean energy cooperation with China over the 8-year period of fiscal years 2008 through 2015. More than 90 percent of this money was obligated by three agencies: DOE, USTDA, and State. Two-thirds of the overall funding went to three key programs, which are the largest U.S.-China clean energy cooperative programs at each of these agencies. Almost half of the funding went to research and development, and the overall funding went to a variety of types of clean energy, with the majority for energy efficiency, clean coal, and clean vehicles.

In total over fiscal years 2008 through 2015, U.S. agencies obligated about $96.9 million for U.S. agencies, other public entities, and private sector participants to cooperate with Chinese entities on clean energy. DOE obligated the majority of this funding (71 percent). USTDA and State obligated another 13 percent and 11 percent, respectively. Two-thirds of the overall funding went to the largest related programs at each of these agencies (see fig. 1), which are the three key programs we focused on:

DOE’s CERC program: Through CERC, DOE obligated $47.5 million for teams of U.S. scientists and engineers to perform research and development with China on clean energy technologies. This collaboration is being pursued for reasons beyond attempting to address climate change, including to improve air quality, to lower energy costs, and to promote energy security. The work through fiscal year 2015 was separated into three tracks focused on clean coal, clean vehicles, and energy efficiency in buildings. DOE funds U.S. researchers in each of the tracks, while the Chinese government funds the Chinese researchers, with the intention that U.S. and Chinese researchers will be working together and learning from each other on all projects.

USTDA’s East Asia Program: Through its East Asia Program, USTDA obligated $12.5 million for U.S. companies to engage in various types of clean energy projects with China, such as feasibility studies, trade missions, and technical assistance. These projects have focused on a wide range of clean energy technologies related to smart grids, clean coal, and shale gas, among others.

State’s CCWG program: Through CCWG, State obligated $5.8 million for U.S. participation in cooperation and dialogue with China on clean energy. Through fiscal year 2015, CCWG’s clean energy cooperation has occurred through groups of projects bundled into six initiatives: (1) heavy-duty and other vehicles; (2) smart grids; (3) carbon capture, utilization, and storage; (4) energy efficiency in buildings and industry; (5) climate-smart and low-carbon cities; and (6) industrial boilers efficiency and fuel switching.

In addition to the U.S. federal funding obligated to these key programs, U.S. private sector participants also cover a share of the costs of some projects. For CERC and USTDA’s East Asia Program, such cost-share approximately doubles the overall U.S. funding spent on these projects. Agency officials have pointed out that cost-share from private companies shows the companies’ confidence in the programs’ ability to achieve results.

Seven other agencies also engaged in clean energy cooperation with China during this period. The U.S. Agency for International Development obligated $5.5 million for two technical assistance programs in China, one focused on energy efficiency in buildings and another focused on various forms of clean energy development, such as financing clean energy projects. The Departments of Commerce and Transportation and the Federal Energy Regulatory Commission each obligated between $1,800 and $32,000 for clean energy cooperation with China during this period, mostly for travel expenses to attend events or consultations in China for regulatory cooperation. The Departments of Agriculture and the Interior and the Environmental Protection Agency participated in clean energy cooperation with China using funding provided by DOE or State. In addition, some agencies provided funding for their own travel expenses to attend related events or to organize some related activities but were unable to identify the amounts of such funding specifically related to clean energy cooperation.

Annual U.S. obligations for U.S.-China clean energy cooperation varied in fiscal years 2008 through 2015. As figure 2 shows, large increases in annual obligations occurred in fiscal years 2010 and 2014, which were within the years following the launches of the CERC and CCWG programs, respectively.

As seen in figure 3, U.S. government funding provided to clean energy cooperation with China supported numerous types of activities.

Research and development: Almost half the funding was obligated by DOE for research and development to promote clean energy innovations, with most of that funding for CERC. According to DOE officials and CERC participants, through research and development under CERC in particular,
U.S. participants gain important benefits, such as the ability to speed progress in their research through collaboration with other U.S. researchers and leading Chinese scientists and engineers, and access to unique experimental platforms unavailable in the United States. In addition, U.S. companies obtain the opportunity to demonstrate the viability of their products in China’s large market.

Information exchange: Another 26 percent of the funding supported different types of information exchange, including forums for technical discussion and regulatory cooperation. For example, there are annual meetings between the United States and China organized to discuss energy efficiency, renewable energy, and clean coal, and there have been other forums held to discuss topics such as biofuels, smart grids, and smart cities. According to agency officials, these forums have multiple benefits for U.S. participants, including opportunities to highlight U.S. businesses, to work toward harmonizing codes and standards between China and the United States, and to share regulatory best practices.

Export promotion: Activities to promote U.S. exports received about 13 percent of the funding, all of which was from USTDA and included feasibility studies, trade missions, and some technical assistance. Feasibility studies help U.S. companies demonstrate the viability of their technologies to prospective Chinese buyers. Through trade missions, USTDA brings Chinese officials to the United States to observe the design, manufacture, and operation of U.S. clean energy technologies. Also for export promotion, USTDA provided technical assistance to Chinese officials through technical exchange, training, and standards development programs. USTDA funds all such projects with the intention of creating U.S. exports while supporting China’s efforts to reduce carbon emissions through the deployment of clean energy technologies.

Other types of activities: The remaining 12 percent of the funding went to other types of technical assistance and activities such as demonstration projects in China using advanced renewable energy technologies, surveys in northwestern China to identify sites for demonstrations of carbon capture and storage, a study of the shale gas potential in one Chinese province, strategy development, and training efforts to promote IP protection.

U.S. government funding supported cooperation on a wide range of types of clean energy technologies. As seen in figure 4, the largest portions of funding went to energy efficiency, clean coal, and clean vehicle technologies, which related to the three areas that the CERC program focused on through fiscal year 2015.

All three key programs have yielded some results. For example, CERC projects had led to the launch of 15 products by the end of 2015, including software for enhancing energy efficiency in buildings. In addition, by the end of fiscal year 2015, the 24 USTDA projects from its East Asia Program included in our review had generated about $230 million in U.S. exports, and the six CCWG initiatives we reviewed had trained 48 people on global climate change. The three programs also have tools to monitor performance, such as performance reports and program reviews. Generally, however, the three programs lack targets for their performance measures and USTDA does not have agency-wide targets.
Agency officials provided various explanations for why it was difficult for them to set targets, including that CERC was a new program when it started work in 2011 and that USTDA is a demand-based agency. However, establishing targets for these programs, and for USTDA agency-wide, could help managers generate and communicate more meaningful performance information that they could also learn from to identify performance shortfalls and pinpoint options for improvement.

Our analysis of the measures and documents used by the three programs to track performance at the program and lower levels shows that all of the programs have yielded some results, such as the number of products launched as a result of CERC, the dollar value of exports generated by USTDA’s projects, and the number of people trained on global climate change by CCWG initiatives.

DOE’s CERC Program. CERC was announced in 2009 and work began on projects in 2011. As seen in table 1, at the program level, CERC has yielded results for select key performance measures through the end of 2015. According to CERC officials, CERC’s key performance measures are the program’s most important and relevant measures. Beyond the results tracked for these program-level performance measures, each track has also achieved significant technical outcomes, according to DOE. For example, the clean coal track used data from a Chinese power plant’s carbon dioxide capture process to model that system in a U.S. power plant and found that it could cost significantly less to capture carbon dioxide than initially estimated. Also, the energy efficiency in buildings track developed and commercialized a moisture and air sealant that reduces energy consumption and is environmentally friendly, and the clean vehicles track developed techniques to model hybrid powertrains for vehicles that are now being applied to design a hybrid light truck.

USTDA’s East Asia Program. According to our analysis of USTDA project-level results, the 24 USTDA projects included in our review from the East Asia Program had generated about $230 million in U.S. exports through the end of fiscal year 2015. The exports generated as a result of these projects range from $160,000 from a feasibility study funded in fiscal year 2011 to almost $135 million from a feasibility study funded in fiscal year 2009. According to USTDA, these exports have supported about 1,500 U.S. jobs based on a Department of Commerce methodology for estimating U.S. jobs attributable to U.S. exports.

[Sidebar: USTDA trade missions brought Chinese officials to the United States to learn about U.S. shale gas and energy efficiency technologies, green buildings and city planning, and vehicle fuel economy standards. USTDA technical assistance helped inform Chinese officials through workshops on U.S. shale gas practices, the development of a model for smart grids in China, and assistance in developing Chinese smart grid standards that would be harmonized with U.S. standards.]

Thus far, the completed USTDA clean energy projects included in our review have resulted in a U.S. export multiplier of about 36—for every dollar obligated by USTDA the agency identified about $36 in U.S. exports generated. This compares with USTDA’s overall multiplier, for fiscal year 2015, of $74 in U.S. exports for every dollar in agency funding. USTDA projects have also yielded results for project-level performance measures showing the projects' development impact on recipient countries.
Examples of development impact results for the East Asia Program’s clean energy projects include projects that, individually, led to an estimated 200 permanent jobs in China, about 20 people in China receiving training and skill development, and 50 megawatts of new energy capacity, according to USTDA.

State’s CCWG Program. CCWG was announced in 2013, and work for the initiatives covered in our review began in either fiscal year 2014 or 2015. Based on our aggregation of the targets and results of the six CCWG initiatives we reviewed, through fiscal year 2015, CCWG has yielded some progress related to seven of eight performance measures CCWG uses to monitor performance for these six initiatives, as shown in table 2. Results data are reported to State by DOE, the Department of Transportation, and the Environmental Protection Agency, which implement CCWG activities. Generally, any initiative-level targets set for these performance measures were designed to be met in late fiscal year 2016 or fiscal year 2017, and State officials said that they expect to see more results near the end of the initiatives, because that is when more activities are planned. In addition, one initiative had a change in its scope of work that has delayed its activities. CCWG initiatives have also achieved additional outcomes not captured by their initiative-level performance measures. For example, the energy efficiency in buildings and industry initiative developed three partnerships between U.S. and Chinese companies that could reduce their buildings’ energy use by 25 to 51 percent.

[Sidebar examples of CCWG initiatives: climate-smart/low-carbon cities; energy efficiency in buildings and industry, which includes sharing best practices on energy performance contracting and energy efficiency upgrades; and industrial boilers efficiency and fuel switching, through which U.S. and Chinese researchers conducted an assessment of China’s coal-fired industrial boilers and plan to implement identified strategies to improve their efficiency.]

The performance measures tracked for these initiatives (table 2) include:
- Amount of investment leveraged in U.S. dollars, from private and public sources, for climate change
- Number of laws, policies, strategies, plans, or regulations addressing climate change (mitigation or adaptation) and/or biodiversity conservation officially proposed or adopted
- Number of people receiving training in global climate change
- Number of person hours of training completed in climate change
- Number of days of technical assistance in climate change provided to counterparts or stakeholders
- Projected greenhouse gas emissions reduced or avoided through 2030 from adopted laws, policies, regulations, or technologies related to clean energy (measured in metric tonnes carbon dioxide)

In addition, DOE and State officials said that their programs had achieved results related to the bilateral relationship with China that could not be quantified. For example, DOE and State officials said that the trust built between the United States and China on climate issues through the joint work and dialogue under CERC and CCWG, respectively, helped to enable the November 2014 U.S. and Chinese Presidents’ Joint Announcement on Climate Change. These officials said that this announcement helped catalyze the December 2015 Paris Agreement on climate change.

All three programs monitor progress toward their goals through a variety of tools, such as performance reports and program reviews.
Two of the programs also have performance measures reflecting their goals and collect data on some of those measures; however, none of the programs have targets for all their performance measures, which would enable them to compare the results that they have achieved with the results they had planned to achieve. To help manage program performance, linking goals to performance measures that are tracked against established targets is a leading practice for federal programs. In addition, USTDA did not have targets for most of its agency-wide performance measures. The GPRA Modernization Act of 2010 (GPRAMA) requires agencies to publish a performance plan that, among other things, contains performance measures with established targets that can be used to assess progress toward achieving those targets.

CERC monitors program performance through a combination of routine reports and specific data requests. DOE requires that each track submit quarterly reports. Although most information in these reports is provided at the project level, these reports also contain information on some of the program’s performance measures, such as measures related to intellectual property creation. However, DOE officials largely collected information regarding CERC’s program performance through specific data requests, such as to prepare for meetings or program-level reports. DOE officials emphasized that they focus their performance monitoring at the project level, where there have been more than 80 projects within the three CERC tracks. Each project follows a 10-point plan describing, among other elements, the research objective, work schedule with interim milestones, and deliverables and dates. Officials said that these plans are the basis for the information in the quarterly reports and are how CERC holds projects accountable for their performance. At the project level, performance monitoring also occurs through review meetings, such as reviews by industrial partners and DOE management, peer review of projects under one of the tracks by DOE's Office of Energy Efficiency and Renewable Energy, and other technical reviews by that office as well as DOE's Office of Fossil Energy for projects under the remaining two tracks. See appendix III for more information on CERC’s organization and reporting relationships.

DOE officials monitor CERC’s performance against four overarching goals that they said have been the objectives of CERC since it was established. Those goals are to accelerate development and deployment of clean energy technology; expand and strengthen bilateral engagement between the United States and China; protect intellectual property, encourage its development, and improve U.S.-China interactions regarding intellectual property; and facilitate market access to participating businesses to speed technology deployment. Officials said that they use 19 key performance measures, each linked to at least one of the four goals, to indicate progress toward those goals. However, during the first phase of CERC that ended in fiscal year 2015, none of these performance measures had targets. According to DOE officials, setting targets for CERC was difficult because it was a new program focused on a new model of collaborative research and development and they did not have enough information to create targets when it first started work in 2011. In addition, officials said it is difficult to know what a research and development program will accomplish before it begins.
However, according to Office of Management and Budget guidance, agencies managing any research and development program should develop targets to measure progress toward its goals. CERC is a high-visibility program for U.S.-China cooperation on clean energy, with the Secretary of Energy and his Chinese counterpart involved in annual program reviews. In addition, CERC is planning to start its second phase in 2016 and is in the process of developing new work plans for each track for this phase, according to DOE officials. Without the targets suggested by leading practices, managers may not have the information needed to make timely improvements to ensure that progress toward goals remains on track and to clearly communicate to DOE leadership how CERC is performing against its intended results.

According to USTDA officials, the agency monitors performance of the agency’s East Asia Program through annual meetings during which all levels of USTDA staff review USTDA’s regional programs by sharing lessons across the programs and discussing program results. USTDA examines program efficacy by reviewing information on funded activities, countries and regions, and industry sectors. Officials said that USTDA’s Office of Program Monitoring and Evaluations provides the East Asia Program and the agency with data that can be used to examine program performance and identify areas for improvement. USTDA assesses its projects while they are ongoing and soon after they have been completed. These assessments focus on several areas, including the implementation potential of the project; feedback from project participants; and project impacts, such as U.S. exports and the development impact on the recipient country. USTDA also uses an independent evaluator to evaluate almost all its projects. These evaluations occur on an annual basis to determine whether the projects have resulted in additional exports or development impacts until USTDA determines that no further results are likely to occur, which can take 5 years or longer. USTDA follows these same monitoring practices for all of its programs throughout the agency. See appendix III for more information on USTDA’s organization and reporting relationships.

USTDA has agency-wide goals used to evaluate its performance. Officials further stated that these goals flow down from the agency to the East Asia Program. The goals are to create U.S. jobs by supporting exports of U.S. goods and services for priority development projects in emerging economies, foster opportunities for U.S. small businesses through significant involvement in USTDA’s programs, and utilize evidence and evaluation data to guide agency programming decisions. Each agency-wide goal has associated performance measures. These same measures are also used to monitor the East Asia Program, according to USTDA officials. USTDA set a target for one of the agency-wide performance measures—to exceed the Small Business Administration’s benchmark of 23 percent of federal prime contracts awarded to U.S. small businesses—although USTDA officials said that they do not break down this target by program. None of the other agency-wide or program-level performance measures had targets, although the agency does set targets at the project level for some performance measures reflecting certain goals such as potential exports. USTDA officials said that there are several reasons why they do not have targets for most of their performance measures at the agency or program level.
Because USTDA is a demand-based agency, with projects generally proposed by industry, officials said that it is difficult to know what kinds of projects will be proposed and ultimately approved and funded. Furthermore, officials said that having a precise target for each performance measure could produce a perverse incentive by encouraging them to fund a project in order to meet a given target, even if they did not think it was the project most worthy of being funded. USTDA officials are also concerned that targets would reduce their flexibility in allocating USTDA’s resources. For example, officials said that they have strategic reasons for investing in certain countries, including responding to U.S. government policy priorities, even if those projects will not necessarily produce the most exports, and targets could limit their ability to fund those projects. However, as GAO has previously reported, if an agency has measurable, balanced performance measures that cover all an agency’s priorities, this should prevent an overemphasis on one or two priorities at the expense of others that may skew an agency’s performance. Without published agency-wide targets, as required by GPRAMA, it is unclear if agency managers have the information they need to determine if they are making sufficient progress toward achieving their goals, to identify performance shortfalls and options for improvement, and to provide Congress and the public with information needed to enhance their oversight and better ensure the agency’s accountability. Furthermore, without targets at the program level, as suggested by leading practices, managers risk not being able to use all the information generated from long-term project evaluations to inform timely improvements, such as in deciding which types of projects to fund in particular countries or regions. State officials said that they monitor the performance of CCWG as a program through two reports that focus on initiative-level activities: (1) internal reports on the status of the CCWG initiatives that are presented annually to the U.S. Special Envoy for Climate Change and his Chinese counterpart and (2) public reporting of CCWG’s annual performance by initiative to the chairs of the U.S.-China Strategic and Economic Dialogue (S&ED). The reporting to the S&ED is CCWG’s main monitoring mechanism, according to State officials. As shown previously in table 2, the six CCWG initiatives within our review have performance measures that are tracked at the initiative level. These initiatives are implemented by other federal agencies that are required to report semiannually to either State’s Bureau of Oceans and International Environmental and Scientific Affairs (OES), which oversees five of the CCWG initiatives under our review, or State’s Bureau of Energy Resources (ENR), which oversees one of the CCWG initiatives. These reports include information on the results achieved for each relevant State performance measure as well as a narrative describing the initiative’s key activities over the reporting period. According to State officials, OES and ENR use this information as inputs to standard Department of State reporting of performance by bureau or for the whole agency; however, CCWG does not use these performance measures to monitor performance at the program level. See appendix III for more information on CCWG’s organization and reporting relationships. 
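To illustrate the leading practice referenced throughout this section, in which performance measures are linked to established targets and actual results are compared against them, the following is a minimal sketch. The measure names and figures are hypothetical; the 48-person training figure echoes a CCWG result reported earlier, but the target shown is invented for illustration only.

```python
# Minimal sketch of tracking results against established targets.
# Measure names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    target: float  # planned result for the period
    result: float  # actual result reported for the period

    def status(self) -> str:
        """Compare the reported result against the established target."""
        if self.result >= self.target:
            return "target met"
        return f"shortfall of {self.target - self.result:g}"

measures = [
    Measure("People trained in global climate change", target=60, result=48),
    Measure("Days of technical assistance provided", target=20, result=25),
]

for m in measures:
    print(f"{m.name}: target {m.target:g}, actual {m.result:g} ({m.status()})")
# People trained in global climate change: target 60, actual 48 (shortfall of 12)
# Days of technical assistance provided: target 20, actual 25 (target met)
```

The point of the pattern is the comparison itself: without the target column, a manager can report results but cannot say whether performance is on track or falling short.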
CCWG works toward an overarching goal negotiated with China, which is to facilitate constructive U.S.-China cooperation and dialogue on climate change, but does not have program-level performance measures or targets, according to our review of CCWG documents and State officials. State officials said that CCWG does not have program-level performance measures or targets because the program is viewed as a cooperative effort with China, and it would be difficult to negotiate these elements with the Chinese government. State did not negotiate with the Chinese government the performance measures that it uses at the initiative level, nor does it share the resulting performance information with China, because these are for State’s internal use. In addition, because of the initiative-level and other reports, program-level performance information had not seemed necessary to monitor program progress, according to a State official. These reports show that CCWG is making progress from year to year, according to the State official who leads CCWG. However, without program-level performance measures with targets, as suggested by leading practices, CCWG program managers may lack an adequate framework to determine the extent to which the results measured at the initiative level are yielding expected program results and whether any program improvements are needed. The State official who leads CCWG agreed that program-level performance measures and targets could be helpful for learning about CCWG’s performance, particularly if the performance measures chosen reflected CCWG’s broad goal of working constructively with China on climate change.

DOE officials identified potential sharing of background IP—IP generated outside the scope of a research and development collaboration—and participants not having a clear plan for managing IP as risks to U.S. companies and researchers participating in CERC. DOE has taken steps to manage these risks, in part to enable participants to share background IP, which is important for valuable research and development, according to DOE officials. Although CERC participants reported no significant issues with DOE’s approach to managing IP risks, companies participating in CERC have been reluctant to share background IP as a part of CERC. As a result, U.S. CERC participants only shared background IP with Chinese organizations for 3 of the more than 80 projects that took place in the first 5-year phase of CERC. DOE officials acknowledged that companies participating in CERC face a tradeoff between the risk of sharing background IP and potential benefits, such as valuable research and development outcomes and gaining a market advantage through demonstrating projects in China.

When CERC was first launched in 2009, DOE officials identified potential sharing of background IP and participants not having a clear plan for managing IP as risks to U.S. companies and researchers participating in CERC. DOE officials said that background IP needs to be protected in order for participants to bring their most creative ideas forward to facilitate joint research and development, which is important to the CERC goal of accelerating development and deployment of clean energy technology. According to DOE officials, strong protection of IP encourages innovation by allowing researchers to build on discoveries through lawful means, which accelerates further innovation and enables collaboration.
DOE officials and almost all of the CERC participants we interviewed, including the lead organizations of the three tracks and several participants from each track, did not identify any other risks for CERC participants. DOE has taken steps to manage IP risk to CERC participants, in accordance with federal internal control standards for risk assessment. Specifically, DOE has managed these risks through the following means.

The IP Annex to the CERC Protocol: This part of the CERC founding agreement attempts to help manage IP risk by defining how IP may be shared or licensed in each country. The U.S. Patent and Trademark Office has identified a potential discrepancy between Chinese law and the bilateral U.S.-China Science and Technology Agreement upon which the IP Annex to the CERC Protocol is based, according to U.S. Patent and Trademark Office officials. These officials stated that the potential discrepancy is related to ownership of any improvements made to IP licensed between U.S. and Chinese entities. The U.S. Patent and Trademark Office is discussing the matter with other agencies, including DOE. According to DOE, differences in the laws of the two countries with respect to intellectual property protection were considered and addressed when drafting the IP Annex to the CERC Protocol. In that regard, in order to specify IP rights in greater detail, the IP Annex to the CERC Protocol requires each CERC track to have a Technology Management Plan in place before work on projects can begin.

Technology Management Plans: These plans, which are agreed to by all the participants in a CERC track, are intended to facilitate joint research and development and encourage information sharing by specifying IP rights in greater detail than the IP Annex to the CERC Protocol. According to DOE officials, the Technology Management Plans encourage sharing of background IP with research and development partners by setting up an IP framework in advance of work beginning on projects and making it clear that both governments have endorsed the Technology Management Plans. In addition, the Technology Management Plans state that participants shall negotiate in good faith to provide nonexclusive licenses for IP developed on joint projects with participants in the other country, as well as with third parties who are not participants. According to agency officials, this has not been the case in previous science and technology agreements between the United States and other countries. According to DOE officials, this provision was important because U.S. CERC participants were interested in being able to license IP to have market access in both countries.

IP training workshops: CERC has conducted five IP training workshops to help participants understand IP sharing under CERC and relevant IP practices and laws in the United States and China. According to DOE officials, these workshops are intended to promote research through cooperation and to encourage participants to share IP, and DOE intends to hold more workshops during the second phase of CERC. DOE also hosted a webcast about IP challenges and opportunities for U.S. organizations doing business in China that it posted on the CERC website.

IP guide: CERC developed an IP guide to assist researchers working on CERC projects.
This guide provides a broad overview of IP issues and information specific to CERC, such as information related to how to handle the commercial development of inventions that result from CERC research projects. IP experts group: DOE encouraged the establishment of an IP experts group to provide pro bono legal assistance to the CERC program. As of November 2015, the IP experts group had 19 U.S. members and 7 Chinese members. Members of the group reviewed and commented on CERC’s IP guide and are available to answer IP questions for participants on a limited basis. DOE officials said these steps have not eliminated all IP risk but that DOE is focused on preemptive IP protection and education for CERC participants, so that the participants can best protect their own IP interests. CERC participants we interviewed did not report any significant issues with steps DOE has taken to address IP risks. Representatives of 8 of the 12 participating organizations we spoke with about IP issues said the Technology Management Plan was helpful, while others said it had no effect on CERC projects or that they had not had an opportunity to test it. Notably, one participant found the Technology Management Plan helpful in resolving a joint venture negotiation issue. Specifically, the U.S. CERC participant wanted to license technology related to a CERC project to a Chinese company with a nonexclusive license so that it could also license the technology to other companies in China, while the Chinese company wanted an exclusive license to the technology. According to the participant, the Technology Management Plan was helpful in resolving the issue diplomatically and arriving at the desired agreement. Representatives of 9 of the 12 participating organizations said that there was nothing more that DOE could or should do to address IP risks. One participating software company suggested that CERC could further mitigate IP risks by providing software protection technology to participants. Another participating organization suggested that DOE could request that IP terms be summarized in project proposals, so there could be easy access to understanding how each project is managing IP risks. Although CERC participants reported no significant issues with DOE’s approach to managing IP risks, U.S. companies participating in CERC have been reluctant to share background IP as a part of CERC. U.S. CERC participants shared background IP with Chinese organizations for 3 of the more than 80 projects that took place during the first 5-year phase of CERC, according to a DOE survey of CERC tracks about IP completed in December 2015. The seven companies we spoke with regarding IP issues said that they have their own IP protection strategies in place, and several said they generally considered it a risk to share IP with any other companies that are potential competitors. Representatives of three of the companies mentioned that their companies had additional concerns about IP protection related to working in China for reasons such as a perception that the Chinese legal system will not reliably protect their IP rights. For its second 5-year phase, at the direction of higher-level management in DOE and DOE’s counterpart ministry in China, CERC will make an effort to bring more results to market, according to CERC officials. To that end, CERC is planning to focus more on demonstration projects and other projects that are closer to commercialization. 
A member of the CERC IP experts group said that IP risk is greater once technology is closer to commercialization because companies have invested more in the technology. This greater focus on projects closer to commercialization will continue, and may increase, the importance of sharing background IP during CERC’s second 5-year phase. DOE officials said they would like to encourage more sharing of background IP during CERC’s second 5-year phase and that through demonstration projects there is more likely to be sharing of background IP; however, according to two CERC participants we spoke with, sharing background IP may not be necessary for some demonstration projects. In addition, we found that participants’ willingness to share IP for demonstration projects varies. Specifically, participants in the clean coal track and one participant from the energy efficiency in buildings track said they were interested in demonstration projects and were potentially willing to share, or had shared, IP under CERC. Two participants in the energy efficiency in buildings track said they may be able to demonstrate their products without sharing IP, such as by using technology designed to protect software. However, the representatives of the two companies we spoke with from the clean vehicles track about IP issues said that they were not interested in participating in demonstration projects and that they would not share IP as part of any joint research effort such as CERC. DOE officials acknowledged that companies participating in CERC face a tradeoff between the risks of sharing background IP and the potential benefits, such as valuable research and development outcomes and gaining a market advantage through demonstrating projects in China. These officials also stated that it is appropriate for companies to assess risks for themselves and not share their most valuable IP if the related risk is determined to be too great. Willingness to share background IP is important for valuable research and development collaboration, but researchers would still be able to engage in work that could prove worthwhile if companies or researchers are unwilling to share their background IP under CERC, according to DOE officials. While not much background IP was shared by U.S. CERC participants during CERC’s first phase, U.S. and Chinese CERC researchers exchanged other types of information as inputs to their projects in ways that helped to further their research, according to CERC lead organizations. For example, some of the U.S. and Chinese organizations participating in the clean vehicles track agreed to share battery testing data. Because many batteries must be discharged repeatedly to understand their full life cycle under differing conditions, battery testing can take from months to years; this agreement to share data eliminated months of testing time, according to representatives from the clean vehicles CERC track. Both the United States and China have committed to efforts to address climate change, including doubling their research and development investments on clean energy. The three U.S. government programs we examined—DOE’s CERC, USTDA’s East Asia Program, and State’s CCWG—are among the mechanisms for cooperating with China to make progress in advancing clean energy technologies. CERC and CCWG officials are in the process of planning the next phases of those programs, and USTDA describes itself as an agency that values the role of data in making program decisions. 
All three programs realized some results as of the end of 2015 and monitor progress toward their goals by employing a variety of tools, such as performance measures and reporting and evaluation systems. However, we found that the three programs, as well as USTDA at the agency-wide level, generally lacked targets, which would enable them to compare the results that they have achieved with the results they had planned to achieve. Not having targets linked to program performance measures limits opportunities to identify potential program improvements and managers’ ability to generate and communicate more meaningful performance information. Furthermore, without published agency-wide targets, Congress and the public are unable to compare USTDA’s planned and actual performance, which would help them in providing oversight and ensuring the agency’s accountability.

1. To improve CERC’s performance monitoring, the Secretary of Energy should ensure that for CERC’s second phase the program creates targets and tracks progress against those targets in order to measure program performance.
2. To improve the agency’s performance monitoring, the Director of the U.S. Trade and Development Agency should develop and make public annual targets for the agency’s performance measures.
3. To improve the East Asia Program’s performance monitoring, the Director of the U.S. Trade and Development Agency should ensure that the East Asia Program sets targets for its performance measures and tracks progress against those measures.
4. To improve CCWG’s performance monitoring, the Secretary of State should ensure that CCWG develops measures and targets at the program level and tracks its performance against those measures and targets.

We provided a draft of this report for review and comment to DOE, State, and USTDA; the Departments of Agriculture, Commerce, the Interior, and Transportation; the Environmental Protection Agency; the Federal Energy Regulatory Commission; and the U.S. Agency for International Development. In their written comments reproduced in appendices IV, V, and VI, DOE, State, and USTDA, respectively, agreed with our recommendations and noted plans to take action to address them. In addition, USTDA reiterated information about its performance monitoring and evaluation processes that we included in our report, such as the target it set for one of its performance measures on federal prime contracts awarded to small businesses, the value of its U.S. export multiplier, and a description of its monitoring and evaluation processes. Furthermore, USTDA indicated that the agency has a target for the amount of U.S. exports generated in fiscal year 2017. We did not include this information in our report because the annual amount of U.S. exports generated agency-wide is not one of USTDA’s performance measures specified in its strategic plan. Commerce, DOE, State, and USTDA also provided technical comments that were incorporated, as appropriate. The other agencies provided no comments.

We are sending copies of this report to the appropriate congressional committees; the Secretaries of Energy and State; the Director of USTDA; the Secretaries of Agriculture, Commerce, the Interior, and Transportation; the Administrator of the Environmental Protection Agency; the Chairman of the Federal Energy Regulatory Commission; the Administrator of the U.S. Agency for International Development; and other interested parties.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Kimberly Gianopoulos at (202) 512-8612 or gianopoulosk@gao.gov, or John Neumann at (202) 512-3841 or neumannj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. You asked us to review U.S.-China cooperation on clean energy. This report examines (1) how much funding U.S. agencies have obligated to clean energy cooperation with China; (2) what is known about the results of key U.S.-China cooperation programs and the extent to which these programs follow leading practices in performance monitoring; and (3) the extent to which the U.S. Department of Energy (DOE) manages risks that may face U.S. participants in the U.S.-China Clean Energy Research Center (CERC). To determine which types of technologies would be related to clean energy, we looked for a U.S. government definition of the term, but found that the U.S. government has no uniform definition of clean energy that is applied government-wide. Instead, based on consultation with participating agencies and review of the White House’s June 2013 Climate Action Plan, we have determined that the following types of energy technologies are relevant for this review: renewable energy (including solar, wind, hydro, geothermal, and biofuels); energy efficiency technologies (i.e., technologies that decrease the intensity of energy usage); nuclear power; natural gas; clean coal (e.g., coal with carbon capture/sequestration); clean vehicle technologies; and improved energy infrastructure (e.g., smart grids). To describe the funding amounts that U.S. agencies have obligated to clean energy cooperation with China, we first took steps to determine the agencies involved in and providing funding to these efforts. To identify these agencies, we analyzed publicly available information on agency websites and outcome statements from two key annual meetings between the United States and China: the U.S.-China Strategic and Economic Dialogue and the Joint Commission on Commerce and Trade. In addition, when meeting with agencies that we had identified as involved in U.S.-China clean energy cooperation, we asked them which other agencies they worked with in this cooperation. We identified 10 agencies involved: (1) the Department of Commerce, (2) DOE, (3) the Department of the Interior, (4) the Department of State (State), (5) the Department of Transportation, (6) the Environmental Protection Agency, (7) the Federal Energy Regulatory Commission, (8) the U.S. Agency for International Development, (9) the Department of Agriculture, and (10) the U.S. Trade and Development Agency (USTDA). To determine which of these agencies obligated funds to U.S.-China clean energy cooperation and the amounts obligated in fiscal years 2008 through 2015, we sent a data collection instrument to all involved agencies that asked them to identify, among other items, the agencies’ U.S.-China clean energy cooperative activities; a description of each activity, including its purpose and the type(s) of clean energy it focused on; identification of the type of activity (e.g. 
joint research and development, trade mission, forum for technical discussion, feasibility study, regulatory cooperation, technical assistance, other); other agencies participating in the activity; the amount of funding obligated to each activity by fiscal year; the appropriations funding account used for such obligations; and the source of that information.

Upon receipt of the agencies' responses, we took multiple steps to ensure that responses were complete, including comparing all responses against each other to determine whether activities were reported by one agency but not by others that also participated in the activity, comparing responses against other agency documentation describing U.S.-China clean energy cooperative efforts, and following up with the agencies to ask for clarification regarding any activities that appeared to be missing. We also sent each agency a set of questions to help determine the reliability of the sources of the data and to ensure that the agencies considered the information provided a complete and accurate characterization of their agency's participation in and funding of U.S.-China clean energy cooperation. Our analysis of these responses showed that some activities had been reported for which the funding was not solely for the purpose of clean energy cooperation with China. For example, funding may have gone to multiple countries or may also have been used for other purposes. In cases where agencies were unable to separately identify the funding for clean energy cooperation with China, we excluded those activities and their funding from our analysis. After taking these steps, we determined that the data provided are sufficiently reliable for our purpose of identifying U.S.-China clean energy cooperation efforts and their obligated funding in fiscal years 2008 through 2015. We then analyzed these data by agency, key program, fiscal year, type of activity, and type of clean energy to describe the uses of funding provided to U.S.-China clean energy cooperation.

To describe what is known about the results of U.S.-China clean energy cooperation, we focused on the programs that received the largest amount of funding from each of the three agencies that provided the most funding to U.S.-China clean energy cooperation. The key programs we identified were DOE's CERC program, USTDA's East Asia Program, and State's U.S.-China Climate Change Working Group (CCWG). Both USTDA's East Asia Program and State's CCWG have some aspects not related to China or clean energy. For our report, we limited our analysis of program results to those aspects of the programs related to clean energy cooperation with China. However, for our analysis of these programs' performance monitoring, we looked at the whole programs because the same monitoring processes were followed for all aspects of the programs.

We analyzed the results of the three key clean energy programs. To describe the results that CERC yielded as of December 31, 2015, for its 19 key performance measures, we discussed with agency officials which performance measures could be aggregated across the years of CERC's first phase (2011 to 2015). CERC has 12 key performance measures that can be aggregated to show total results for that time period. Of those 12 performance measures, we excluded 3 measures related to funding and cost-share because those relate to inputs rather than program results.
We determined the results for the remaining 9 performance measures based on agency documentation. To illustrate different types of results, we also selected examples of nonquantifiable key outcomes as reported by CERC for each of the three program tracks from lists of outcomes that DOE considers most important.

To describe the export results generated by the 24 USTDA East Asia Program projects in our review as of the end of fiscal year 2015, we analyzed project documents for reported export outcomes and compared those data with information provided by USTDA from its internal database. We calculated the export multiplier for these projects using USTDA's formula of dividing the total amount of exports by the amount obligated for these projects (an illustrative calculation appears at the end of this discussion). We also judgmentally selected examples of USTDA's development impacts from our review of project documents to illustrate different types of development impacts.

To describe the results of the six CCWG initiatives in our review as of the end of fiscal year 2015, we examined the interagency agreements between State and the agencies implementing the initiatives. These agreements establish the performance measures and targets for each initiative. We obtained results data for each performance measure from the agencies' performance reporting to State. We then aggregated the targets and results across the six initiatives for each performance measure. To assess the reliability of the results data, we reviewed agency documents, including an external audit of one agency's data system; to the extent possible, cross-checked results information that was reported in multiple documents; and interviewed agency officials regarding how they validate their data. We determined that the data are sufficiently reliable for describing the results of USTDA's East Asia Program and CCWG through fiscal year 2015 and CERC's results through December 2015.

To describe how DOE, USTDA, and State monitor the performance of CERC, the East Asia Program, and CCWG, respectively, we analyzed documents from each agency, such as evaluation manuals, contracts, and performance reports. We also interviewed knowledgeable officials from each agency to discuss their processes for monitoring program performance. The GPRA Modernization Act of 2010 (GPRAMA) requires agencies to have a performance plan that, among other things, contains performance measures with established targets that can be used to assess progress toward achieving those targets. GPRAMA also requires that agencies make public their performance plans containing information on their goals and performance measures. Furthermore, linking goals to performance measures that are tracked against established targets is a leading practice for federal programs. To examine whether the three agencies followed these leading practices to measure their programs' performance, we reviewed agency planning and performance reporting documents to determine whether they contained goals, performance measures, and targets. We then confirmed our analysis by meeting with officials from each program to discuss whether the programs had goals linked to performance measures with established targets, as well as whether USTDA had these elements agency-wide.

We conducted fieldwork at the locations of the CERC lead organizations in West Virginia, Michigan, and California in November and December 2015 to interview representatives of the CERC lead organizations, other CERC participants, and CCWG participants to collect information on results and reporting.
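The export multiplier described above is a simple ratio: the total amount of exports generated divided by the total funds obligated. The following sketch illustrates the calculation; the project names and dollar amounts are hypothetical and are not drawn from USTDA's project-level data, which this appendix does not reproduce.

```python
# Hypothetical illustration of USTDA's export multiplier formula:
#   multiplier = total exports generated / total funds obligated
# All project names and dollar amounts below are invented for illustration.
projects = {
    "feasibility_study_A": {"obligated": 600_000, "exports": 12_500_000},
    "trade_mission_B": {"obligated": 250_000, "exports": 4_000_000},
    "technical_workshop_C": {"obligated": 400_000, "exports": 9_700_000},
}

total_obligated = sum(p["obligated"] for p in projects.values())
total_exports = sum(p["exports"] for p in projects.values())

# USTDA's formula: divide the total amount of exports by the amount obligated.
export_multiplier = total_exports / total_obligated

print(f"Total obligated:   ${total_obligated:,}")
print(f"Total exports:     ${total_exports:,}")
print(f"Export multiplier: {export_multiplier:.0f} to 1")
```

Under these invented figures, each dollar obligated is associated with about $21 in reported exports; the actual export figures for the 24 projects in our review are discussed in the body of this report.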
During these site visits, as well as in additional interviews conducted by phone with CERC participants and others, we discussed the potential benefits and risks of participating in CERC and DOE's management of risks. In addition to management and researchers at the three CERC lead organizations, we interviewed representatives from a nonprobability sample of 10 other organizations that have participated in CERC. Specifically, we interviewed representatives of three participants in the clean vehicles track, four participants in the clean coal track, and three participants in the energy efficiency in buildings track. These participants included eight private companies, one university, and one national lab that participate, or have participated, in CERC. One private company did not comment on IP because it did not participate in a research project with China, so that company is not included in our analysis of IP issues. We selected these CERC participants using criteria such as type of organization, current or former participation, amount of involvement in CERC, and proximity to a CERC lead organization we were visiting. Because we selected a nonprobability sample, the information obtained from these interviews is not generalizable to other CERC participants, but it provides illustrative information. During the site visits, we also interviewed four CCWG participants to learn more background on this program. These participants were selected based on proximity to a CERC lead organization we were visiting. For CCWG, we also interviewed officials from the agencies implementing all six of CCWG's clean energy initiatives through 2015, including DOE, the Department of Transportation, and the Environmental Protection Agency.

To determine what risks, if any, U.S. companies and researchers participating in CERC may face, we analyzed relevant documents and conducted multiple sets of interviews. We reviewed documents from the Office of the United States Trade Representative and the U.S. Patent and Trademark Office that describe IP issues and other risks related to doing business in China. In addition to interviewing CERC participants and DOE officials, we interviewed eight individuals identified as knowledgeable about U.S.-China cooperation on clean energy to get their perspectives on the potential IP risks for CERC participants and, in some instances, the steps DOE has taken to address those risks. To determine the extent to which DOE has taken steps to manage these risks, we identified the risk management steps DOE has taken and then compared these steps with federal internal control standards for risk assessment, which state that agencies should assess the risks they face from both internal and external sources and decide how to manage those risks and what actions should be taken. Specifically, we analyzed relevant documents, including the CERC Protocol and IP Annex, the Technology Management Plan for each CERC track, the CERC "Researchers' Guide to IP and Technology Transfer," and the results of a CERC IP survey. We directed clarifying questions about the IP survey to DOE officials and a CERC participant and determined that the survey information is accurate and reliable for our purposes. We also interviewed knowledgeable DOE officials to understand what steps DOE took to identify and respond to any risks that U.S. participants in CERC may face, and we interviewed CERC participants to get their feedback on the effectiveness of these steps.
We conducted this performance audit from June 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The seven U.S. agencies that obligated funding to U.S. participation in bilateral cooperation with China on clean energy in fiscal years 2008 through 2015 did so using funding from a variety of appropriations accounts. Table 3 provides a list of those accounts.

The following figures show the organizational and reporting relationships for the three key U.S.-China clean energy programs: the Department of Energy's U.S.-China Clean Energy Research Center (CERC), the U.S. Trade and Development Agency's (USTDA) East Asia Program, and the Department of State's U.S.-China Climate Change Working Group (CCWG). Figure 5 shows how CERC fits within the Department of Energy, including where CERC tracks get their funding, responsibilities for program oversight, and performance reporting channels. Figure 6 shows how the USTDA East Asia Program fits within USTDA, including where the East Asia Program's projects get their funding, responsibilities for program oversight, and data analysis and performance reporting channels. Figure 7 shows how CCWG fits within the Department of State, including where the CCWG initiatives get their funding, responsibilities for program oversight and strategic direction, and performance reporting channels.

In addition to the contacts named above, Kim Frankena (Assistant Director), Karla Springer (Assistant Director), Heather Latta (analyst in charge), David Dayton, Rachel Girshick, Marya Link, and Dina Shorafa made key contributions to this report. Jill Lacey, Lisa Pearson, Sara Sullivan, and Alex Welsh provided technical assistance.
The United States and China lead the world in energy consumption, and both are investing in renewable resources and in efforts to increase the efficiency of traditional fossil fuel sources, in part to address climate change. In 2014, a congressional commission raised questions about bilateral cooperation between the United States and China on clean energy, including potential IP risks to U.S. participants involved in collaborative research projects.

GAO was asked to review government-led U.S.-China collaborative initiatives on clean energy. This report examines (1) how much funding U.S. agencies obligated for clean energy cooperation with China, (2) what is known about the results of key programs and the extent to which they follow leading practices in performance monitoring, and (3) the extent to which DOE managed risks that CERC participants may face. GAO analyzed funding data, reviewed documents and compiled reported results, interviewed agency officials and participants of key programs, and conducted site visits.

In fiscal years 2008–2015, U.S. agencies obligated a total of about $97 million for clean energy cooperation with China. Two-thirds of this money was obligated for three key programs: a Department of Energy (DOE) program, the U.S.-China Clean Energy Research Center (CERC), which has focused on research and development in clean coal, clean vehicles, and energy efficiency in buildings; a U.S. Trade and Development Agency (USTDA) program focused on export promotion through projects such as feasibility studies and trade missions; and a Department of State (State) program that includes information sharing and technology demonstration projects across various clean energy technologies.

The key programs have yielded some results and have performance monitoring tools but generally lack targets for their performance, making the significance of their progress unclear. Examples of the programs' results include, for CERC, the launch of 15 products as of the end of 2015, such as software for enhancing the energy efficiency of buildings, and, for the USTDA program, about $230 million in U.S. exports from its clean energy projects through fiscal year 2015. Based on performance monitoring principles in the GPRA Modernization Act of 2010, it is a leading practice for federal programs to link goals to performance measures with established targets. Without targets, it is unclear how results compare with intended performance and what improvements may be needed; this is particularly important as DOE and State officials plan the next phases of their programs and as USTDA emphasizes the role of data in program decisions.

DOE identified intellectual property (IP) risks CERC participants may face, such as participants not having a clear plan for protecting IP, and took steps to manage them. These steps included requiring agreements clarifying IP rights and providing training, in part to encourage participants to share IP created outside of CERC projects. DOE officials said this IP sharing is important for valuable research and development collaboration. CERC participants GAO spoke with reported no significant issues with DOE's management of IP risks but, nonetheless, have been reluctant to share IP. DOE officials acknowledged that participants face a tradeoff between the risks and benefits of sharing IP with Chinese participants and that it is appropriate for companies to assess risks for themselves.
GAO is making four recommendations to enhance performance monitoring, including that DOE, USTDA, and State each develop targets for program-level performance and track progress against them for the key programs GAO reviewed. The agencies agreed with GAO's recommendations and plan to take actions to address them.
Flooding disasters of the 1920s and 1930s led to federal involvement in protecting life and property from flooding, with the passage of the Flood Control Act of 1936. Generally, the only financial recourse available to assist flood victims was postdisaster assistance. When flood insurance was first proposed in the 1950s, it became clear that private insurance companies could not profitably provide flood coverage at a price that consumers could afford, primarily because of the catastrophic nature of flooding and the difficulty of determining accurate rates. In 1965, Congress passed the Southeast Hurricane Disaster Relief Act, which provided financial relief for victims of flooding and mandated a feasibility study of a national flood insurance program; the study helped provide the basis for the National Flood Insurance Act of 1968 that created NFIP.

FEMA, through its Federal Insurance and Mitigation Administration (FIMA), manages the flood insurance program and sells and services NFIP policies primarily through private insurance companies in the Write-Your-Own (WYO) program. Program participation takes place at the community and the property-owner level. To participate, communities must adopt and enforce land use and control measures to mitigate the effects of flooding on new or existing homes in SFHAs. Participation for property owners (borrowers) in participating communities initially was voluntary; however, due in part to low levels of borrower participation, Congress established the mandatory purchase requirement for properties located in SFHAs in participating communities.

Regulated lending institutions and loan servicers acting on their behalf must ensure that borrowers (1) purchase flood insurance for any building or mobile home located or to be located in an SFHA in a participating community at the time a mortgage loan for that property is obtained, or at the time of increasing, extending, or renewing the loan, and (2) maintain the insurance through the life of the loan, or add it if the property is remapped into an SFHA and the borrower is then required to purchase flood insurance under the mandatory purchase requirement. Regulated lending institutions may not make, increase, extend, or renew any loan secured by improved real estate or a mobile home located or to be located in an SFHA in a community participating in NFIP unless the building or mobile home and any personal property securing the loan are covered by flood insurance for the term of the loan, subject to certain exceptions.

Although FDPA did not define "flood insurance" to specifically include or exclude private insurance, guidance issued by the Federal Insurance Administration in 1974 stated that private flood insurance policies could satisfy the mandatory purchase requirement as long as they met specified criteria. The National Flood Insurance Reform Act of 1994 (1994 Act) further amended the statutory framework for the mandatory purchase requirement and its enforcement by federal regulators. The 1994 Act directed that federal regulators issue regulations requiring regulated lending institutions to notify borrowers in SFHAs about the availability of flood insurance coverage under NFIP and that such coverage is also available from private insurers. Regulations implementing the 1994 Act require that regulated lending institutions mail or deliver a written notice to the borrower and servicer before completion of the transaction.
The notice must contain the following information: a warning that the building or the mobile home is or will be located in an SFHA; a description of the flood insurance purchase requirements under FDPA, as amended; a statement, where applicable, that flood insurance coverage is available under NFIP and also may be available from private insurers; and a statement of whether federal disaster relief assistance may be available if flooding (from a federally declared disaster) causes damage to the building or mobile home.

The 1994 Act also requires regulators to assess civil monetary penalties against regulated lending institutions found to have a pattern or practice of violating certain federal flood insurance requirements. It also required Fannie Mae and Freddie Mac to implement procedures reasonably designed to ensure that any mortgage they purchase (secured by improved real estate or a mobile home in an SFHA) has flood insurance.

More recently, in 2012, the Biggert-Waters Act reauthorized NFIP through September 30, 2017, and made some significant changes to FDPA. For example, the Biggert-Waters Act increased the maximum penalties that may be assessed against a regulated lending institution. The Biggert-Waters Act also required regulators to issue regulations directing lending institutions to accept private flood insurance (as defined by the act) and modified the required notification to borrowers of the availability of private flood insurance. In 2013, the federal regulators issued a joint notice of proposed rulemaking to implement certain provisions of the Biggert-Waters Act, including the modified notification and the required acceptance of private flood insurance. In July 2015, the regulators issued a final rule implementing the modified notification provision and certain other provisions of the Biggert-Waters Act and HFIAA. The July 2015 final rule did not implement the Biggert-Waters Act's requirements on private flood insurance; the agencies noted that they planned to address those requirements in a separate rulemaking. See figure 1 for a summary timeline of selected federal flood insurance legislation.

Insurance is primarily regulated by the states, unless federal law specifically relates to the business of insurance (as in the cases of flood and terrorism insurance). Federal regulators generally have authority over the activities of regulated lending institutions that relate to the flood insurance requirements under FDPA. The Federal Reserve, FDIC, OCC, NCUA, and FCA are the regulators responsible for overseeing the federal flood insurance requirements, including the mandatory flood insurance purchase requirement, for their institutions (see table 1).

Requirements and processes for regulating insurance may vary from state to state, but state regulators generally license insurance companies and agents, review insurance products and premium rates, and examine insurers' financial solvency and market conduct. According to NAIC, state regulators monitor an insurer's compliance with laws and regulations and a company's financial condition through solvency surveillance and examination mechanisms. Insurance regulators use insurance companies' financial statements and other information as part of their continuous financial analysis, which is performed at least quarterly, to identify issues that could affect solvency.
State insurance regulators typically conduct on-site financial solvency examinations every 3–5 years, although they may do so more frequently for some insurers, and may perform additional examinations as needed. Through NAIC, the regulators also collect financial information from insurers for ongoing monitoring of financial solvency.

Admitted insurers can sell insurance in one or more states but must be licensed to operate in every state in which they sell coverage. Admitted insurers can be licensed to sell several lines or types of coverage to individuals or families, including personal lines (such as homeowners, renters, and automobile insurance) and commercial lines (such as general liability, commercial property, and product liability insurance). Their activities are regulated primarily by the states. State regulators require admitted insurance companies to maintain specific levels of capital to continue to conduct business. Admitted insurers provide coverage for numerous risks, but they may not be willing to cover some risks. These include risks that are difficult to assess, occur too frequently to be acceptable to admitted insurers, are specialized or unusual, or require coverage that exceeds the capacity of admitted carriers. In these cases, potential insureds may turn to the nonadmitted market.

The nonadmitted market offers insurance products for risks for which coverage is unavailable in the admitted market. Among the nonadmitted insurers are surplus lines insurers. These insurers provide coverage for general, management, and professional liabilities and commercial, automobile, environmental, and property risks, among other things, and tailor their products to meet the needs of the insured. For example, they may write policies to cover personal lines of insurance, such as homeowners insurance in flood-prone areas. In 2014, surplus lines direct premiums written in the U.S. market totaled more than $40 billion.

While admitted insurers must be licensed in the states in which they sell insurance, surplus lines insurers must be licensed in only one state but may sell in other states in which they are not licensed, provided they are eligible according to those states' surplus lines laws. Surplus lines insurers also must sell their insurance through a state-licensed broker. The brokers typically represent insurance buyers and place coverage with surplus lines insurers. According to NAIC, on a surplus lines placement, the insurance regulator of the policyholder's home state has authority over the placement of the insurance by a surplus lines broker and can sanction the broker, revoke the broker's license, and hold the broker liable for the full amount of the policy. To place coverage in the surplus lines market, brokers must follow state due diligence requirements. Although these requirements vary from state to state, according to an association representing surplus lines insurers and brokers, they generally call for a "diligent search" of the admitted market before turning to a surplus lines insurer. The diligent search generally requires brokers to establish that three admitted companies licensed to write the kind and type of insurance being requested have declined to provide it.

All eight of the lenders with whom we spoke stated that they accept or would accept private flood insurance as satisfaction of the mandatory purchase requirement.
Moreover, all but one stated that they have policies and procedures that use one or more sources of guidance when determining whether a private policy satisfies the mandatory purchase requirement. All but one of the lenders referenced the criteria in FEMA's (now rescinded) Mandatory Purchase of Flood Insurance Guidelines (Guidelines) as one set of guidance; the Guidelines provided lenders with six criteria to evaluate private flood insurance policies for compliance with the mandatory purchase requirement. The Guidelines also provided that, to the extent a policy differs from the NFIP standard policy, the differences should be examined before the policy is accepted. (See table 2 and app. II for more detailed information on the criteria.) Half of the lenders with whom we spoke also referenced the July 2009 Interagency Questions and Answers Regarding Flood Insurance (2009 Questions and Answers) provided by the federal regulators, which cite FEMA's Guidelines as criteria for evaluating private flood insurance policies. One lender cited the requirements of Fannie Mae and Freddie Mac as a primary source of guidance. The same lender added that when working with loans insured by FHA and VA, it uses those agencies' requirements and guidance on the use of private flood insurance.

FEMA issued the Guidelines in 1989 to promote greater uniformity and understanding of the requirements of the mandatory purchase provisions among federal regulators of lenders, government-sponsored enterprises (such as Fannie Mae and Freddie Mac), federal agency lenders, and applicable lending institutions. In February 2013, FEMA rescinded the Guidelines, stating that the six elements listed in the guidance were intended to assist regulated lending institutions in determining the acceptability of private flood insurance and were not meant to be exclusive. Furthermore, FEMA stated, as it had in the Guidelines, that it had no authority to rule on the acceptability of private insurance policies. However, federal regulators told us that until they finalize regulations requiring regulated lending institutions to accept private flood insurance under the Biggert-Waters Act, regulated lending institutions should continue to use the 2009 Questions and Answers as guidance when determining whether a private flood insurance policy satisfies the purchase requirement. In these Questions and Answers, the regulators state that a private flood insurance policy may be an adequate substitute for an NFIP policy if it meets the criteria set forth by FEMA in its Guidelines. The 2009 Questions and Answers also state that regulated lending institutions may rely on a private policy that does not meet the FEMA criteria only in limited circumstances. Officials from each of the regulators acknowledged that the 2009 Questions and Answers cite the rescinded FEMA Guidelines but stated that the overall guidance remains in effect. They noted that regulated lending institutions continue to have discretion in determining whether a private flood insurance policy meets their obligations under the mandatory purchase requirement. Furthermore, in their interagency statement as well as their 2013 joint notice of proposed rulemaking, the regulators stated that the Biggert-Waters Act provision requiring the acceptance of private flood insurance, as defined by the act, will be implemented through rulemaking. Both documents noted that the regulators considered this provision not effective until final regulations are issued.
The 2013 joint notice of proposed rulemaking further stated that regulated lending institutions currently continue to have the discretion to accept flood insurance issued by private insurers. While the federal regulators said they are working on finalizing these regulations, they currently do not have a timetable for issuing final regulations implementing these requirements.

According to most of the lenders with whom we spoke, each exercises some level of discretion when determining whether a private policy meets the guidance they currently follow. For example, two lenders stated that if a private policy does not exactly match the FEMA criteria, they use their judgment in determining whether the policy provides sufficient protection under the law. Four lenders stated that if a policy does not meet the criteria, they may decline to accept the policy or may ask the borrower to obtain a policy that more closely meets the criteria. Most lenders described currently having procedures for evaluating a private flood insurance policy that generally require greater judgment and resources than are required when the borrower presents an NFIP policy. For example, half of the lenders stated that procedures for assessing a private flood insurance policy generally include obtaining the entire policy and evaluating it against the FEMA Guidelines. In contrast, one lender stated that the company obtains only the NFIP policy declaration page. A representative of another lender said the company has specialists who review private flood insurance policies and a centralized flood insurance function that reviews NFIP policies. Finally, most of the lenders with whom we spoke said they deal with a relatively small number of private policies. For example, while most of the lenders did not specifically track data on the use of private flood insurance, six of the eight lenders roughly estimated that less than 5 percent of the residential mortgages in their portfolios that require flood insurance were insured by a private policy.

While stakeholders with whom we spoke did not have data on the use of private flood insurance, they generally agreed that most private flood insurance written is in the surplus lines market for commercial properties or for coverage in excess of what NFIP offers. We were not able to identify any currently available information on the amount of flood insurance written by admitted carriers. According to NAIC officials, NAIC has begun to collect information on private flood insurance from various associations to inform state insurance regulators of the current interest and activity in the private market for flood insurance. NAIC also has been developing a requirement for insurance companies to include a line item in their annual financial statements that highlights their private flood insurance activity.

According to all of the lenders we interviewed, their notices to borrowers include language that encourages the comparison of NFIP and private insurance policies. More specifically, they said they use the sample language provided in the final rule that implemented the Biggert-Waters Act changes to the notification requirement, or something generally similar. (The statutory requirements and the sample language are described in more detail below.) All of the lenders with whom we spoke also stated that, other than providing the required notice to borrowers, they generally did not communicate with borrowers about flood insurance, either private or NFIP.
A few lenders stated that borrowers likely make insurance decisions with their insurance agent. As noted previously, the Biggert-Waters Act amended the notification requirement in the 1994 Act to revise and add information pertaining to private flood insurance. The Biggert-Waters Act requires that regulated lending institutions disclose to a borrower that flood insurance is available from private insurance companies that issue standard flood insurance policies on behalf of NFIP or directly from NFIP; that flood insurance providing the same level of coverage as a standard flood insurance policy under NFIP may be available from a private insurance company that issues policies on behalf of the company; and that the borrower is encouraged to compare the flood insurance coverage, deductibles, exclusions, conditions, and premiums associated with flood insurance policies issued on behalf of NFIP and policies issued on behalf of private insurance companies and to direct inquiries on the availability, cost, and comparisons of flood insurance coverage to an insurance agent.

In July 2015, the federal regulators issued a final rule providing that the revised and additional disclosures be included in the notification to borrowers. The regulators also amended an existing sample form (Sample Form of Notice of Special Flood Hazards and Availability of Federal Disaster Relief Assistance) with language to reflect the additional disclosures (see app. III). Since the sample form was first provided in 1996, regulations have provided that a regulated lending institution will be considered in compliance with the notice requirement if it provides a written notice to the borrower containing the language found in the sample form (within a reasonable time before the completion of the transaction).

Our review of the regulations issued to implement the various flood insurance statutes found that since the passage of the 1994 Act, as amended, the federal regulators have issued joint regulations implementing most of the statutory requirements. Most recently, in July 2015, the regulators issued final rules implementing numerous provisions of the Biggert-Waters Act and the Homeowner Flood Insurance Affordability Act of 2014 (HFIAA) related to the escrow of flood insurance payments on residential properties, exemptions from the mandatory purchase requirement, and, as stated earlier, revised disclosures in the notification to borrowers, among other provisions. The regulators did not implement the Biggert-Waters Act provision requiring the acceptance of private flood insurance, as defined by the act.

The federal regulators in our review (FDIC, the Federal Reserve, NCUA, OCC, and FCA) also have developed consistent examination guidance for regulated lending institutions' activities related to flood insurance. All five of the federal regulators currently responsible for lending regulation use risk-based examinations in their oversight of their institutions. When examining institutions, each has procedures for reviewing a regulated lending institution's compliance with the requirements of the National Flood Insurance Act, as amended. To assist examiners in executing these examinations, each federal regulator has developed a general examination manual that details examination policies and procedures, as well as flood insurance examination procedures that were developed on an interagency basis under the direction of the Federal Financial Institutions Examination Council.
According to the federal regulators' examination guidance, an examination team is to review a regulated lending institution's policies, procedures, and controls for ensuring compliance with the flood insurance requirements. Two federal regulators stated, and the other three regulators' examination guidance states, that examiners generally review a sample of loans to determine compliance with regulatory requirements, potentially including flood insurance requirements. We reviewed the examination manuals and flood insurance modules or checklists for each of the federal regulators and found that the relevant flood insurance portions of each manual were substantially similar and addressed the flood insurance-related requirements for the regulated lending institutions. For example, all five regulators' modules or checklists include reviewing loan documentation to ensure that a flood zone determination was made, notices were provided to the borrower and servicer, premiums were properly escrowed, and the property was covered by an insurance policy in the appropriate amount. Based on our review, the flood insurance examination procedures did not differentiate between private flood insurance policies and NFIP policies. The focus of the manuals was to assist examiners in assessing regulated lending institutions' compliance with flood insurance requirements, including the notice and mandatory purchase requirements.

Stakeholders cited regulatory uncertainty, such as the lack of final implementing regulations and the scope of the definition of private flood insurance in the Biggert-Waters Act, as some of the potential barriers lenders may face in fulfilling their responsibilities regarding private flood insurance and the mandatory purchase requirement. Stakeholders also cited recent changes made by FEMA to certain NFIP policies as potential barriers to consumers' use of private flood insurance. Finally, stakeholders identified certain market challenges as barriers to increased use of private insurance. Congress has been considering legislation to address some of the concerns raised by stakeholders, and some states have enacted legislation to make private flood insurance more accessible.

Stakeholders cited the lack of final implementing regulations, the scope of the Biggert-Waters Act definition of private flood insurance, and limited insurance expertise as potential barriers lenders face in evaluating and accepting private policies in satisfaction of the mandatory purchase requirement. The Biggert-Waters Act requires regulators to issue regulations requiring that regulated lending institutions accept private flood insurance, as defined in the statute, to meet the mandatory purchase requirement; however, regulators have yet to issue the final regulations to make this provision effective.

Lack of final rules. Various stakeholders with whom we spoke said that the lack of final rules creates regulatory uncertainty among lenders and insurers about the use of private flood insurance to satisfy the mandatory purchase requirement. For example, one association noted that while a law has been passed requiring regulated lending institutions to accept private flood insurance, without final rules implementing this requirement, it is unknown what the regulations will require and how lenders and private insurers will comply.
Officials from two lenders also stated that the lack of final regulations and the lenders' reliance on FEMA guidance put them in an uncertain and uncomfortable position, as this guidance has been rescinded and is not current. Furthermore, according to a private insurer with whom we spoke, there is uncertainty about whether private policies that lenders have accepted as satisfying the mandatory purchase requirement may retroactively be deemed noncompliant if they do not meet the requirements of the new regulations once they are finalized. This insurer added that the issuance of final rules on the acceptance and definition of private flood insurance would lead to more clarity and that public acceptance and use of private flood insurance also might increase. Finally, a state insurance regulator and a federal regulator noted that the lack of final rules has led to uncertainty among lenders about which private policies should be accepted.

Scope of definition and flexibility. Stakeholders we interviewed, as well as many comment letters submitted in response to the regulators' October 2013 proposed rules on private flood insurance, expressed concerns about the scope of the definition of private flood insurance in the Biggert-Waters Act, also discussed later in this section. They also noted the need for regulated lending institutions to retain some discretion when evaluating private flood policies. The Biggert-Waters Act codified the rescinded FEMA criteria (which the regulators and lenders had treated as guidance) into law as the definition of private flood insurance. See app. II for a side-by-side comparison of the FEMA criteria and the Biggert-Waters Act definition. According to some stakeholders, to be compliant with the Biggert-Waters Act provisions, regulated lending institutions may be able to accept only private policies meeting all the criteria in the law. They added that evaluating private policies against all elements of the statutory definition would reduce the discretion regulated lending institutions currently have when evaluating and accepting private policies. As discussed previously, some lenders stated that they currently use their judgment in determining whether to accept a private policy even if it does not meet all the criteria but still covers their risk. Furthermore, two lenders stated that, under the Biggert-Waters Act requirement, their evaluation of private policies would focus on compliance with the statutory definition rather than on ensuring the policies met the lender's own risk requirements. These lenders, as well as associations and lenders that submitted comment letters, emphasized the need for regulated lending institutions to have a certain level of flexibility and discretion in evaluating and accepting policies.

Importance of guidance. Comment letters and representatives of banking trade associations with whom we spoke also stated their desire for additional guidance on how to evaluate private flood insurance policies once the final rule is implemented. Some stakeholders want additional guidance because regulated lending institutions could face civil monetary penalties for noncompliance if they accepted policies that did not meet the statutory criteria. A few lenders we interviewed, as well as comment letters on the proposed rules, also noted that additional guidance clearly stating what is required of private flood insurance policies to comply with the Biggert-Waters Act would make lenders more comfortable in reviewing and accepting such policies.
Citing a need for clear and comprehensive guidance, several stakeholders urged the federal regulators and FEMA to work together to update and reinstate FEMA's rescinded Guidelines to reflect changes from the Biggert-Waters Act. In addition, in their comment letters, a few stakeholders noted that additional guidance would help promote uniformity across mortgage lenders and the private flood insurance industry.

Lender expertise. Some lenders and other stakeholders from the insurance and banking industries stated their concern about evaluating policies for compliance with one criterion in particular: that a private flood insurance policy provide coverage "at least as broad as" the coverage NFIP offers. A lender, two insurers, FEMA officials, and a regulatory official all noted that matching the elements of a private policy to an NFIP policy would be difficult. For example, FEMA officials said that NFIP's policy terms and rates do not reflect prior flood losses and that private insurers likely would not be able to develop their rates without taking prior losses into consideration. Two private insurers told us that, in their experience, some lenders find it difficult to compare private policies with NFIP policies. One insurer added that if a lender was not familiar with private flood insurance, the company would need to educate the lender on its policy's coverage and show how it matched that of an NFIP policy. More generally, some lenders and banking associations with whom we spoke, as well as many comment letters on the proposed rule, noted that lenders might not have the expertise to evaluate private flood policies against the statutory criteria. In addition, a FEMA-commissioned study on privatization concluded that increased use of private insurance would necessitate increased resources on the part of regulated lending institutions to assess whether private policies complied with the mandatory purchase requirement and met their needs.

In their October 2013 proposed rule, the federal regulators acknowledged concerns about the ability of regulated lending institutions to evaluate whether private policies met the statutory definition because some regulated lending institutions lacked the necessary technical expertise. Recognizing that regulating insurance is generally the domain of state regulators, the regulators suggested that state regulators might be the appropriate parties to determine whether a flood insurance policy met all the criteria in the statutory definition of private flood insurance. Therefore, the proposed rule includes a safe harbor that would allow regulated lending institutions to rely on a written determination by the state insurance regulator that a policy issued by a private insurer met the federal statutory definition of private flood insurance.

Concerns about state interpretations. While some comment letters and stakeholders with whom we spoke acknowledged that state insurance regulators have authority over and experience in approving and overseeing property/casualty insurance, and can do the same with private flood insurance, some also noted challenges with the proposed safe harbor. Specifically, in their comment letters, NAIC and other stakeholders questioned the authority and ability of state insurance regulators to be solely responsible for interpreting or enforcing federal law generally and, in particular, the federal statutory definition of private flood insurance.
Representatives of one state insurance regulator with whom we spoke noted that even with their experience and regulatory structure for approving insurance policies, state regulators would find it challenging to determine whether private policies were at least as broad as NFIP policies. FEMA's report on privatization acknowledges that with greater use of private flood insurance, state insurance regulators would have an increased role and would need to become more familiar with how flood insurance is modeled, priced, and managed. The FEMA report also noted that the involvement of different state regulators could result in many different regulatory systems. Some comment letters also raised this concern and added that varying state laws could result in inconsistencies in states' determinations of acceptable flood insurance policies.

Due to the challenges with the statutory definition of private flood insurance described earlier, comment letters from some lenders and banking associations, and a few lenders with whom we spoke, noted the need for an additional safe harbor option (that is, in addition to the one discussed in the proposed rule) on which regulated lending institutions could rely for approving private policies as meeting the statutory criteria. They suggested that insurance companies provide their own written determination that their policy complies with the definition of private flood insurance in the Biggert-Waters Act. One private insurer we interviewed has been asked by lenders to attest in writing that its flood insurance policy meets the requirements in the Biggert-Waters Act.

Stakeholders cited certain changes made by FEMA to NFIP policies on flood insurance, which may affect consumers financially, as potential barriers to consumers' ability or willingness to switch from NFIP and use private flood insurance to satisfy the mandatory purchase requirement.

Continuous coverage. NAIC and other stakeholders cited FEMA's interpretation of the continuous coverage requirement in connection with private flood insurance and its effect on consumers' ability to qualify for NFIP discounted rates. Effective April 1, 2016, FEMA prohibits the use of discounted rates (subsidized or "grandfathered" rates) for policies when there has been a lapse in NFIP coverage of more than 90 days (see fig. 2). FEMA officials noted that excluding non-NFIP policies from constituting continuous coverage was due to FEMA's interpretation of the HFIAA provision on policy lapses. Some stakeholders, including private insurers, noted that FEMA's decision to exclude private flood insurance policies from constituting continuous coverage could have financial repercussions for consumers seeking to reinstate their previously discounted NFIP coverage (after using a private policy to insure their properties against flood losses). Thus, if an NFIP policyholder who qualified for subsidized rates switched to a private flood policy and then switched back to an NFIP policy, the policyholder would no longer qualify for subsidized rates and would be charged full-risk rates based on the elevation of the property. For example, a single-family primary residence with $200,000 in building coverage and $50,000 in contents coverage would pay a subsidized premium of $2,880, but its full-risk rate would depend on elevation: if the building's elevation were 1 foot below base flood elevation, the full-risk premium would be $4,650; if it were 2 feet below, the full-risk premium would be $6,890 (a simple illustration of this comparison follows).
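To make the stakes of this example concrete, the sketch below computes the premium increases implied by the dollar amounts cited above. It is a minimal illustration: the premiums come from the example in the text, while the computed differences and percentage increases are derived arithmetic rather than figures reported by FEMA.

```python
# Premium comparison for the example property cited above:
# $200,000 building / $50,000 contents coverage.
# The premium amounts come from the example in the text; the differences
# and percentages below are simple derived arithmetic.
subsidized_premium = 2_880
full_risk_premiums = {
    "1 foot below base flood elevation": 4_650,
    "2 feet below base flood elevation": 6_890,
}

for elevation, premium in full_risk_premiums.items():
    increase = premium - subsidized_premium
    pct_increase = increase / subsidized_premium * 100
    print(f"{elevation}: ${premium:,} full-risk premium, "
          f"${increase:,} (about {pct_increase:.0f} percent) above the subsidized rate")
```

Under these figures, losing access to the subsidized rate would raise the policyholder's annual premium by roughly 61 to 139 percent, depending on the property's elevation, which helps explain the reluctance that insurers described.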
Two private insurers and an association representing insurers told us that due to the risk of losing their discounted NFIP rates, consumers may avoid the private market as an option to insure against flood losses.

Narrowed opportunities for premium refunds. According to NAIC and the private flood insurance companies with whom we spoke, another recent regulatory change, a FEMA guidance revision related to policy cancellations, could discourage the use of private flood insurance by consumers. FEMA allows NFIP coverage to be terminated in accordance with its stated cancellation reasons. If coverage is terminated for a valid cancellation reason, the insured may be entitled to a full or partial refund of the paid NFIP premium. Effective November 1, 2015, FEMA no longer allows policyholders to cancel their NFIP policy and obtain a refund if they obtained a non-NFIP policy (a private flood insurance policy), although FEMA previously had allowed refunds under this scenario (see fig. 3). According to NAIC officials, if the reason for cancellation of an insurance policy is the request of the policyholder, the policyholder generally would receive a refund of the paid premium, which may be less than the prorated premium if so stipulated by the insurance policy. Two organizations that represent private insurers also stated that policyholders generally would receive a refund of their paid premium when cancelling a policy after the effective date of the policy.

FEMA officials told us that this change came about due to a review of cancellation allowances conducted to incorporate any necessary changes from flood insurance legislation (the Biggert-Waters Act and HFIAA). Specifically, FEMA officials said that FEMA can allow cancellation of policies and refunds only according to the terms and conditions of NFIP's Standard Flood Insurance Policy (SFIP). They stated that while the NFIP policy allows for the cancellation of a policy and refund of a premium, on a prorated basis, when a duplicate NFIP policy is obtained on the same property, there is no similar provision when a non-NFIP policy is obtained. FEMA did not provide any additional analysis or offer further explanation for this change. However, FEMA could have taken steps to revise the SFIP to allow for such refunds. Allowing this type of refund would be in line with industry practice of refunding paid premiums, as well as with Congress's interest in transferring some of the federal government's exposure to flood insurance risk to the private sector.

According to most of the private flood insurers with whom we spoke, this change to the cancellation policy further complicates the situation for consumers who wish to opt for private flood insurance when coupled with the requirement that regulated lending institutions establish escrow accounts for flood insurance premiums and fees for loans made after January 1, 2016. Generally, according to some insurers with whom we spoke, lenders use escrow accounts to pay for flood insurance from 30 to 90 days in advance of the effective date of insurance policies. This combination of factors may present a challenge to consumers because they may not know when escrowed payments are made or because of the limited time frame in which they could cancel their NFIP policies to avoid forfeiting their premium, which could result in the following scenarios. According to one private insurer, policyholders may not be aware that the bank already renewed their NFIP policy and made the payment from the escrow account.
Because FEMA will no longer refund premiums to policyholders who switch to a non-NFIP policy, consumers must cancel their NFIP policy in advance of their NFIP renewal date to avoid forfeiting their premium. A representative of another private insurer added that consumers who wish to obtain a private insurance policy must consider making this change well in advance of the NFIP policy effective date (or renewal date) and that consumers generally do not address their insurance needs that far in advance or realize when their insurance has been paid from their escrow account.

Many stakeholders with whom we spoke said that low private-sector participation in the flood insurance market also was due to certain market challenges, such as NFIP's discounted rates, a topic on which we have reported previously and on which we have ongoing work. For example, in our January 2014 report, we noted that, according to stakeholders, insurers needed to be able to charge premium rates that reflect the full risk of potential flood losses, but with NFIP charging rates that were not actuarially sound, private companies found it difficult to compete in the market. Stakeholders with whom we spoke for this report (including insurance trade associations, several state insurance regulatory agencies, and another organization with flood insurance expertise) also noted this concern. Several stakeholders, including private insurers and NAIC, emphasized that having to compete with NFIP's discounted rates, rather than any regulatory issue, is one of the primary barriers to increased use of private flood insurance. Similarly, our review of literature related to the involvement of the private sector in flood insurance found that the ability to charge actuarially sound rates was a strong factor in market participation.

Many stakeholders (including insurance trade associations, a private insurer, and officials from two state regulatory agencies) also noted the lack of access to NFIP data on flood losses and claims as a barrier to more private companies offering flood insurance. Stakeholders added that access to such data would allow private insurance companies to better estimate losses and price flood insurance premiums. In our previous work, stakeholders told us that access to NFIP policy and claims data would help private insurers assess flood risks and determine which properties they might be willing to insure. However, in our previous report we also noted that, according to FEMA officials, the agency would need to address privacy concerns to provide property-level information to insurers, because the Privacy Act prohibits the agency from releasing detailed NFIP policy and claims data. The Privacy Act governs how federal agencies may use the personal information that individuals supply when obtaining government services or fulfilling obligations. FEMA officials said that while the agency could release data in the aggregate, some information could not be provided in detail. For example, FEMA could provide ZIP code-level information to communities but would need to determine how to release property-level information while protecting the privacy of individuals. FEMA officials added that recent breaches of the federal government's information technology controls have led to additional concerns about the secure storage and sharing of NFIP data.
According to NAIC officials, after receiving a request from a state insurance regulator for NFIP data on flood losses and claims in their state, FEMA officials approached NAIC to discuss options for sharing such data. NAIC officials told us that NAIC's catastrophe insurance working group is coordinating with FEMA actuaries to determine how FEMA could share specific data with states without disclosing personally identifiable information.

Our previous work on private-sector involvement in flood insurance also cited other concerns raised by stakeholders. Stakeholders we interviewed for that work noted several conditions that must be present to increase private-sector involvement in the sale of flood insurance, including that insurers need sufficient consumer participation to properly manage and diversify their risk. They also identified several strategies that could help create conditions that would promote the sale of flood insurance by the private sector, such as Congress eliminating discounted rates and NFIP charging full-risk rates, or NFIP providing residual insurance. In addition, we have an ongoing review examining the policy goals that have led to NFIP's debt to Treasury—for example, providing subsidies to certain policyholders—and how the program can be reformed under different circumstances (for example, if there is increased participation in flood insurance by the private sector).

Finally, some stakeholders also told us that certain FEMA restrictions on WYOs may be an impediment to increasing the availability of private flood insurance in the market. Specifically, the NFIP Financial Assistance/Subsidy Arrangement (Arrangement) with WYOs restricts WYOs from selling stand-alone flood insurance coverage outside of NFIP. Officials from two state insurance agencies and one insurance trade association said that the FEMA restriction keeps the companies with the most experience in flood insurance from entering the private market. FEMA officials stated that, despite this restriction, a number of companies have found ways to offer flood insurance while remaining compliant with the Arrangement. For example, if a subsidiary of a large insurance company is a WYO, the parent company could offer stand-alone flood coverage. Alternatively, the WYO could offer flood coverage as part of a multiperil policy.

Since the passage of the Biggert-Waters Act, new legislation has been introduced in Congress that could help address some of the regulatory issues, as well as a recent change made to NFIP, that stakeholders cited as barriers. The proposed legislation would amend the statutory definition of private flood insurance that satisfies the mandatory purchase requirement. That is, the definition of private flood insurance in the proposed legislation would replace the specific criteria in the Biggert-Waters Act and instead define private flood insurance as an insurance policy that (1) is issued by an insurance company that is (I) licensed, admitted, or otherwise approved to engage in the business of insurance in the state in which the insured building is located, by the insurance regulator of that state, or (II) eligible as a nonadmitted insurer to provide insurance in the home state of the insured; (2) is issued by an insurance company that is not otherwise disapproved as a surplus lines insurer by the insurance regulator of the state in which the property to be insured is located; and (3) provides flood insurance coverage that complies with the laws and regulations of that state.
Some stakeholders with whom we spoke supported the proposed legislation because it relies on the expertise of state insurance regulators rather than individual regulated lending institutions. According to these stakeholders, the changes to the statutory definition of private flood insurance clarify that flood insurance approved by state insurance regulators is acceptable in satisfying the mandatory purchase requirement. But according to a consumer advocate's January 2016 testimony to Congress, facilitating the sale of flood insurance by surplus lines carriers could create problems for consumers because surplus lines policies are subject to fewer consumer protections than admitted policies. For example, surplus lines policies are not backed by state guaranty funds, and states do not regulate surplus lines policy forms, which may contain exclusions that regulators would not approve for an admitted carrier. At the same hearing, NAIC stated its support for the proposed legislation and noted that as the private flood insurance market grew, it expected state insurance regulation to continue to evolve to meet the size of the market as well as the needs of consumers. NAIC officials also told us that the proposed legislation would alleviate their concerns that the Biggert-Waters Act allows federal regulators to regulate the solvency of private flood insurers. However, Fannie Mae officials stated that the proposed legislation (H.R. 2901, as adopted by the House on April 28, 2016) would weaken Fannie Mae's risk-management practices to the extent that it would prevent Fannie Mae from maintaining or taking prudent actions to protect homeowners and collateral. Furthermore, while noting that creating a viable private flood insurance market is in the interest of the housing finance system and taxpayers, Freddie Mac officials stated that they have concerns that the proposed legislation could shift the risk of flood loss to Freddie Mac; they have been addressing these concerns with the Federal Housing Finance Agency.

The proposed legislation also would amend the flood insurance statute to explicitly allow private flood insurance coverage to satisfy any continuous coverage requirements, reversing a recent change that FEMA made to NFIP (under which coverage is not considered continuous if supplied through a non-NFIP policy for more than 90 days). As discussed earlier, some stakeholders cited FEMA's change to this policy as a potential barrier to the use of private insurance. Specifically, the proposed legislation states that any period during which a property was continuously covered by private flood insurance, as defined by the proposed legislation, should be considered a period of continuous coverage for purposes of applying any statutory, regulatory, or administrative continuous coverage requirement. However, the proposed legislation does not directly address NFIP's recent change in cancellation reasons, which can preclude refunds of premiums when coverage was supplied by a non-NFIP policy, or the restriction on WYOs offering stand-alone flood insurance products.

Whether a state permits a particular private flood insurance policy to be offered to consumers is a different question from whether the regulated lending institution or federal regulator finds the private policy sufficient to satisfy the mandatory purchase requirement.
More specifically, regulated lending institutions and federal regulators, rather than states, are responsible for ensuring compliance with the mandatory purchase requirements under federal law. However, according to the state officials with whom we spoke, states have the authority to regulate the flood insurance offered by private insurance companies in their states, and some have taken steps to try to make private flood insurance more accessible. For example, Florida and West Virginia recently passed legislation that outlines the regulatory structure for insurance companies that choose to sell private flood insurance in those states. Specifically, both states allow insurance companies to write flood insurance policies in a more customized way and charge rates in accordance with state laws. Both state laws also reduce the barriers to accessing the surplus lines market for some flood insurance by not requiring an agent to attempt to place a policy with an admitted company or attempt to find a comparable admitted product. Before the enactment of these laws, the agent would have had to make a "diligent effort" to seek such coverage in the admitted market before placing a flood insurance policy with a surplus lines carrier. According to officials from both states, these laws were not necessarily needed to allow private insurers to offer flood insurance, as officials from both states believed prior statutes permitted the state insurance regulator to allow for flood insurance. However, officials noted that these laws publicly encourage private companies to sell flood insurance in these states, with the intent of offering consumers more options.

Similarly, according to the National Association of Professional Surplus Lines Offices (NAPSLO), 18 states have made private flood insurance more accessible by providing for easier access to the surplus lines insurance market. Furthermore, 12 of these states allow direct access to the surplus lines market with no restrictions for flood insurance, which means that generally an insurance agent would not need to perform a diligent search. According to representatives of NAPSLO, this generally includes not having to first seek coverage with NFIP. NAPSLO officials also stated that the other six states allow for access to the surplus lines insurance market under specific circumstances, such as when a community does not participate in NFIP.

The Biggert-Waters Act took steps to encourage greater participation by the private sector in flood insurance, but aspects of the act's provisions on private flood insurance have created some regulatory uncertainty among insurers and lenders. As a result, Congress has been considering some steps designed to address potential regulatory barriers to the use of private-sector insurance to satisfy the mandatory purchase requirement, including considering legislation that would amend the statutory definition of private flood insurance. The proposed legislation also would explicitly allow private flood insurance coverage to satisfy any continuous coverage requirements, reversing a recent change that FEMA made to NFIP that some stakeholders cited as a potential barrier to the use of private insurance. However, a recent policy change made by FEMA might discourage the use of private flood insurance. FEMA no longer allows policyholders to cancel their NFIP policy and obtain a refund, on a prorated basis, if they obtained a non-NFIP policy (private flood insurance policy).
FEMA made the change based on a review of its policies brought about by the enactment of the Biggert-Waters Act and HFIAA. FEMA officials stated that FEMA can only allow the cancellation of policies and refunds according to the terms and conditions of NFIP's standard policy, which does not directly address refunds of premiums when a non-NFIP policy is obtained. However, FEMA could have taken steps to revise the SFIP to allow for such refunds. It is generally industry practice to refund policyholders the unused portion of their premium if they decide to cancel their insurance policies. Allowing this type of refund also would be in line with congressional interest in transferring some of the federal government's exposure to flood insurance risk to the private sector.

To address a potential challenge for consumers who wish to opt for private flood insurance and who must have insurance under the mandatory purchase requirement, we recommend that the FEMA Administrator consider reinstating the cancellation reason code allowing policyholders to cancel their NFIP policy and be eligible for premium refunds, on a prorated basis, if they obtain a non-NFIP policy after their NFIP policy became effective. If changes are needed to NFIP's standard flood insurance policy to allow such refunds, FEMA should take the necessary steps to amend its standard flood insurance policy.

We provided a draft of this report to FCA, FDIC, Federal Reserve, FEMA within the Department of Homeland Security, HUD, NCUA, OCC, Treasury, Fannie Mae, Freddie Mac, and NAIC for review and comment. FCA, FDIC, Federal Reserve, OCC, Fannie Mae, Freddie Mac, and NAIC provided technical comments, which we incorporated as appropriate. DHS provided a written response, reproduced in appendix IV, in which FEMA agreed with our recommendation and stated that it plans to reinstate its previous policy allowing policyholders to cancel their NFIP policy and obtain a prorated refund if they obtain a comparable non-NFIP policy. The response also stated that the policy change will be included in an April 2017 program bulletin, with an effective date of October 1, 2017, and will be followed by a subsequent rulemaking. NCUA also provided written comments, reproduced in appendix V, in which agency officials stated that the agency plans to, as noted in the report, update examination policies and procedures to reflect recent changes in flood insurance regulation and work with the other regulators to finalize private flood insurance regulations.

We are sending copies of this report to the appropriate congressional committees and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or cackleya@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

We were asked to revisit the issue of private-sector involvement in flood insurance with a focus on the regulatory environment and whether it posed any barriers to private flood insurance policies being purchased by homeowners to satisfy the mandatory purchase requirement.
This report describes (1) regulated lending institutions' and federal regulators' implementation of statutory provisions governing the use of private flood insurance to satisfy the mandatory purchase requirement and (2) views on any regulatory or other barriers to the increased use of private flood insurance to satisfy the mandatory purchase requirement.

To address these objectives, we reviewed flood insurance laws and regulations, related agency guidance, congressional hearings, and studies, as well as past GAO reports on insurance markets and flood insurance. We reviewed relevant federal flood insurance laws, including the Flood Disaster Protection Act, National Flood Insurance Reform Act, Biggert-Waters Flood Insurance Reform Act, and the Homeowner Flood Insurance Affordability Act, to identify requirements placed on federal entities responsible for lending regulation (federal regulators) and how each has addressed the use and acceptability of private flood insurance to fulfill the mandatory purchase requirement. The federal regulators involved in implementing the provisions of the flood insurance laws governing the mandatory purchase requirement are the Office of the Comptroller of the Currency (OCC), Board of Governors of the Federal Reserve System (Federal Reserve), Federal Deposit Insurance Corporation (FDIC), Farm Credit Administration (FCA), and National Credit Union Administration (NCUA). We also reviewed the rules implementing these statutes to determine what requirements are placed on regulated lending institutions.

To understand how lenders have been fulfilling the requirements, we interviewed representatives from eight lending institutions consisting of banks and nonbanks. To identify lenders likely to have experience with private flood insurance, we used National Flood Insurance Program (NFIP) and SNL's Mortgage Market Share data to identify the largest lenders (by volume of mortgages originated) in those counties with the largest number of NFIP flood insurance policies. To further categorize the lending institutions, we then used criteria such as the type of institution, the regulatory agency responsible for supervising the institution, and size by total assets (selecting lenders from large, medium, and small asset categories). We selected a purposive sample of eight lenders, with one or two institutions supervised by each regulator, and took care to ensure a balanced representation of the different types and sizes of institutions. For the purposes of this report, we use the term "lenders" to refer to the eight lenders with whom we spoke. Our sample included two nonbank mortgage lenders that are not directly subject to the requirements of the Flood Disaster Protection Act because they do not fit within the statutory definition of regulated lending institutions. We spoke with these nonbank lenders to better understand practices related to private flood insurance among the full scope of mortgage market participants. This selection of mortgage lenders is not generalizable to all mortgage lenders. We determined that the data used were reliable for the purpose of identifying and selecting mortgage lenders for a nongeneralizable sample.

To better understand how the federal regulators have implemented the statutory requirements, we compared their regulations with the statutes. We also reviewed guidance provided by the federal regulators and the Federal Emergency Management Agency (FEMA) related to the use of private flood insurance to satisfy the mandatory purchase requirement.
We reviewed and compared each regulator's examination manuals and flood insurance modules to understand how they examined the regulated lending institutions they supervise for compliance. For example, we reviewed each regulator's flood insurance examination module to determine what specific aspects of the regulations examiners were reviewing. We also interviewed officials from OCC, Federal Reserve, FDIC, FCA, NCUA, and FEMA. Finally, we reviewed the policies of the Federal National Mortgage Association (Fannie Mae), Federal Home Loan Mortgage Corporation (Freddie Mac), Department of Veterans Affairs, and Federal Housing Administration to understand their flood insurance requirements for mortgages they either purchase or insure. We also followed up with the Department of Housing and Urban Development's Federal Housing Administration to understand any potential changes to its policies.

We interviewed various stakeholders—including selected lenders, organizations in the insurance and lending industries, private flood insurance companies, and state insurance regulatory agencies—about any regulatory barriers to the increased use of private flood insurance. We interviewed representatives of the eight selected lenders on barriers they encounter in evaluating and accepting private flood insurance policies and their views on enacted and proposed legislation on private flood insurance. To obtain other views on private flood insurance, we selected a diverse group of 13 organizations to interview based on the type of organization, the organization's purpose and relevance to private flood insurance, and membership. We identified these organizations based on previous and ongoing GAO work on flood insurance and comment letters submitted in response to federal regulators' proposed rules on private flood insurance.

We also selected and interviewed representatives of four private insurance companies involved in the private market for flood insurance. To identify these companies, we reviewed news articles; interviewed selected lenders, state insurance officials, and other industry stakeholders; reviewed comment letters submitted in response to proposed rules issued by the federal regulators on private flood insurance; and reviewed prior GAO work. To select the insurance companies to interview, we focused on those that were identified as currently writing flood insurance in the admitted or nonadmitted insurance markets. This selection of private insurance companies is not generalizable to all private insurance companies.

To understand selected states' regulation of flood insurance and obtain the perspectives of state insurance regulators on regulatory barriers related to private flood insurance, we interviewed insurance regulators and their representatives from Florida, Louisiana, Pennsylvania, Texas, and West Virginia and reviewed selected laws from Florida and West Virginia. We selected these states based on the volume of NFIP policies written and flood insurance regulatory activity in each state. The information provided by the representatives of the state insurance regulators is not generalizable to how all states oversee flood insurance.

We also reviewed comment letters submitted to the regulators (OCC, Federal Reserve, FDIC, FCA, and NCUA) in response to proposed federal rules on the acceptance of private flood insurance. We obtained these comment letters by downloading each from www.regulations.gov, docket ID OCC-2013-0015.
We analyzed 44 comment letters submitted by interested parties regarding the regulators' joint proposed rules as of May 2016. We reviewed each comment letter to identify comments pertaining to potential or actual regulatory barriers to the use of private flood insurance and effects of the proposed rules. One analyst created a summary of common themes from the comment letters, which a second analyst verified.

We also conducted a literature search for news articles and studies to determine what is known about any regulatory barriers to using private flood insurance to satisfy the mandatory purchase requirement. To identify news articles and studies, we conducted searches of various databases, such as EconLit and ProQuest, using search terms such as "private" and "flood insurance." From these searches, we identified and reviewed 19 studies and articles published since 2005 that were relevant to barriers to private flood insurance. We reviewed these studies and articles to identify regulatory barriers to the use of private flood insurance. We found that these studies and articles generally did not identify regulatory barriers but identified other relevant barriers related to increasing involvement of the private sector in flood insurance.

We also interviewed representatives from the Department of the Treasury's Federal Insurance Office and the National Association of Insurance Commissioners (NAIC) to obtain their views on regulatory barriers, enacted and proposed legislation and regulations, and NAIC's views on states' laws and regulations related to private flood insurance.

Appendix II: Private Flood Insurance Criteria in FEMA's Guidelines (2007) and the Biggert-Waters Act Definition (2012)

The following table compares the guidance provided by the Federal Emergency Management Agency (FEMA) in the 2007 Mandatory Purchase of Flood Insurance Guidelines (rescinded in February 2013) for use of private flood insurance to satisfy the mandatory purchase requirement with the statutory definition of private flood insurance in the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act).

The following sample notice is included as an appendix in the regulators' respective flood insurance regulations.

In addition to the contact named above, Allison Abrams (Assistant Director), Anar Jessani (Analyst-in-Charge), Pamela Davidson, Matthew Keeler, Marc Molino, Patricia Moye, and Barbara Roesmann made key contributions to this report.
NFIP was created, in part, because private insurers historically have been unwilling to insure against flood damage. The private flood insurance market remains small. The 2012 Biggert-Waters Act took steps to encourage private-sector participation by requiring regulators to direct lenders to accept private flood insurance to satisfy the mandatory purchase requirement—a federal requirement to purchase flood insurance on certain properties. GAO was asked to examine whether the regulatory environment posed barriers to private flood insurance. This report describes (1) lender and regulator implementation of provisions on private flood insurance and (2) views on regulatory or other barriers to using private flood insurance to satisfy the mandatory purchase requirement. GAO reviewed laws, regulations, and guidance and interviewed officials from FEMA, five federal regulators, government-sponsored enterprises, and the National Association of Insurance Commissioners. GAO interviewed various stakeholders, selected based on their flood insurance experience and size, among other factors: a nongeneralizable sample of eight lenders; 13 organizations; five state insurance regulators; and four private flood insurers.

Lenders and their regulators have taken some action to implement provisions on private flood insurance in the Biggert-Waters Flood Insurance Reform Act of 2012 (Biggert-Waters Act). Specifically, lenders told GAO they send notifications to borrowers that encourage borrowers to compare private and National Flood Insurance Program (NFIP) policies. Lenders with whom GAO spoke accepted private policies and generally said they used Federal Emergency Management Agency (FEMA) guidelines and interagency guidance to evaluate private policies. Federal regulators (Board of Governors of the Federal Reserve System, Federal Deposit Insurance Corporation, Office of the Comptroller of the Currency, National Credit Union Administration, and Farm Credit Administration) issued interagency questions and answers on private insurance in 2009, which cite the FEMA guidelines. However, FEMA rescinded its guidelines in 2013, citing a lack of authority to rule on the acceptability of private insurance policies. Federal regulators have issued joint proposed rules to implement the Biggert-Waters Act definition of private flood insurance but have not yet finalized them. Regulators stated that the information provided in the 2009 Questions and Answers remains in effect until final rules implementing the private flood insurance provisions of the Biggert-Waters Act are adopted.

Stakeholders cited a number of challenges as potentially inhibiting the use of private flood insurance to satisfy the mandatory purchase requirement.

Regulatory uncertainty. Without final regulations implementing the Biggert-Waters Act requirement to accept private flood insurance, there was uncertainty among stakeholders about which private policies would satisfy the mandatory purchase requirement. Many stakeholders, including some lenders, emphasized that lenders needed discretion when evaluating policies and that ensuring policies met the Biggert-Waters Act definition would be challenging for lenders, in part due to their lack of insurance expertise.

Recent NFIP changes. Stakeholders noted that a recent NFIP policy change could discourage consumers' use of private insurance. FEMA recently stopped allowing policyholders to obtain a refund of their unused NFIP premium if they obtained a non-NFIP policy.
FEMA officials stated that, based on their recent review of NFIP cancellation policies, this practice was not explicitly permitted in the NFIP standard flood insurance policy terms and conditions. Due to this change, consumers who wish to obtain private coverage would forfeit any unused portion of their premium if they switched after the NFIP policy's effective date. While FEMA's standard policy terms do not specifically address refunds when a non-NFIP policy is obtained, FEMA could revise the standard policy to allow for such refunds. Allowing this type of refund would be in line with industry practice to allow refunds of paid premiums when cancelling insurance policies, as well as congressional interest in transferring some of the federal government's exposure to flood insurance risk to the private sector.

Market challenges. Many stakeholders noted that low private-sector participation in flood insurance was also due to market challenges, some citing the inability to compete with discounted NFIP rates as a primary barrier—a finding that GAO also reported in previous work (GAO-14-127).

GAO recommends that FEMA reconsider allowing policyholders who cancel their NFIP policy to be refunded, on a prorated basis, when obtaining a non-NFIP policy and take any necessary steps to amend the NFIP standard policy to do so. FEMA agreed with the recommendation.
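To make the proration concept concrete, the following is a minimal worked example; the premium amount and dates are hypothetical illustrations of ours, not figures from GAO's review. Under a prorated refund, a policyholder who paid an annual premium P and cancels with d days of the 365-day term unused would receive

\[
\text{refund} = P \times \frac{d}{365}, \qquad \text{for example, } \$1{,}000 \times \frac{275}{365} \approx \$753
\]

for a policy cancelled 90 days into its term. Under FEMA's revised cancellation policy, a policyholder making that same switch to a non-NFIP policy would instead forfeit the entire unused amount.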
TSA receives thousands of air passenger screening complaints through five centralized mechanisms but does not have an agencywide policy, consistent processes, or an agency focal point to guide the receipt of these complaints or to "mine" these data to inform management about the nature and extent of the screening complaints to help improve screening operations and customer service. For example, TSA data indicate the following:

From October 2009 through June 2012, TSA received more than 39,000 screening complaints through its TSA Contact Center (TCC), including more than 17,000 complaints about pat-down procedures.

From October 2009 through June 2012, TSA's Office of the Executive Secretariat received approximately 4,000 complaints that air passengers submitted by mail.

From April 2011 (when it was launched) through June 2012, the agency's Talk to TSA web-based mechanism received approximately 4,500 air passenger screening complaints, including 1,512 complaints about the professionalism of TSA staff during the screening process.

However, the data from the five centralized mechanisms do not reflect the full nature and extent of complaints because local TSA staff have discretion in implementing TSA's complaint processes, including how they receive and document complaints. For example, comment cards were used in varying ways at 6 airports we contacted. Specifically, customer comment cards were not used at 2 of these airports, were on display at 2 airports, and were available upon request at the remaining 2 airports. TSA does not have a policy requiring that complaints submitted using the cards be tracked or reported centrally. We concluded that a consistent policy to guide all TSA efforts to receive and document complaints would improve TSA's oversight of these activities and help ensure consistent implementation.

TSA also uses TCC data to inform the public about air passenger screening complaints, monitor operational effectiveness of airport security checkpoints, and make changes as needed. However, TSA does not use data from its other four mechanisms, in part because the complaint categories differ, making data consolidation difficult. A process to systematically collect information from all mechanisms, including standard complaint categories, would better enable TSA to improve operations and customer service. Further, at the time of our review, TSA had not designated a focal point for coordinating agencywide policy and processes related to receiving, tracking, documenting, reporting, and acting on screening complaints. Without a focal point at TSA headquarters, the agency does not have a centralized entity to guide and coordinate these processes, or to suggest any additional refinements to the system.

To address these weaknesses, we recommended that TSA establish a consistent policy to guide agencywide efforts for receiving, tracking, and reporting air passenger screening complaints; establish a process to systematically compile and analyze information on air passenger screening complaints from all complaint mechanisms; and designate a focal point to develop and coordinate agencywide policy on screening complaint processes, guide the analysis and use of the agency's screening complaint data, and inform the public about the nature and extent of screening complaints. The Department of Homeland Security (DHS) concurred with the recommendations and indicated actions that TSA had taken, had underway, and was planning to take in response.
For example, DHS stated that TSA would review current intake and processing procedures at headquarters and in the field and develop policy, as appropriate, to better guide the complaint receipt, tracking, and reporting processes. We believe that these are beneficial steps that would address the recommendation, provided that the resulting policy refinements improve the existing processes for receiving, tracking, and reporting all air passenger screening complaints, including the screening complaints that air passengers submit locally at airports through comment cards or in person at security checkpoints.

In commenting on a draft of our November 2012 report, TSA also stated that the agency began channeling information from the Talk to TSA database to the TCC in October 2012. However, DHS did not specify in its letter whether TSA will compile and analyze data from the Talk to TSA database and its other centralized mechanisms in its efforts to inform the public about the nature and extent of screening complaints, and whether these efforts will include data on screening complaints submitted locally at airports through customer comment cards or in person at airport security checkpoints. DHS also did not provide sufficient detail for us to assess whether TSA's planned actions will address the difficulties we identified in collecting standardized screening data across different complaint categories and mechanisms.

DHS stated that the Assistant Administrator for the Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement was now the focal point for overseeing the key TSA entities involved with processing passenger screening complaints. It will be important for the Assistant Administrator to work closely with, among others, the office of the Assistant Administrator of Security Operations because this office oversees screening operations at commercial airports and security operations staff in the field who receive screening complaints submitted through customer comment cards or in person at airport security checkpoints. We will continue to monitor TSA's progress in implementing these recommendations.

TSA has several methods to inform passengers about its complaint processes but does not have an agencywide policy or mechanism to ensure consistent use of these methods among commercial airports. For example, TSA has developed standard signs, stickers, and customer comment cards that can be used at airport checkpoints to inform passengers about how to submit feedback to TSA; however, we found inconsistent use at the 6 airports we contacted. For example, customer comment cards were displayed in the checkpoints at 2 airports, while at 2 others the cards were provided upon request. However, according to TSA, passengers may be reluctant to ask for such cards.

TSA officials at 4 of the 6 airports also said that the agency could do more to share best practices for informing passengers about complaint processes. For example, TSA holds periodic conference calls for its Customer Support Managers—TSA staff at certain commercial airports who work in conjunction with other local TSA staff to resolve customer complaints and communicate the status and resolution of complaints to air passengers—to discuss customer service. However, Customer Support Managers have not used this mechanism to discuss best practices for informing air passengers about processes for submitting complaints, according to the officials we interviewed.
Policies for informing the public about complaint processes and mechanisms for sharing best practices among local TSA officials could help provide TSA reasonable assurance that these activities are being conducted consistently and help local TSA officials learn from one another about what practices work well. We recommended that TSA establish an agencywide policy to guide its efforts to inform air passengers about the screening complaint processes and establish mechanisms, particularly at the airport level, to share information on best practices for informing air passengers about the screening complaint processes. DHS concurred with the recommendation and stated that TSA would develop a policy to better inform air passengers about the screening complaint processes. We will continue to monitor TSA's progress in implementing this recommendation.

TSA's complaint resolution processes do not fully conform to standards of independence to ensure that these processes are fair, impartial, and credible, but the agency is taking steps to improve independence. Specifically, TSA airport officials responsible for resolving air passenger complaints are generally in the same chain of command as TSA airport staff who are the subjects of the complaints. While TSA has an Ombudsman Division that could help ensure greater independence in the complaint processes, the division primarily focuses on handling internal personnel matters and is not yet fully equipped to address external complaints from air passengers, according to the head of the division. TSA is developing a new process for referring air passenger complaints directly to the Ombudsman Division from airports and for providing air passengers an independent avenue to make complaints about airport security checkpoint screening. In August 2012, TSA's Ombudsman Division began addressing a small number of air passenger complaints forwarded from the TCC, according to the head of that division. TSA also began advertising the division's new role in addressing passenger screening complaints via the TSA website in October 2012. According to the Assistant Administrator of TSA's Office of Civil Rights & Liberties, Ombudsman and Traveler Engagement, the division will not handle complaints for which there exists an established process that includes an appeals function, such as disability complaints or other civil rights or civil liberties complaints, in order to avoid duplication of currently established processes. According to the Assistant Administrator, the agency also plans to initiate a Passenger Advocate Program by January 2013, in which selected TSA airport staff will be trained to take on a collateral passenger advocate role, respond in real time to identify and resolve traveler-related screening complaints, and assist air passengers with medical conditions or disabilities, among other things. It is too early to assess the extent to which these initiatives will help mitigate possible concerns about independence.

TSA officials stated that the agency is undertaking efforts to focus its resources and improve the passenger experience at security checkpoints by applying new intelligence-driven, risk-based screening procedures and enhancing its use of technology. One component of TSA's risk-based approach to passenger screening is the Pre✓™ program, which was introduced at 32 airports in 2012, and which the agency plans to expand to 3 additional airports by the end of the calendar year.
The program allows frequent flyers of five airlines, as well as individuals enrolled in other departmental trusted traveler programs—where passengers are pre-vetted and deemed trusted travelers—to be screened on an expedited basis. This program is intended to allow TSA to focus its resources on high-risk travelers. According to TSA, more than 4 million passengers have been screened through this program to date. Agency officials have reported that with the deployment of this program and other risk-based security initiatives, such as modifying screening procedures for passengers 75 and over and active duty service members, TSA has achieved its stated goal of doubling the number of passengers going through expedited screening. According to TSA, as of the end of fiscal year 2012, over 7 percent of daily passengers were eligible for expedited screening based on low risk. However, the estimated number of passengers that will be screened on an expedited basis is still a relatively small percentage of air passengers subject to TSA screening protocols each year. We plan to begin an assessment of TSA's progress in implementing the TSA Pre✓™ program in 2013.

Chairman Petri, Ranking Member Costello, and Members of the Subcommittee, this concludes my prepared remarks. I look forward to responding to any questions that you may have.

For questions about this statement, please contact Steve Lord at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Jessica Lucas-Judy (Assistant Director), David Alexander, Thomas Lombardi, Anthony Pordes, and Juan Tapia-Videla.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the findings of our November 2012 report assessing the Transportation Security Administration's (TSA) efforts to improve the air passenger screening complaints processes. TSA screens or oversees the screening of more than 650 million air passengers per year at 752 security checkpoints in more than 440 commercial airports nationwide, and must attempt to balance its aviation security mission with competing goals of efficiency and respecting the privacy of the traveling public. The agency relies upon multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. These layers focus on screening millions of passengers and pieces of carry-on and checked baggage, as well as tons of air cargo, on a daily basis. Given TSA's daily interaction with members of the traveling public, air passenger screening complaints reflect a wide range of concerns about, for example, the systems, procedures, and staff that TSA has used for screening air passengers at security checkpoints. This includes concerns related to the use of Advanced Imaging Technology and enhanced pat-down procedures.

TSA has processes for addressing complaints about air passengers' screening experience at security checkpoints, but concerns have been raised about these processes. Also, TSA is implementing a Pre✓™ program to expedite screening at security checkpoints. This statement is primarily based on our November 2012 report and, like the report, discusses the extent to which TSA has (1) policies and processes to guide the receipt of air passenger screening complaints, and uses this information to monitor or enhance screening operations; (2) a consistent process for informing passengers about how to make complaints; and (3) complaint resolution processes that conform to independence standards to help ensure that these processes are fair and impartial. As requested, the statement also describes TSA's recent efforts to make the screening process more risk-based and selective through use of TSA's Pre✓™ program.

In summary, TSA receives thousands of air passenger screening complaints through five centralized mechanisms, but does not have an agencywide policy, consistent processes, or a focal point to guide receipt and use of such information. Also, while the agency has several methods to inform passengers about its complaint processes, it does not have an agencywide policy or mechanism to ensure consistent use of these methods among commercial airports. In addition, TSA's complaint resolution processes do not fully conform to standards of independence to ensure that these processes are fair, impartial, and credible, but the agency is taking steps to improve independence. To address these issues, we made four recommendations to TSA, with which the agency concurred, and it indicated actions it is taking in response.
Finally, TSA officials stated that the agency is undertaking efforts to focus its resources and improve the passenger experience at security checkpoints by applying new intelligence-driven, risk-based screening procedures, including expanding its Pre✓™ program. TSA plans to have this program in place at 35 airports by the end of the calendar year and estimates that it has screened more than 4 million passengers to date through this program.
The Corps is the world's largest public engineering, design, and construction management agency, responsible for water resources infrastructure such as dams, levees, hurricane barriers, and floodgates in every state. Through its Civil Works program, the Corps plans, designs, and operates water resources infrastructure projects. The Civil Works program is organized into three tiers: a national headquarters in Washington, D.C.; 8 regional divisions that were established generally according to watershed boundaries; and 38 districts nationwide. In addition, the Corps maintains national and regional centers that provide technical services to Corps divisions and districts, such as support of dam safety repair projects.

The Assistant Secretary of the Army for Civil Works (ASA(CW)), appointed by the President, establishes the strategic direction, develops policy, and supervises the execution of the Civil Works program. The Corps headquarters and regional division offices primarily implement policies and provide oversight to district offices. The Corps headquarters' Dam Safety Officer (DSO), a civilian official, is responsible for all dam safety activities, including establishing policy and technical criteria for dam safety and prioritizing dam-safety-related work. The eight divisions, commanded by military officers, coordinate civil works projects in the districts within the eight respective geographic areas. The Corps districts, commanded by military officers, are responsible for planning, engineering, constructing, and managing water resources infrastructure projects in their districts as well as coordinating with the Corps' sponsors.

Most of the Corps' dams are one of two types: earthen or concrete. According to Corps data, about 68 percent of Corps dams have earthen embankments, constructed of various types of materials such as clay, silt, sand, or gravel. Another 30 percent of Corps dams are concrete dams. Dams can have various features, such as spillway gates and conduit outlets, to control water releases, as well as auxiliary spillways to divert water flows in the event of expected maximum flood conditions. (See fig. 1.)

To ensure continued safe operation, Corps dams undergo routine maintenance, such as cleaning of drains and mowing of embankments, but in some cases require major repairs, which, as defined by the Corps, are those that cost over $16 million. These repairs may include rehabilitating spillway gate equipment to safely pass excess water, building cutoff walls to prevent erosion to embankments or foundations, filling voids in embankments or foundations with grout, building shear walls to increase dam stability, increasing a dam's height to prevent overtopping, or anchoring a dam to its foundation.

Since 2005, the Corps has used a risk-informed approach to select dams for safety-related repairs. While integrating traditional engineering analyses and standards, the risk-informed approach aims to identify and prioritize the most critical dam safety risks rather than eliminate all potential risks. To that end, the Corps has developed the Dam Safety Action Classification (DSAC) system, based on a 5-point scale, to help guide key decisions for dam safety repairs. This risk classification system reflects the probability of a dam's failure and the potential consequences of failure. As of July 2015, the Corps has placed 309 dams (about 44 percent) in actionable categories (DSAC 1, 2, and 3) because the dams were determined to be at moderate to very high risk of failure.
In particular, the Corps has classified 17 dams as DSAC 1 (very high urgency), 76 dams as DSAC 2 (high urgency), and 216 dams as DSAC 3 (moderate urgency). From fiscal year 2007 to fiscal year 2016, the Corps selected 16 of these DSAC 1 and 2 dams for repairs. According to the Corps' Safety of Dams regulation, once a dam has been selected as needing repair according to its DSAC designation, the Corps is to take the following steps to study, design, and construct a dam safety repair project.

Study: Corps district officials are to conduct a dam safety modification study to determine a long-term solution. This study is to involve risk analyses, determination of potential failure modes, evaluation of alternatives to address potential failures, and development of a recommended technical solution with its estimated cost. The study also is to identify cost share sponsors and to recommend an applicable authority for cost sharing purposes (discussed later in this report) under which to implement the repair work. The results of the study are published in a dam safety modification report, which is forwarded to division and headquarters officials, including the DSO, for review and approval of recommended repairs. The Corps districts are to communicate to sponsors and the public about dam failure risks and potential repairs during the study phase. Once approved by the Corps' DSO and the ASA(CW), the cost estimate in the dam safety modification report is used as a basis to request funds from Congress for design and construction.

Design: Project design takes place at the Corps districts and dam safety production centers and involves investigation of site conditions, such as testing soils, engineering analysis, and development of design plans and specifications. In addition, further risk analyses are to be conducted, as well as expert reviews of the design. During the project's design, the Corps districts are also to communicate to sponsors and the public about their plans for conducting repairs.

Construction: Project construction, managed by district officials, is typically carried out through contracts with private companies. Construction for dam safety repairs can take multiple years and involve several contracts. To assure construction quality, the Corps districts are required to conduct regular inspections. In addition, Corps officials are to continue their outreach and communications with sponsors and the public throughout the construction period.

Sponsors share in the costs of dam safety repairs based on original congressional authorizations for dam construction or subsequent sponsors' agreements with the Corps. A wide array of entities can be cost sharing sponsors, including federal, state, and local agencies as well as private entities. Sponsors may be identified at the time of original dam construction or at a later time. Congressional authorizations or sponsors' agreements with the Corps delineate the benefits sponsors receive as well as their responsibilities and cost sharing obligations. Cost sharing terms are unique to each sponsor at each dam. Commensurate with benefits derived from use of a dam, sponsors typically pay a percentage of a dam's annual operations and maintenance costs, as well as the same percentage of the total costs of major dam safety repairs. Cost sharing percentages can range from under 1 percent, such as for small water supply users, to over 50 percent, such as for hydropower users, depending on a sponsor's agreement with the Corps.
Sponsors' payment mechanisms for dam safety repairs vary. When the Corps determines a need for dam safety repairs, it typically budgets for and funds the entire amount of the repair upfront. Sponsors, responsible for sharing in the design and construction costs of dam safety repair projects, pay their cost shares in different ways, as described below and in table 1. However, not all Corps dams have cost sharing sponsors. The federal government fully funds the repairs of those Corps dams that do not have sponsors.

Non-federal sponsors, depending on their agreement with the Corps, are to pay their cost share either on a "pay-as-you-go" basis or at the end of the project. Sponsors that are identified at the time of initial dam construction typically pay their cost share on a pay-as-you-go basis. In these situations, sponsors contribute their cost share while project design and construction are ongoing. Sponsors—typically water utilities—that enter into agreements with the Corps subsequent to the dam's initial construction have the option to pay as they go or to pay in a lump sum, with interest, at the end of the dam safety repair project, once all costs are finalized and calculated. According to Corps officials, non-federal sponsors may seek an exception to amortize their cost share payments over time following project completion. The Corps collects and tracks payments submitted by non-federal sponsors and transmits them to the U.S. Treasury.

Federal sponsors of Corps dams are the U.S. Department of Energy's four Power Marketing Administrations (PMA). PMAs sell the electrical output of federally owned and operated hydroelectric dams. PMAs market wholesale power by entering into contracts with customers, with preference given to not-for-profit public-owned utilities, to sell power at set rates. Through their rates, PMAs recover all costs associated with power production and transmission, including their cost share for dam safety repairs, which they remit directly to the U.S. Treasury. PMAs are to recover all associated power production costs within a reasonable period of time, which the Department of Energy has traditionally considered to be 50 years or less.

According to the Corps' Safety of Dams regulation, during a dam safety modification study, Corps district officials are to identify and analyze all the potential ways that a dam could fail. Such potential failure modes can include: (1) embankment or foundation erosion through seepage; (2) inability of a dam to safely pass excess water during expected maximum flood conditions (hydrologic failure mode); or (3) inability of a dam to withstand the expected maximum earthquake (seismic failure mode). Once potential failure modes, among other things, are determined, Corps district officials are to generate a dam safety modification report that reviews alternatives and recommends a technical solution to address the potential failure modes. For cost sharing purposes, the regulation requires the district to recommend in the report one of two types of cost sharing arrangements, or authorities: Major Rehabilitation authority or Dam Safety Assurance authority. The potential failure mode is the primary factor in determining the applicable authority, in addition to consideration of policy and statutory requirements.

Major Rehabilitation: According to Corps officials, this authority applies to dam safety repairs associated with typical degradation of dams over time. Under this authority, sponsors are to pay their full cost share.
For example, if a sponsor's agreed cost share is 10 percent, then the sponsor is responsible for 10 percent of the total cost of the dam safety repair project. (See table 2.) The Corps' regulation requires application of Major Rehabilitation authority if embankment or foundation erosion through seepage or instability is determined to be the potential failure mode.

Dam Safety Assurance: In certain situations, however, the Corps can apply its Dam Safety Assurance authority, which significantly reduces sponsors' cost shares. This authority, based on Section 1203 of the Water Resources Development Act (WRDA) of 1986, applies to safety-related dam modifications needed as a result of new hydrologic or seismic data or changes in state-of-the-art design or construction criteria deemed necessary for safety purposes (state-of-the-art provision). This authority reflects, in part, the availability of new information—such as current hydrologic models or seismic studies—that could indicate a dam's increased vulnerability and greater risk of failure. Application of this authority reduces a sponsor's responsibility to 15 percent of its agreed cost share, effectively reducing a sponsor's cost share obligation by 85 percent. For example, if a sponsor's agreed cost share is 10 percent, then the sponsor is responsible for 15 percent of this amount, meaning that it would be responsible for 1.5 percent of the total cost of a dam safety repair project. (See table 2.)

The final determination of cost sharing authority is reviewed through the Corps' chain of command. The Corps' DSO is to review and approve the dam safety modification report and the determination of funding authority. Subsequently, the ASA(CW) office is to review the DSO decision and determine whether it concurs. Sponsors have no formal role in the Corps' authority determination. According to Corps officials, while sponsors are typically involved in cost sharing discussions, funding authority determination is a federal responsibility and not subject to appeals from sponsors.

The Corps applied either its Major Rehabilitation or its Dam Safety Assurance authority to the 16 dams selected for dam safety repairs from fiscal year 2007 to fiscal year 2016, selecting the funding authority to address each dam's determined potential failure mode consistent with its regulation. (See app. II.) The total estimated cost for these repairs is $5.8 billion. For 11 of the 16 dams, the Corps applied its Major Rehabilitation authority. At 9 of these 11 dams, the potential failure mode was determined to be embankment or foundation erosion through seepage, and the Corps implemented dam safety repair projects under its Major Rehabilitation authority consistent with its regulation. Sponsors for these dams are to pay their full cost share, estimated at $574 million of the total $4.2 billion in repairs. For the 5 remaining dams, the Corps applied its Dam Safety Assurance authority because repairs were determined to be the result of new hydrologic or seismic data indicating the potential inability of these dams to safely pass excess water during expected maximum flood conditions or to withstand the expected maximum earthquake. The sponsors for these dams are to pay 15 percent of their cost share, which cumulatively totals an estimated $31 million of the total $1.6 billion in repairs for these dams.
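The arithmetic behind the two authorities can be summarized in a brief worked example; the $100 million total project cost is a hypothetical figure of ours, while the 10 percent agreed cost share is the report's own illustration. For a sponsor with agreed cost share p on a repair with total cost C:

\[
\text{Major Rehabilitation: } p \times C = 0.10 \times \$100 \text{ million} = \$10 \text{ million}
\]
\[
\text{Dam Safety Assurance: } 0.15 \times p \times C = 0.15 \times 0.10 \times \$100 \text{ million} = \$1.5 \text{ million}
\]

The $8.5 million difference reflects the 85 percent reduction in the sponsor's obligation noted above and helps explain why the authority determination matters so much to sponsors.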
While the Corps applied the Dam Safety Assurance authority to 5 of 16 dams in our review based on the availability of new hydrologic or seismic data, it did not apply the Dam Safety Assurance authority’s state-of-the-art provision to any of these dam safety repair projects. According to ASA(CW) officials, the Corps has not applied the state-of-the-art provision since enactment of the enabling legislation (WRDA of 1986). When asked why the Corps had not applied this provision, ASA(CW) officials said that they would consider applying the state-of-the-art provision on a case-by-case basis, but that they had never been presented with a case that they determined merited it. Additionally, ASA(CW) officials were unable to define the conditions under which the provision could apply or to provide a hypothetical example of a dam safety issue that would lead them to use it. The circumstances under which the state-of-the-art provision might apply have not been identified in the Corps regulations, and the Corps has not had a consistent policy position regarding when the state-of-the-art provision might apply. The Corps’ 1997 regulation states that dam safety repairs required due to state-of-the-art changes would be decided on a case-by-case basis, but does not identify criteria for how the cases would be selected. However, in 2011, and again in the 2014 update, the Corps’ Safety of Dams regulation discusses application of Dam Safety Assurance authority only with regard to new hydrologic or seismic data, stating that the state-of-the-art provision would not be applied. Specifically, the 2014 regulation notes the difficulty of defining the state-of-the-art provision and states that because the state-of-the-art “terminology makes it difficult to define the kinds of repairs that would be applicable, it is not used.” The same 2014 regulation states that use of the state-of-the-art provision must be decided on a case-by-case basis by the ASA(CW). Internal control standards state that information and effective communication are needed for an agency to achieve all of its objectives. Moreover, internal controls guidance states that effective communication may be achieved through clear policy. However, the Corps’ current regulation is not clear as to what is meant by “state-of-the-art design or construction criteria deemed necessary for safety purposes” in the statutory provision. Thus, this lack of clarity, coupled with the Corps’ inconsistent policy position, has hindered the Corps from applying the state-of-the-art provision in a manner consistent with other Dam Safety Assurance provisions. Without clarifying the circumstances under which the state-of-the-art provision applies and implementing the policy consistently, the Corps is at risk of not applying the full range of statutory authorities provided to it, thereby raising questions about the appropriate allocation of federal and non-federal funding for dam safety repairs. As discussed later in this report, the Corps’ inaction in setting a clear policy for a provision under which sponsors face significant financial impacts has contributed to conditions under which sponsors have asserted their own terms for use of the provision or are considering taking legal action against the Corps. In contrast, another federal agency has applied a similar state-of-the-art provision to its dam safety repairs. The U.S.
Department of the Interior’s Bureau of Reclamation (Reclamation) has a similar statutory authority enacted by the Reclamation Safety of Dams Act of 1978, which requires sponsors’ cost share at 15 percent when modifications result from new hydrologic or seismic data, or changes in state-of-the-art design or construction criteria deemed necessary for safety purposes. According to Reclamation officials, while Reclamation has not developed a definition for the state-of-the-art design or construction criteria, it has operationalized and applied the state-of-the-art provision exclusively to modify 30 dams since 1978, primarily in situations where defensive dam safety measures, such as filters and drainage mechanisms, were lacking or were not consistent with the current state of the practice. The Corps’ lack of clarity and a consistent policy position regarding the state-of-the-art provision under the Dam Safety Assurance authority has contributed to disagreements with a major sponsor and uncertainty regarding sponsor payment. In this case, Southeastern Power Administration (SEPA), the federal PMA sponsor for Center Hill (Tennessee) and Wolf Creek (Kentucky) dams, has disagreed with the Corps’ decision to repair the dams under its Major Rehabilitation authority rather than the state-of-the-art provision of the Dam Safety Assurance authority. (See fig. 2.) SEPA has asserted that the Dam Safety Assurance authority should apply to these projects. SEPA has taken this position, in part, because while dam safety repairs at Wolf Creek were originally determined to be under the Major Rehabilitation authority, Corps district officials had subsequently recommended using the Dam Safety Assurance authority based on application of the state-of-the-art provision. SEPA was aware of the district’s recommendation to change the authority determination to Dam Safety Assurance. However, the ASA(CW) ultimately did not support this recommendation, noting that erosion caused by seepage—the potential failure mode identified at these dams—has consistently and categorically been addressed through application of the Major Rehabilitation authority. According to SEPA officials, the conflicting actions of Corps district and headquarters officials on authority determination created uncertainty for SEPA regarding the Corps’ position. SEPA stated that the need for repairs to Center Hill and Wolf Creek dams is based on state-of-the-art design and construction practices and noted that the Corps consulted with recognized international experts to design the cutoff walls being built at these dams to address the effects of seepage. According to SEPA officials, current repairs based on state-of-the-art practices are being made at these two dams, in part, because previous repair efforts did not adequately address site conditions contributing to seepage. Conversely, Corps officials told us that seepage naturally occurs at all dams and periodically needs to be addressed, such as through implementation of repair projects. Moreover, according to Corps officials, the “karst” limestone upon which the Center Hill and Wolf Creek dams are built is prone to increasing seepage over time because of the dissolution of the soluble rock foundation. Concrete cutoff walls put in place at Center Hill and Wolf Creek dams under current projects were designed to consider these effects and, according to Corps officials, constructed as permanent seepage control measures.
Because of the high cost of repairs to these two dams—estimated at about $958 million, for which SEPA’s share under its original congressional authorization is about 50 percent—SEPA officials have expressed concern about the agency’s ability to recover costs if the projects are considered under the Major Rehabilitation authority. Under this authority, SEPA’s cost to recover for both dams is estimated at about $482 million. Officials said that if SEPA were obligated to recover this amount, its hydropower rates could become prohibitively expensive. As a result, according to these officials, SEPA’s customers might terminate their contracts and acquire energy via more economical options, such as energy derived from natural gas or coal. If the Corps were to apply its Dam Safety Assurance authority to these repairs under, for example, the state-of-the-art provision, SEPA’s cost to recover would be reduced to about $72 million (an 85 percent reduction). The outcome related to the disagreement between the Corps and SEPA has significant implications given that mitigating the effects of seepage, as evidenced by our review, is a common reason for making safety-related repairs. In recent rate-making notices, SEPA has based its proposed rates on the Dam Safety Assurance authority for dam safety repairs at Center Hill and Wolf Creek dams. This action signals SEPA’s position that it should pay the reduced cost share (about $72 million) provided under this authority, and without resolution, recovering federal outlays for funding the majority of project costs (about $410 million) remains uncertain. In moving forward to resolve this disagreement, it is important that potential impacts on aging dam infrastructure, hydropower rates, and the federal budget be considered in a coordinated, strategic approach. SEPA’s rate actions could set a precedent and create uncertainty for the federal government if sponsors at other dams also assert that the state-of-the-art provision applies to projects that mitigate the effects of seepage. For example, the Corps determined that repairs to mitigate the effects of seepage were needed at 9 of the 16 dams we reviewed, with a total estimated cost of about $4 billion. If other sponsors at these dams were to follow SEPA’s example, the federal government could potentially receive reduced cost share payments from these sponsors. Further, in light of its aging infrastructure, more Corps dams could require seepage-related repairs in the future. A policy that clarifies the Corps’ application of the state-of-the-art provision could help to minimize potential disagreements with sponsors and lead to greater certainty concerning the federal government’s and project sponsors’ cost sharing obligations. The Corps’ Safety of Dams regulation requires Corps districts to engage sponsors by notifying them during the study phase about the dam safety repair project and their estimated financial responsibility. The regulation further states: "Requirements for cost sharing and the identification of non-Federal sponsors (or partners) must occur very early in the study process to ensure that the non-Federal interests are willing cost share partners. Uncertainty about sponsorship and the lack of meaningful sponsor involvement in the scope and extent of dam safety repairs can cause delays to the dam safety modification work."
As mentioned previously, under the Corps’ regulations, Corps district officials are also expected to communicate with sponsors throughout project design and construction as well as officially notify sponsors of their final cost share payment upon the project’s completion. Additionally, internal control standards state that managers should effectively communicate with external stakeholders that may have a significant impact on the agency achieving its goals. While the Corps’ Safety of Dams regulation identifies when communication with sponsors is to occur, it does not provide clear guidance on how to effectively communicate with sponsors to establish and implement cost sharing agreements. Based on our discussions with state, local, and private sponsors of the dams we reviewed, we found that the Corps has generally established good relationships with these non-federal sponsors and communicated project status information; however, some Corps districts were not timely or effective in communicating and reaching agreement on cost sharing responsibilities. Of the 16 dam safety repair projects we reviewed, 9 had sponsors, and—as discussed below—at 3 of the 9 dams the Corps did not communicate with the sponsors in a manner that would ensure their meaningful involvement and willingness to be cost sharing partners, as required by its regulation. According to the agreements, these sponsors are to pay their cost share to the Corps, which remits these funds to the U.S. Treasury. However, at least three sponsors have expressed concerns about and indicated resistance to paying their determined cost shares, estimated to be about $3.1 million. Because the Corps does not have clear guidance to ensure effective communications with sponsors, it did not adequately communicate or reach agreements on cost sharing responsibilities with these sponsors. As a result, these sponsors’ plans for paying their cost shares are uncertain, leaving the recovery of federal outlays from these sponsors similarly uncertain. Tuttle Creek Dam: At Tuttle Creek Dam (Kansas), the Corps identified and contacted one water supply sponsor during the study phase (2000–2002) of a dam stabilization project and notified the sponsor of its estimated cost share, but otherwise did not effectively engage the sponsor throughout the project to ensure the sponsor’s cost share payment. In a 2002 letter to the Corps, the sponsor asserted its position that it should not be required to pay for repairs to stabilize the dam, a repair that would enable the dam to withstand the expected maximum earthquake. In the sponsor’s opinion, the sponsor was not responsible for sharing costs related to changes in the Corps’ design standards or to address what the sponsor felt were design flaws. In 2003, the Corps responded to the sponsor reiterating the sponsor’s responsibility for sharing in the costs of the project. The Corps’ written response included its estimate of the sponsor’s cost share, approximately $770,000, and described payment options: pay-as-you-go or lump sum at the end of construction. According to the sponsor, it did not raise any further objections and, in a subsequent telephone conversation with Corps district officials, indicated its preference to use the pay-as-you-go option because it would be unable to afford a lump sum payment. Since 2003, the sponsor received briefings on the status of the project; however, the Corps did not follow up or otherwise engage the sponsor to pay incrementally while construction was ongoing.
Construction was completed in 2010, but as of October 2015, the Corps had not requested payment or notified the sponsor of its final cost share. Corps officials told us that they are preparing a billing letter to send to the sponsor. Rough River Dam: At Rough River Dam (Kentucky), the Corps’ 2012 dam safety modification report stated that the project to grout and construct a 1,700-foot cutoff wall would be completed at full federal expense with no cost sharing sponsors. However, subsequent reviews by Corps headquarters identified water supply contract holders, and in 2013, the Corps notified three water supply sponsors of their cost sharing responsibilities for the dam safety repair. Due to uncertainty in identifying sponsors and delays in executing agreements with them, as discussed below, the Corps may experience challenges collecting these sponsors’ cost shares when the project is finally complete, estimated to be no later than 2021. Specifically: (1) One sponsor has had a water use agreement with the Corps since 1978 but has not drawn water from the reservoir since 2007. In 2013, the Corps requested that the sponsor remove its water intake structure from the reservoir. However, in the same year, as mentioned previously, the Corps notified this sponsor of its cost sharing responsibilities for the dam safety repair project. In May 2015, the Corps signed a termination agreement with the sponsor under which the sponsor will not share the costs of the project. While the Corps is not expecting to collect a cost share, its interaction with the sponsor indicates a lack of effective communication. (2) Although the Corps notified a second sponsor of its cost sharing responsibilities in 2013, this sponsor currently does not have a cost sharing responsibility for the dam safety repair project because the sponsor paid upfront for “major capital replacement” as part of its 1966 agreement with the Corps. This provision of the agreement is to expire in April 2016, and according to Corps officials, a supplement to the agreement is being developed. The supplement would include this sponsor’s cost sharing responsibility in the current project. However, we were not able to reach this sponsor to confirm its intention to be a cost sharing sponsor, and it remains uncertain whether the Corps should expect a future agreement to cover current project costs. (3) The third sponsor has been drawing water from the reservoir since 2002, when the sponsor negotiated terms of its water use with the Corps under a draft contract. Despite drawing up to 1.6 million gallons per day from the reservoir, the sponsor has not paid the Corps for water use and operations and maintenance expenses because a contract between the parties has not been executed. As a result, despite notifying the sponsor of its cost sharing responsibilities in the dam safety repair project in 2013, the Corps has no mechanism to compel payment from this sponsor. According to the sponsor, it has tried to finalize the 2002 contract numerous times, but the Corps did not finalize the agreement in any of these instances. In July 2015, a Corps district official told us that Corps headquarters is reviewing the negotiated agreement; however, uncertainty about cost sharing exists until all parties execute a contract. Center Hill Dam: At Center Hill Dam (Tennessee), the Corps identified three water supply sponsors during the study phase but generally had minimal interactions with them to communicate cost sharing estimates and responsibilities.
While two sponsors accept their cost sharing responsibilities and estimated cost sharing amounts, one sponsor disagrees with the Corps’ application of the Major Rehabilitation authority. Similar to the argument made by SEPA, which is also a sponsor at this dam, this water supply sponsor stated that the repairs being made to address the effects of seepage at the dam incorporate state-of-the-art design and construction practices and that the Corps should apply the state-of-the-art provision, thereby reducing this sponsor’s cost share. Under the Major Rehabilitation authority, this sponsor has a $1.9 million cost share. According to this sponsor, a municipal water utility, covering this cost would require raising water rates by approximately 50 cents per household per month. According to a sponsor official, the sponsor is contemplating a legal challenge if the Corps does not apply the state-of-the-art provision to lower its cost share. The Corps has maintained its position that application of its Major Rehabilitation authority is appropriate for this dam safety repair. Considering the significant cost of dam safety repair projects, and the number of dams that could need repairs in the future, implementing a dam safety program as effectively as possible is important. This implementation would include adequately defining conditions for key policy determinations to ensure the appropriate allocation of federal versus non-federal funds for dam safety repairs. However, the fact that the Corps has not developed policy guidance on the types of circumstances under which the state-of-the-art provision of its Dam Safety Assurance authority might apply, and has not had a consistent policy position, limits the Corps’ ability to ensure the effective implementation of the dam safety program. Without clarifying the circumstances under which the state-of-the-art provision applies and implementing the policy consistently, the Corps is at risk of not applying the full range of statutory authorities provided to it. Moreover, because of the financial implications of its authority determinations for sponsors, the Corps’ inaction in setting a clear policy for this provision contributes to conditions under which it is potentially exposed to adverse actions of these sponsors. The Corps’ engagement of project sponsors is critical to the successful implementation of dam safety repair projects not only to ensure the continued provision of benefits, such as water supply and hydropower generation, but also to recover federal outlays used to fund projects upfront. Because the Corps has not always effectively communicated with or engaged sponsors, some are deriving benefits from dams absent an agreement with the Corps, while other sponsors that have agreements either have not been notified by the Corps of their final cost share responsibility or dispute the Corps’ cost sharing determination and may raise a legal challenge. While the Corps’ Safety of Dams regulation provides guidance to district offices for communicating with sponsors, greater clarity about effective communication requirements to establish and implement agreements with sponsors would help the Corps ensure equity in its treatment of sponsors and make certain that the federal government receives expected cost share payments. To improve cost sharing for dam safety repairs, we recommend that the Secretary of Defense direct the Secretary of the Army to direct the Chief of Engineers and Commanding General of the U.S.
Army Corps of Engineers to clarify policy guidance: (1) on the types of circumstances under which the state-of-the-art provision of the Dam Safety Assurance authority might apply to dam safety repair projects, and (2) for district offices to effectively communicate with sponsors to establish and implement cost sharing agreements during dam safety repair projects. For all dams, including the three dams named in the report, this would involve communicating estimated and final cost sharing amounts, executing agreements, and engaging sponsors to ensure cost share payment. We provided a draft of this report to the Department of Defense (DOD) for official review and comment. In its written comments, which are reprinted in appendix III, DOD concurred with our recommendations and described the actions it plans to take within the next 18 months. In response to our recommendation to clarify policy guidance on the types of circumstances under which the state-of-the-art provision of the Dam Safety Assurance authority might apply, the department stated that the ASA(CW) will clarify the usage of the provision within the next 18 months. Regarding our recommendation to clarify policy guidance for district offices to communicate with sponsors to establish and implement cost sharing agreements, DOD stated that ASA(CW) will review and clarify policy, guidance, and business practices related to communication with sponsors within the next 18 months. With respect to the three dam safety repair projects identified in our report, the department stated that the ASA(CW) will engage with their sponsors to establish a path forward to recouping the federal investment in the Corps’ work, including finalization of water supply agreements. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense, the Secretary of the Army, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-2834 or rectanusl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. The table below lists all sponsors we interviewed for this report. Not all sponsors for the dams included in our review were available for interview. Additionally, because the Southeastern Power Administration (SEPA) is a major cost sharing sponsor, we interviewed the Tennessee Valley Public Power Association, an organization that represents 155 local utilities across seven states that purchase wholesale power marketed by SEPA. In addition to the contact named above, Michael Armes, Assistant Director; Irina Carnevale, Analyst in Charge; Geoffrey Hamilton; Georgeann Higgins; Vondalee Hunt; Davis Judson; SaraAnn Moessbauer; Joshua Ormond; and Amy Rosewarne made key contributions to this report.
The Corps operates over 700 dams, which are aging and may require major repairs to assure safe operation. At some dams, sponsors that benefit from dam operations share in the cost of operating and repairing these dams based on original congressional authorizations for dam construction or subsequent agreements with the Corps. Since 2005, the Corps initiated an estimated $5.8 billion in repairs at 16 dams with urgent repair needs; sponsors are to share repair costs at 9 of these dams. GAO was asked to examine cost sharing for Corps dam safety repairs. This report examines how, over the last 10 years, the Corps (1) determined cost sharing and (2) communicated with sponsors regarding cost sharing. GAO reviewed relevant laws and Corps regulations; analyzed dam safety projects' documentation for the 16 dams the Corps selected for repairs since 2005; conducted site visits to a non-generalizable sample of three dams based on cost share determinations and range of sponsors; and interviewed Corps officials and sponsors. The U.S. Army Corps of Engineers (Corps) determined sponsors' (such as water utilities and hydropower users) share of costs for dam safety repairs pursuant to regulations, but did not apply a provision in a statutory authority that reduces sponsors' share. The Corps determined these cost shares based on analyses of the potential ways each dam could fail, and in consideration of statutory requirements regarding which type of cost sharing arrangement, or authority, would apply given these possible failure scenarios. The Corps applied its Major Rehabilitation authority at 11 of the 16 dam safety repair projects GAO reviewed for repairs associated with typical degradation of dams, such as embankment or foundation erosion through seepage. Under this authority, sponsors are to pay their full agreed-upon cost share of the repair. The Corps applied its Dam Safety Assurance authority at 5 of the 16 dam safety repair projects GAO reviewed for repairs that resulted from the availability of new hydrologic or seismic data. Under this authority, sponsors' agreed-upon cost share is reduced by 85 percent. The Corps did not apply one provision of its Dam Safety Assurance authority—related to repairs needed due to changes in state-of-the-art design or construction criteria (state-of-the-art provision)—since the enactment of the enabling legislation in 1986. Since that time, the Corps has not provided guidance on the types of circumstances under which the state-of-the-art provision applies and has not had a consistent policy position regarding the provision. For example, the Corps' latest regulation states in one section that the state-of-the-art provision will not be applied because of the difficulty in defining terminology, while another section allows for consideration on a case-by-case basis. Without clarifying the circumstances under which the state-of-the-art provision applies, and implementing the policy consistently, the Corps is at risk of not applying the full range of statutory authorities provided to it, contributing to conditions under which, as discussed below, sponsors have taken actions opposing the Corps. In GAO's review of 9 dams with sponsors, the Corps did not communicate with or effectively engage all sponsors. For example, a federal sponsor that markets hydropower generated at two dams disagreed with the Corps' decision to not apply the state-of-the-art provision of its Dam Safety Assurance authority, which, if used, would reduce this sponsor's cost share by about $410 million. 
This sponsor has proceeded to set its power rates in anticipation of paying the reduced cost share, creating uncertainty for the recovery of federal outlays for repairs. In addition, GAO found the Corps was not effective in reaching agreement with other sponsors on cost-sharing responsibilities at three dams because it did not have clear guidance for effectively communicating with sponsors. For example, the Corps did not engage a sponsor to ensure cost share payment at one dam and, at another dam, delayed executing agreements that would ensure sponsors' cost shares. Because the Corps did not effectively engage these sponsors, some are deriving benefits absent agreements with the Corps, while others that have agreements have not been notified of their final cost-sharing responsibility. As a result, these sponsors' cost share payments (about $3.1 million) are uncertain. GAO recommends that the Corps clarify policy guidance on (1) usage of the state-of-the-art provision and (2) effective communication with sponsors to establish and implement cost sharing agreements for all dams, including the three named in this report. The Department of Defense concurred with GAO's recommendations.
When disasters such as floods, tornadoes, or earthquakes strike, federal, state, and local government agencies coordinate to provide assistance to disaster victims. SBA, through its Disaster Loan Program, is part of this concerted effort. In the event of a disaster, SBA, the Federal Emergency Management Agency (FEMA), and other government agencies join together to conduct a preliminary damage assessment to estimate the disaster’s physical damage to the affected region. Among other criteria, if there is extensive physical damage, the governor of the affected state can request that the U.S. President declare that a major disaster or emergency situation exists, in which case federal assistance is made available to disaster victims, and FEMA takes the lead in coordinating response and recovery efforts. The presidential disaster declaration specifies the area that is eligible for federal assistance, referred to as the “immediate” disaster area in this report. In addition, SBA provides certain loans to disaster victims in the counties adjacent to the immediate area; we refer to these counties as the “contiguous” disaster area. In the immediate area of the disaster, homeowners, renters, nonprofit organizations, and nonfarm businesses of all sizes are eligible to apply for SBA loans for the repair and replacement of uninsured physically damaged property. In both the immediate and contiguous areas of the disaster, small businesses with no credit available elsewhere are eligible to apply for loans to cover economic losses. Once a declaration has been made, officials from one of SBA’s four Disaster Area Offices—located in California, Georgia, New York, and Texas—arrive at the disaster site to begin making preparations to serve disaster victims. According to SBA’s procedures, disaster loan officials secure office space—sometimes in FEMA-operated Disaster Recovery Centers for presidential declarations—and begin meeting with victims to explain the disaster loan process, issue loan applications, and, if requested, assist victims in completing applications. Appendix II summarizes the series of steps involved in accepting, reviewing, approving or declining, and disbursing disaster loans. SBA provides loans to households and businesses without credit available elsewhere at a maximum rate of 4 percent and up to a 30-year term. For households or businesses with credit available elsewhere, SBA provides loans at a maximum rate of 8 percent and, for businesses, up to a 3-year term. Business loans are available up to $1.5 million, loans for physical damage to homes are available up to $200,000, and loans for the repair or replacement of personal property are available up to $40,000 (a rough sketch of the payment arithmetic implied by these terms appears below). As with other federal programs, the performance of SBA’s Disaster Loan Program is reported in accordance with the Government Performance and Results Act (GPRA) of 1993. The purpose of GPRA is to shift the focus of federal management and decisionmaking from a preoccupation with the number of tasks completed or services provided to the real differences the tasks or services make to the nation or individual taxpayer. GPRA requires agencies to set multiyear strategic goals in their strategic plans and corresponding annual goals in their performance plans, measure performance toward the achievement of those goals, and report on their progress in their annual performance reports.
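As a rough illustration of how the loan terms described above translate into borrower payments, the sketch below assumes standard level monthly amortization; the report does not describe SBA’s actual payment schedule, and the $50,000 example figure is hypothetical.

```python
# Rough illustration of the disaster loan terms above, assuming level monthly
# amortization (an assumption; the report does not describe SBA's payment
# schedule). Only the 4 percent maximum rate and 30-year maximum term come
# from the report; the example principal is hypothetical.

def monthly_payment(principal, annual_rate, years):
    """Level monthly payment for a fully amortizing loan."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of monthly payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# A hypothetical $50,000 home-repair loan at the 4 percent maximum rate
# over the maximum 30-year term:
print(round(monthly_payment(50_000, 0.04, 30), 2))  # roughly 239 dollars/month
```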
The strategic plans, which cover a period of at least 5 years, are the starting point in setting annual goals for programs and in measuring progress toward achieving those goals. Final annual performance plans, first required for fiscal year 1999, are sent to the Congress soon after the transmittal of the President’s budget, and provide a direct linkage between an agency’s longer-term goals and mission and day-to-day activities. Related annual performance reports describe the degree to which performance goals were met. According to Office of Management and Budget (OMB) guidance, strategic goals and performance goals in annual plans may be identical. According to GPRA, if a performance goal becomes impractical or infeasible to achieve, the agency is to explain in the performance reports why that is the case and what legislative, regulatory, or other actions are needed to accomplish the goal, or whether the goal ought to be modified or discontinued. Table 1 lists GPRA requirements for each of these documents. Both the strategic plan and the performance plan describe the relationship between a program’s goals, outputs, and outcomes. As noted previously, according to OMB guidance, outputs are the level of activity that can be produced or provided over a given period of time or by a specific date. Outcomes are the intended results, effects, or consequences that occur from carrying out program activities. In the case of the Disaster Loan Program, SBA has described the outputs as disaster loans to individuals and businesses, while program outcomes include restored housing and increased survival of businesses. OMB guidance allows agencies to divide outcomes into two categories: end and intermediate outcomes. End outcomes are the results of programs and activities compared to their intended purpose. Intermediate outcomes show progress toward achieving end outcomes. These outcomes are often required for programs when end outcomes are not immediately clear, easily delivered, or quickly achieved. OMB guidance indicates that performance plans should include measures of outcomes when the outcomes can be achieved during the fiscal year covered by the plan. Otherwise, the guidance recognizes that the performance plans will predominantly include measures of outputs rather than outcomes. In addition to OMB guidance, SBA program managers can obtain guidance in the preparation of performance goals and measures from GAO and, more recently, from an SBA primer. In the weeks and months following the terrorist attacks, SBA and the Congress faced the challenge of responding to the lingering effects of the attacks and subsequent federal actions on small businesses throughout the country. SBA responded first in Lower Manhattan, meeting with potential borrowers within 2 days of the attacks. Its response expanded as areas near the site of the attack on the Pentagon and more of the New York City area were designated disaster areas. Ultimately, SBA helped small businesses around the country with disaster lending. After small businesses raised concerns about the Disaster Loan Program’s ability to help businesses recover from the attacks, SBA and the Congress modified the program, raising loan limits and deferring interest payments, expanding eligibility for economic injury loans to small businesses around the country, modifying its size standards for small businesses, expediting its loan approval and disbursement processes, and providing translators for loan applicants.
By the end of fiscal year 2002, SBA approved more than 9,700 loans for a total of $966 million to assist in the recovery efforts of September 11 victims nationwide. SBA’s response to the terrorist attacks began September 11, when SBA officials arrived in Lower Manhattan to begin coordinating the agency’s efforts. The President declared the attack on the World Trade Center a major disaster on September 11. Unlike most of the disasters SBA had been involved in, the economic effects of the terrorist attacks were felt throughout the country. SBA’s initial disaster area in New York City and New Jersey eventually expanded to include additional counties in New York, New Jersey, Connecticut, Massachusetts, and Pennsylvania. On September 21, the President declared the Pentagon attack a major disaster, establishing counties in Maryland and Virginia and parts of the District of Columbia as disaster areas. As the United States began to deploy military personnel in response to the terrorist attacks, small businesses nationwide affected by the loss of employees called up as military reservists were eligible to apply for a disaster loan under the Military Reservist Economic Injury Disaster Loan (EIDL) program. As discussed later in this report, small businesses across the nation that were adversely affected by the lingering effects of the attacks and subsequent government action, such as airport closings and the precipitous drop in tourism, were also eligible to receive disaster loans under SBA’s Expanded EIDL program. In essence, the entire country was deemed a disaster area. As shown in figure 1, more than half the loans went to small businesses outside the area of the attack sites in New York City and at the Pentagon, with businesses in Florida and California receiving the second and third largest share of loans. In general, businesses beyond the immediate sites of the attacks received slightly more than those close by, in part because businesses outside New York City did not have access to the additional resources that were available there. As shown in figure 2, the loans were spread among industries, with no single type of business accounting for most of the funds. The manufacturing sector received the largest amount of funds. Other major industries receiving the most loan funds were professional, scientific, and technical services; transportation and warehousing; wholesale trade; and accommodation and food services. By the end of fiscal year 2002, SBA approved more than 9,700 home and business loans totaling $966 million for victims of the September 11 attacks. The agency expects to disburse $924 million—or 96 percent of the amount approved—due to loan increases, decreases, and cancellations. Individual loan disbursement amounts range from $300 to $1.5 million. Eleven percent of September 11 loan disbursements were for $50,000, the most frequently disbursed amount. Appendix IV presents more details on SBA’s September 11 disaster lending. In the weeks and months following the terrorist attacks, small business owners complained to the Congress about SBA’s Disaster Loan Program. Small business owners’ complaints, which SBA officials regarded as valuable feedback, involved issues such as (1) the effect of the attacks on small businesses nationwide, (2) SBA’s communication with applicants with low English proficiency, (3) size standards for small businesses, (4) the loan underwriting criteria, and (5) the time required to receive loan approval.
These complaints prompted SBA and the Congress to make several modifications to the Disaster Loan Program for September 11 victims, which we discuss in the following sections. Figure 3 provides a timeline of those changes; see appendix III for a summary of regulatory and statutory changes. Small businesses complained that eligibility for SBA loans was limited to firms located within the declared disaster areas, yet the September 11 terrorist attacks had caused economic injury to small businesses nationwide. Small business owners from across the nation, representing small airports as well as aircraft maintenance, travel, and tourism firms, reported losses in revenue as a result of the attacks, which forced them to furlough or terminate numerous employees. These small businesses identified SBA as a potential source of assistance to help them recover from the economic injury caused by the attacks. In response to these concerns, in October 2001, SBA issued regulations to make economic injury disaster loans available to small businesses nationwide, an unprecedented change to the Disaster Loan Program, according to SBA officials. SBA’s Expanded EIDL program enabled businesses outside the declared disaster areas to apply for loans to meet ordinary and necessary operating expenses that they were unable to meet due to the attacks or related action taken by the federal government between September 11 and October 22, 2001. Small businesses in New York City also complained that the application process was particularly confusing and time-consuming for applicants with low English proficiency. To address these concerns, SBA printed informational packets in other languages, such as Spanish and Chinese, and provided on-site multilingual staff who could speak Mandarin Chinese, Croatian, Arabic, and Spanish; the agency was also prepared to send employees with additional language capabilities to New York City. Small businesses, such as travel agencies, also argued that existing size standards—guidelines used to determine whether a firm was a small business on the basis of its annual revenue or number of employees—were overly restrictive. In February 2002, SBA modified the size standards for all September 11 loan applicants, allowing them to take advantage of recent inflation-based adjustments. In addition, in March 2002, SBA increased the threshold specifically for travel agencies adversely affected by the attacks from $1 million to $3 million in annual revenues. In July 2002, SBA began to apply this increased size standard to all travel agencies, not just those affected by the terrorist attacks. In commenting on a draft of this report, SBA officials noted that the agency planned to increase the size standard for travel agencies generally, but applied that change sooner for travel agencies affected by the attacks. Small businesses affected by the terrorist attacks also complained that SBA’s underwriting criteria were too restrictive. For example, two small business owners objected to SBA’s requirement for collateral for their loans. They testified that SBA withdrew their applications because they would not use their homes as collateral. They argued that it was too risky to use their homes as collateral, especially since the survival of their businesses was uncertain. A New York Small Business Development Center official also questioned the appropriateness of SBA’s disaster loan underwriting criteria.
He said that SBA should account for the location of the businesses affected by the attacks—New York City—where some factors relating to the high cost of doing business fall outside the norms. While SBA approved millions of dollars in loans, 52 percent of the loan applications were withdrawn or declined. SBA officials said that the agency makes every effort to approve each application by applying more lenient credit standards than private lenders. However, the officials said that they adhered to their credit standards to minimize losses and program costs. SBA data indicate that the 52-percent rate for withdrawing and declining September 11-related loan applications was not out of line when compared with other disasters or with private lenders. By comparison, one bank in New York City reported a 42-percent decline rate for September 11-related loans, while another bank reported an 80-percent decline rate. The primary reasons SBA identified for withdrawing September 11 loan applications were that no Internal Revenue Service (IRS) record, which could provide independent documentation of the applicants’ income, was found and that the applicant failed to furnish additional information requested by SBA. According to SBA officials, the most common reasons for declining September 11 loan applications were the applicant’s inability to repay the loan and unsatisfactory credit. According to SBA, these are also the primary reasons that nearly two-thirds of all SBA disaster loan applications in fiscal year 2001 were withdrawn or declined by SBA. Applicants complained that the elapsed time between submitting an application and loan approval was too long. SBA responded to these complaints by implementing procedures in October 2001 to expedite two stages of the process—loan application processing and disbursement of loan funds. To expedite loan processing, loan officers calculated economic injury loan amounts based on the applicant’s annual sales and gross margin, instead of conducting a more extensive needs analysis. As of the end of fiscal year 2002, on average, SBA processed September 11 business loans in 13 days, compared with 16 days for disaster assistance business loans processed in fiscal year 2001. To expedite disbursement of funds to September 11 victims in the World Trade Center and Pentagon disaster areas, SBA decreased the amount of documentation needed to disburse up to $50,000. Last, the Niagara Falls Disaster Area Office (DAO) made extensive use of printing selected loan documents in the field, enabling field staff to schedule loan closings within 1 or 2 days of the loan approval. SBA made initial September 11 loan disbursements within about 2 days of receipt of closing documents, compared with 3 days for initial disbursements for other disaster assistance loans, according to agency officials. See appendix II for a summary of the steps in processing SBA disaster loans.
The Emergency Supplemental Act of 2002 also created the Supplemental Terrorist Activity Relief (STAR) Program, which provided assistance to small businesses affected by the terrorist attacks through the 7(a) loan guaranty program, which is not part of the Disaster Loan Program. The 7(a) program is intended to serve small business borrowers who cannot otherwise obtain financing under reasonable terms and conditions from the private sector. Under this program, private-sector lenders provide loans to small businesses, and SBA guarantees a portion of those loans. Under the STAR program, SBA reduced the ongoing fee charged to lenders on new 7(a) loans from 0.50 percent of the outstanding balance of the guaranteed portion of the loan to 0.25 percent (the fee arithmetic is sketched below). The fee reduction for lenders is the key feature of the STAR program; SBA officials anticipate that by making 7(a) loans more cost-effective for lenders, the program will lead lenders, in turn, to make more small business loans and to share the cost savings with their borrowers. As of the end of fiscal year 2002, SBA guaranteed about 4,700 STAR loans for $1.8 billion. (See app. III for a comprehensive list of modifications made to SBA’s Disaster Loan Program for September 11 victims.) SBA officials believed that many of the complaints about the disaster program resulted from the mismatch between victims’ expectations of SBA’s disaster program and the nature of the program. For example, when some victims were told that they could receive “assistance” from SBA, they assumed that the assistance would be in the form of grants instead of loans. SBA officials noted that the media usually does not draw distinctions among FEMA grants, SBA loans, and other forms of assistance available. SBA officials told us that they tried to minimize the public confusion about the nature of the assistance available from SBA by working closely with the media and public officials so that disaster victims would receive accurate information about SBA assistance.
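To illustrate the STAR ongoing-fee change described above, here is a minimal sketch. Only the 0.50 and 0.25 percent fee rates come from this report; the example loan balance, the assumed 75 percent guaranty share, the assumption that the fee is assessed annually, and all names are hypothetical.

```python
# Illustrative sketch of the STAR ongoing-fee reduction described above.
# Only the two fee rates come from the report; the example balance, the
# assumed guaranty share and annual assessment, and all names are hypothetical.

PRE_STAR_FEE_RATE = 0.0050   # 0.50 percent of the outstanding guaranteed balance
STAR_FEE_RATE = 0.0025       # 0.25 percent under the STAR program

def ongoing_fee(outstanding_balance, guaranteed_share, fee_rate):
    """Fee charged to the lender on the guaranteed portion of a 7(a) loan."""
    guaranteed_balance = outstanding_balance * guaranteed_share
    return guaranteed_balance * fee_rate

# A hypothetical $500,000 outstanding loan with an assumed 75 percent guaranty:
before = ongoing_fee(500_000, 0.75, PRE_STAR_FEE_RATE)  # 1875.0
after = ongoing_fee(500_000, 0.75, STAR_FEE_RATE)       # 937.5
print(before, after, before - after)  # the lender's fee is cut in half
```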
On the basis of our review of the 2003 performance plan, we found that, as a group, the measures SBA currently uses to assess performance—the current measures (table 2, measures 4 to 9)—continue to have numerous limitations, despite the guidance provided in SBA’s performance primer. First, the three output measures do not capture the notable progress the program has made in improving its loan processing—improvements that ultimately benefit disaster loan applicants and borrowers, such as better staffing processes and management of staff duties. Second, two of the three outcome measures are actually output measures, and the third—a customer survey—has an important limitation. Third, other than the customer survey, SBA does not have measures to assess the intermediate or end outcomes of its Disaster Lending Program. Officials from SBA’s Disaster Area Offices questioned whether the three output measures—field presence within 3 days of a disaster declaration, processing loan applications within 21 days, and disbursing initial loan amounts within 5 days of receiving the closing documents—were appropriate indicators of timely service to disaster victims since they did not, for example, capture recent program improvements. SBA has had a 98-percent success rate in meeting the target for establishing a field presence each fiscal year since 1998. In light of this fact, one official characterized this measure as artificial and noted that it does not drive staff to improve their performance. Officials from the area offices said that improvements in planning, interagency coordination, and technology now enable them to have staff onsite and preparing to assist disaster victims within 1 day of a disaster declaration. For example, field coordinators in two offices recently developed a database that tracks the level of staffing and other resources used to respond to various types of disasters. The coordinators used this information to help them more efficiently determine the resources required to respond to new disasters. Such preparedness enabled SBA officials to be in Lower Manhattan preparing to serve disaster victims the same day as the September 11 attacks. According to DAO staff, if there are delays in establishing a field presence, it is generally because SBA is waiting for decisions from state officials. SBA data and comments from DAO officials raise questions about the appropriateness of the second output measure—processing loan applications within 21 days of receipt (table 2, measure 5). One official suggested that providing timely, or well-timed, assistance does not always mean providing assistance in the shortest period of time. Rather, providing timely assistance means providing it when the disaster victims need it. While the 21-day measure does capture the elapsed time for multiple loan processing steps, the current target for this measure does not reflect improvements in past performance. The target was set at 70 percent for fiscal year 2000 and 80 percent for fiscal year 2001, and SBA’s performance significantly exceeded this target each year. Moreover, the actual time required for processing averaged 13 days in fiscal year 2001 and 12 days in fiscal year 2002. In fiscal year 2001, as indicated earlier, SBA’s average processing time for business loans was about 16 days. Home loans, which according to DAO officials are less complex, were processed during this period in an average of about 12 days.
According to SBA data, the average processing time for both business and home loans improved in fiscal year 2002. The average loan processing time for business loans in fiscal year 2002 was about 13 days. The average time required to process the September 11 business loans was also about 13 days. The average processing time for the simpler home loans in fiscal year 2002 was about 10 days. Thus, SBA exceeded its performance target for both of these measures in fiscal year 2002. DAO officials attributed their faster processing times to several agencywide improvements that have expedited loan processing. For example, in the past SBA relied on hiring new and previously employed temporary staff to help permanent personnel process loans. This strategy required DAO staff to train significant numbers of new temporary staff on SBA loan processing procedures with each new disaster. In 2000, SBA implemented the Disaster Personnel Reserve Corps. Each DAO now has a list of reserve corps members who are already trained in SBA procedures and potentially available to assist in responding to disasters. According to DAO staff, utilizing the corps members enables SBA to expedite processing by allowing temporary staff to begin processing loans immediately, because reservists are recruited and trained prior to the occurrence of the disaster. According to one DAO official, using the reserve corps helped her office attain the 21-day processing goal in fiscal year 2001. DAO staff also attributed faster loan processing to increased automation. Although, according to DAO staff, calculations to determine an appropriate loan amount are made electronically for all loans, some steps in loan processing are conducted manually. In 2000, SBA established the Home Expedited Loan Officer Report (HELOR) system so that loan decisions for home and personal property loans under $25,000 can be made automatically, based primarily on credit scores, rather than manually by the loan officer. DAO staff also cited DAO-level strategies that have expedited processing locally. For example, in the past, DAO staff who inspected a victim’s property to estimate the amount of property loss, referred to as loss verifiers, manually completed report forms and submitted the reports to the DAOs using a courier service. In 2002, one DAO pilot tested having its loss verifiers complete their inspection reports in the field using hand-held computers and submit the reports to the DAO using electronic mail. One DAO official estimated that this automated approach reduced loan processing time and eliminated courier service expenses. In 2002, SBA began reporting data on the third output measure—ordering initial disbursements within 5 days of receiving closing documents (table 2, measure 9). Yet, DAO staff suggest that the target for this measure also does not reflect past performance and was set at a low threshold. According to DAO staff, before 2002, SBA had an internal goal of ordering disbursements within 3 days of receiving closing documents. When SBA included this measure in the performance plan, the disbursement target was increased to 5 days. SBA headquarters officials commented that the 5-day standard was set to accommodate counting weekends and holidays because the data system SBA uses to track disaster loan processing could not distinguish between workdays and non-workdays. Nonetheless, because DAO officials are accustomed to the stricter 3-day standard, they indicated that the 5-day standard can be met with ease.
For example, SBA made the initial disbursements on all approved September 11 loans in an average of about 2 days; in fiscal year 2002, SBA also made initial disbursements within an average of 2 days of receipt of closing documents. Moreover, according to one DAO official, the disbursement target was increased as DAOs were expediting their disbursement process. For example, as part of its response to September 11 borrowers, the Niagara Falls DAO reduced the amount of documentation required for September 11 victims from the World Trade Center and Pentagon disaster areas to receive disbursements of between $25,000 and $50,000, so that the DAO could more quickly disburse the remaining amounts. Because the office found this strategy successful, the DAO official said he will recommend to his supervisors that this procedure be used for all future disasters. However, because the 5-day disbursement measure focuses only on the initial disbursement, it cannot capture other improvements that have been made to the multistep disbursement process. In commenting on a draft of this report, SBA indicated that the output measures were established based on what was determined to be a reasonable level of service in an average year, taking into account the resources required. Because of the unpredictability of disasters, officials did not think it would be feasible to adjust production levels simply based on 1 year’s performance. In addition, they noted that large disasters could still generate more volume than SBA could handle quickly, especially if the pre-disaster staffing levels in all area offices were low and a large-scale recruitment and training effort were necessary. Even with some of the program improvements, they believed it would be very difficult and costly to maintain such levels during periods of multiple major disasters. Although SBA acknowledged that there may be a basis for modifying the output measures mentioned (effective field presence, processing loan applications in 21 days, and ordering initial disbursements within 5 days of loan closing), the officials believed that the modifications in the measures should be based on an average level of projected activity, taking into consideration some of the permanent improvements they have made to the program. SBA officials indicated that the remaining three measures—number of homes restored to predisaster condition, number of businesses restored to predisaster condition, and customer satisfaction (table 2, measures 7, 8, and 6)—are used to assess the effects, or outcomes, of lending to disaster victims. These outcome measures have limitations similar to those of the output measures. First, while the restoration of homes and businesses is a stated outcome in SBA’s strategic and performance plans, SBA does not actually measure the number of homes and businesses restored. As indicated earlier, headquarters officials said that SBA reports on the number of home loans approved as a proxy measure for the number of homes restored to predisaster condition. The agency also uses a proxy measure—the number of business loans approved—for the number of businesses restored to predisaster condition. The proxy measures that are used to report disaster loan outcomes have several limitations. First, these measures assess program outputs—loans approved—not the stated outcomes of restored homes and businesses. Second, this proxy measure likely overestimates the number of homes and businesses restored.
As SBA staff explained, even when loans are approved, borrowers might cancel the loan or reduce its amount to avoid using their homes as collateral. For example, about 10 percent of the loans approved for September 11 victims were subsequently cancelled by borrowers. Third, these indicators use annual numbers, which are not useful standards because they are highly dependent on factors outside of SBA's control, such as the number of disasters that occur during a given fiscal year. A more useful indicator would be the percentage of homes and businesses receiving loans that were restored to pre-disaster condition each year, which would enable a yearly comparison of performance. However, various SBA officials indicated that it is not easy to obtain evidence on the percentage of homes or businesses that have been restored after a disaster. One DAO official pointed out that though he supported conducting on-site progress inspections to measure whether homes or businesses have been restored, his office can currently conduct on-site inspections for only a tiny fraction of the properties because of its limited travel budget. Instead, he has had to rely increasingly on the integrity of the applicants and on SBA reviews of the borrowers' receipts. Other staff indicated that some alternative strategies, such as reviewing pre- and post-disaster property tax assessments as a proxy measure for the restoration of homes, would also be problematic because of differing economic conditions in different communities.

To measure another outcome—customer satisfaction (table 2, measure 6)—SBA uses the results of its survey of successful loan applicants. SBA also uses this survey to evaluate the impact of the program. Yet SBA's method for conducting the survey has significant limitations. First, the survey measures the satisfaction of only a portion of the customers that the Disaster Loan Program serves. Every DAO director we interviewed indicated that all disaster victims are SBA customers and that a broader population should be surveyed. In 2001, we and SBA's Inspector General made the same suggestion to SBA. As we indicated then, the current survey method is likely to produce positively skewed responses. SBA headquarters officials indicated that they are resistant to surveying those who were denied loans because they presume the applicants' responses would be negative. Yet, as described earlier in this report, it was the complaints from September 11 applicants that informed SBA of problems in the existing loan program and led the agency to revise the disaster program to better serve disaster victims. SBA does not currently plan to expand its fiscal year 2002 survey to a sample of all loan applicants. Second, the target for this indicator, 80 percent, is below what the program has reportedly achieved in the past: 97 percent in 1998 and 1999, and 81 percent in 2000.

Our review of the 2003 performance plan found that five of the six measures (table 2, measures 4, 5, 7, 8, and 9) currently used to assess the performance of SBA's disaster lending focus on narrow program outputs rather than intermediate or end outcomes. As mentioned earlier, OMB guidance states that the plan should include outcomes when their achievement is scheduled during the fiscal year. In addition, recommendations from the Inspector General and guidance from us and within SBA have encouraged the use of outcome measures for this program.
Only the customer satisfaction measure has the potential to assess one of the stated end outcomes of the Disaster Loan Program. The other intended outcomes of disaster lending, such as jobs retained or housing restored, which might be measured annually or biennially, are not measured. SBA may be able to measure, for those loans that are fully disbursed by the first or second quarter of the fiscal year, the percentage of homes or businesses that have been fully restored by year's end.

In addition, SBA does not measure potential intermediate or end outcomes for the Disaster Loan Program. For example, as described earlier, some September 11 loan applicants criticized SBA's underwriting criteria as too restrictive. In the past, SBA used two intermediate outcome measures listed in table 2 (loan currency and delinquency rates) to reflect the quality of disaster loans. Yet these measures were not included in the 2001 performance plan. Another potential intermediate outcome of the underwriting process, the retention of appropriate insurance, is not measured. As indicated in appendix II, SBA requires loan applicants to obtain insurance related to the nature of the disaster in order to receive a disaster loan. As one DAO official suggested, having insurance, such as flood insurance, potentially reduces the number of disaster loans required in areas that experience recurring disasters. As we have reported previously, a greater reliance on insurance can reduce disaster assistance costs and could reduce the effect of a disaster on its victims.

SBA headquarters staff said that, while they recognize that the proxy measures for the restoration of homes and businesses are inadequate and are aware that the customer survey assesses the satisfaction of only a portion of their customers, they have a limited ability to develop and use better outcome measures. The staff indicated that the very nature of disaster lending is unpredictable, so it is difficult to set performance targets for intermediate or end outcomes. An SBA headquarters official said that they are reluctant to measure and report intermediate or end outcomes that they cannot control. For example, one DAO official suggested that SBA cannot ensure that businesses that receive a disaster loan will survive. Other factors, he suggested, such as differences in the willingness of people from different regions to acquire debt, will affect borrowers' decisions. Other DAO officials indicated that some end outcome measurement methodologies, such as conducting on-site inspections of a sampling of homes and businesses to determine whether they have been restored, would be expensive.

We identified at least five features of the description of the Disaster Loan Program in the 2002 and 2003 performance plans (see table 3) that make it difficult to assess whether SBA is making progress in attaining its strategic goal. First, as discussed earlier, strategic goals and performance goals in annual plans may be identical, which is the approach SBA uses for the strategic and performance goals for the Disaster Loan Program. Between the 2002 and the 2003 performance plans, the performance goal changed from an outcome-oriented goal (helping families recover from disasters) to an output-oriented goal (streamlining disaster lending) without the required explanation. GPRA requires agencies to explain why they change performance goals, and OMB generally recommends that agencies use goals that are outcome-oriented.
Second, the 2002 and 2003 performance plans do not define the linkages between each program output and each intermediate or end outcome. The plans do not explain how the outputs (disaster loans) are related to the performance indicators (field presence, customer satisfaction, and application processing time frames). Third, the plans do not explain how the performance measures or indicators are related to either program outcomes or outputs. Fourth, the plans do not explain whether the targets for the performance measures are set in anticipation of performance improving, regressing, or remaining the same. For example, some targets are at or below actual performance in previous years. Fifth, performance indicators are added to the plans or dropped, as shown in table 2, without explanation. These omissions make it difficult to understand how and whether SBA expects to improve or sustain its loan processing performance.

The performance plans also contain incomplete or inaccurate information on some performance indicators. For example, despite OMB and SBA guidance, validation and verification information on the field presence and loan processing measures is omitted, making it difficult to assess the quality of performance data. In addition, the 2003 performance plan indicates that data on the number of homes restored to pre-disaster condition are based on on-site inspections of homes. However, SBA officials indicated that they use a proxy measure, the number of original home loans approved, as the actual source of data for homes restored to pre-disaster condition.

The September 11 terrorist attacks presented SBA with challenges it had never before faced. First, it had to provide loans to individuals and businesses near the disaster sites as well as to small businesses located throughout the country. Rather than providing most of its loans for the repair and replacement of physical structures, SBA found itself processing large numbers of economic injury loans to businesses under amended guidelines. Second, given the extent of the economic effects in the wake of the attacks, SBA had to work with the Congress to modify the Disaster Loan Program so that larger loans could be provided to a broader population of disaster victims. Input from small business owners and advocates at congressional hearings was key to the changes that were made—changes that, whether temporary or permanent, will be useful for SBA and other federal agencies to consider in responding to future disasters.

In this and previous work, we found that SBA's Disaster Loan Program performance measures do not fully or adequately reflect the program's actual performance. Viewing the performance measures in light of SBA's response to the September 11 attacks underscores this finding. First, two current output measures describe only discrete steps of multistep processes, and some output measures use performance targets that have already been achieved or exceeded. Second, most of SBA's measures assess program outputs instead of measurable outcomes. We recognize the challenge of identifying end outcome measures, such as restoring a business to pre-disaster condition, given the many factors involved in a business's success. However, we note that intermediate outcome measures can provide meaningful information about the effect of SBA's program. But SBA's plan does not use intermediate outcome measures to link its output measures to the intended outcomes of the program.
The one outcome measure SBA uses—a customer survey—is directed only at disaster victims who received loans. SBA misses the opportunity to get feedback from applicants who did not get loans, yet SBA's response to September 11 was modified partly as a result of the concerns small businesses expressed. Moreover, the limitations in the program's performance measures and plans mean that congressional decisionmakers do not have an accurate description of SBA's progress to help them make informed decisions in directing and funding the Disaster Loan Program.

In order to better demonstrate program performance, we recommend that the Administrator of SBA direct the Office of Disaster Assistance to
- revise the performance measures for disaster lending to (1) include more outcome measures; (2) assess more significant outputs, such as service to applicants or loan underwriting; (3) report achievements that can be compared over several years, such as percentages; and (4) include performance targets that encourage process improvement rather than maintaining past levels of performance;
- revise and expand its current research to improve its measures and evaluate program impact. To improve its current measures, SBA should conduct research, such as surveying DAO staff and reviewing the disaster, lending, and performance literature, to identify and test new outcome measures. To evaluate its program impact, SBA should revise its survey approach to cover all disaster loan applicants and employ other methods, such as periodic analyses of regional statistics, to assess the economic impact of the program on local communities; and
- revise the disaster section of the performance plan to (1) establish direct linkages between each output and outcome and the associated performance measures; (2) accurately describe proxy measures as either outcome or output measures; (3) accurately describe the validation and verification of performance measures; and (4) explain additions, deletions, or changes in the current goals or measures from the previous year.

We requested SBA's comments on a draft of this report, and the Associate Administrator for Disaster Assistance provided written comments that are presented in appendix V. SBA generally agreed with our recommendations and said that it intends to review the existing performance measures and research new ways to evaluate program impact. SBA also provided some technical corrections and comments, which we incorporated in this report as appropriate.

We are sending copies of this report to the Ranking Minority Member of the House Committee on Small Business; the Chairman and Ranking Minority Member of the Senate Committee on Small Business and Entrepreneurship; other appropriate congressional committees; and the Administrator of the Small Business Administration. In addition, this report will be available at no charge on GAO's Web site at http://gao.gov. If you have any questions about this report, please contact M. Kay Harris, Assistant Director, or me at (202) 512-8678. Key contributors to this report were Kristy Brown, Sharon Caudle, Patricia Farrell Donahue, and John Mingus.

To review the Small Business Administration's (SBA) response to the September 11 terrorist attacks, we interviewed officials from the Office of Disaster Assistance (ODA) at SBA headquarters and officials from each of the four SBA Disaster Area Offices. In addition, we interviewed officials from SBA's Office of the Inspector General.
We also reviewed documents related to disaster lending policy and procedures, the agency's response to the September 11 attacks, and other program documentation. In addition, we reviewed congressional testimony, as well as regulatory actions taken by SBA and legislative action by the Congress in response to the terrorist attacks.

To analyze SBA's lending to September 11 victims, we obtained data from SBA's Automated Loan Control System (ALCS), the system SBA uses to track disaster loan applications, approvals, and disbursements. We used these data to calculate descriptive statistics on the numbers of disaster loans, disbursement amounts, and other characteristics of the disaster lending to September 11 victims. We limited our analysis to loan funds approved through September 30, 2002. For our analysis of type of industry, we used the North American Industry Classification System (NAICS) code from the database and grouped the results by the first two digits of the code, which designate the general industry type. We determined the five industry types that received the largest percentage of SBA September 11 loans nationwide, grouping the remaining industries in an "other" category. We conducted similar analyses by industry for each type of September 11-related declaration. We ascertained how information for the ALCS database was collected and maintained to determine its reliability, and we found the information to be reliable for our purposes. We repeatedly consulted with SBA headquarters officials, including those responsible for managing ALCS, during our analyses to ensure that our understanding of various data elements was correct. We also obtained summary statistical reports from SBA describing disaster lending during fiscal years 2001 and 2002.

To review and analyze SBA's performance plans and measures for its Disaster Loan Program, we reviewed SBA's strategic plan for the 2001-2006 period and its performance plans for fiscal years 2002 and 2003. A knowledgeable staff member from our Strategic Issues Team also reviewed the plans for compliance with the Office of Management and Budget's (OMB) guidance on the Government Performance and Results Act (GPRA) of 1993. We also reviewed the SBA Inspector General's recent review of the disaster section of recent performance plans, SBA's primer on performance measurement, and our recent reviews of SBA. Our overall assessment of SBA's performance plans was generally based on our knowledge of the Disaster Loan Program and OMB's guidance on developing strategic and performance plans. We conducted our work between June 2002 and January 2003 in Washington, D.C.; Niagara Falls; Atlanta; and Fort Worth in accordance with generally accepted government auditing standards.

The disaster loan process generally proceeds as follows:
- State and federal officials conduct a preliminary damage assessment to estimate the extent of the disaster and its impact on individuals and public facilities. SBA participates in the damage assessment when the damages include homes and businesses.
- The President, USDA, or SBA makes a disaster declaration.
- SBA establishes field presence: SBA staff arrive at the disaster site and take actions to initiate delivery of disaster assistance.
- SBA loan officers meet with disaster victims, explain the loan process, and issue applications at the Federal Emergency Management Agency (FEMA) or SBA disaster offices.
- SBA screens the submitted applications for completeness and to make sure all necessary documentation has been provided:
- Home loan application package: includes the application, a listing of property damage, and authorization for SBA to access the applicant's tax information.
- Business loan application package: includes the application, a schedule of liabilities, and personal financial statements and tax information authorization for each proprietor, partner, affiliate, or other type of owner.
- Physical loan applications are forwarded to loss verifiers, who conduct on-site appraisals of the damaged property to estimate the cost of restoring it to pre-disaster condition. Economic injury applications may be sent directly to a Disaster Area Office (DAO) for processing.
- Once the application arrives at the DAO, SBA staff review it, examining such issues as duplication of benefits, credit history, criminal record, tax returns, history on other SBA loans, and history on other federal debt. The applicant's losses or economic injury are calculated. The loan officer determines whether the applicant has satisfactory credit and the ability to repay the loan; the legal department determines whether there are any legal or regulatory restrictions on receiving a disaster loan.
- If the applicant meets SBA's underwriting criteria, the loan is approved, using the amount of verified losses as the basis for the loan amount. Closing documents are prepared and mailed to the applicant.
- Applicants are required to obtain insurance. Hazard insurance is required before any disbursement over $10,000 for physical loans and over $5,000 for economic injury loans. Flood insurance is required for properties located in Special Flood Hazard Areas before any disbursement can be made.
- Disbursement limits: the maximum initial disbursement without collateral is $10,000 for physical loans and $5,000 for economic injury loans; the maximum initial disbursement with collateral, preferably the applicant's home, is $25,000; total disbursements with proof of ownership of the damaged property are limited to $25,000 for physical and economic injury loans; and total disbursements with proof of title insurance are limited to $250,000 for physical and economic injury loans.

The following provisions compare the standard Disaster Loan Program with the modifications SBA made in response to September 11.

Standard program: for small businesses in a declared disaster area, the maximum disaster loan amount is $1.5 million; small nonprofit institutions and select financial and insurance firms were ineligible for economic injury assistance; interest begins to accrue when the disbursement is made; and payments of principal and interest are deferred for 4 months.

September 11 modifications: the maximum disaster loan amount is $10 million; small nonprofit institutions and select financial and insurance firms are eligible for economic injury assistance; no interest accrues for 2 years following issuance; and payments of principal and interest are deferred for 2 years following issuance.

Standard program: for 7(a) lenders, the 7(a) loan fee is .50 percent of the outstanding balance. September 11 modifications: for 7(a) lenders to small businesses adversely affected by the attacks, the 7(a) loan fee is .25 percent of the outstanding balance.

Standard program: economic injury loans are available only to small businesses within the declared disaster area that were directly affected by the disaster. September 11 modifications: economic injury loans are available to small businesses nationwide that were adversely affected by the disaster or by related action taken by the federal government. Inflation-adjusted size standards, generally effective February 2002, were effective September 11, 2001, for businesses applying for economic injury loans as a result of the terrorist attacks; the threshold for "small" travel agencies is $1 million in annual revenues.
Standard program: economic injury loan amounts for businesses that sustained physical damage are based on months of gross margin, with the maximum loan amount the lesser of (1) 3 times the SBA-verified physical loss or (2) $100,000. September 11 modifications: for the World Trade Center and Pentagon areas, businesses do not have to sustain physical damage; if the business was in operation, the economic injury loan amount was based on up to 3 months of gross margin, with a maximum loan amount of $200,000, and if the business was not in operation, the economic injury loan amount was based on up to 6 months of gross margin, with a maximum loan amount of $350,000. For EIDL applicants nationwide without any property damage, the loan amount was limited to the lesser of 2 months of gross margin or $50,000.

Expedited disbursement process. Standard program: disbursements greater than $25,000 require a title search. September 11 modifications: for the World Trade Center and Pentagon areas, a title search is not required for disbursements up to $50,000.

SBA's response to the September 11 disaster commenced immediately after the terrorist attacks occurred, when SBA disaster officials established communication with FEMA and state emergency management officials. By the afternoon of September 11, disaster officials from SBA's Niagara Falls DAO were in Lower Manhattan coordinating the agency's recovery efforts with the overall federal response. Once the President declared the World Trade Center attack a major disaster, SBA designated the immediate disaster area of the World Trade Center ("WTC Immediate") as the five boroughs of New York City, and the contiguous area of the World Trade Center ("WTC Contiguous") as including two other counties in New York and four counties in New Jersey. SBA officials began meeting with disaster victims on September 13. Following the President's declaration of the Pentagon attack as a major disaster on September 21, SBA established the immediate area of the Pentagon, which comprised Arlington County, Virginia, and the contiguous area of the Pentagon ("Pentagon Contiguous"), which included additional counties in Maryland and Virginia and parts of the District of Columbia.

FEMA extended the declared disaster areas on September 27 as the widespread impact of the terrorist attacks became more apparent. The immediate area of WTC was extended to include 10 additional counties in New York, including the 2 counties initially included in the WTC Contiguous area. The extension also added counties in New York and New Jersey, as well as counties in Connecticut, Massachusetts, and Pennsylvania, to the existing WTC Contiguous area. See figure 4 for a map of the disaster areas. As the United States began to deploy military personnel in response to the terrorist attacks, small businesses affected by the loss of employees who serve as reserve military personnel became eligible to apply for a disaster loan under the Military Reservist EIDL Program.

We obtained and analyzed SBA data on the loans it approved in response to September 11, 2001, through September 30, 2002. The distribution of September 11 lending varied significantly by amount, geographic location of recipients, and type of loan. Nearly half of the September 11 loan funds disbursed by the end of fiscal year 2002 were distributed to disaster victims from New York. The balance was disbursed across the country through the expanded EIDL Program. Unlike other recent disasters, almost all of the disbursed loan funds went to businesses rather than homeowners.
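The expedited EIDL sizing rule noted above (the lesser of 2 months of gross margin or $50,000) can be written as a one-line function, and it also helps explain the clustering of disbursements at $50,000 discussed below. The following is a minimal illustrative sketch of the stated rule, not SBA's actual underwriting logic:

```python
EIDL_CAP = 50_000  # expedited nationwide EIDL ceiling, per the rule above

def expedited_eidl_amount(monthly_gross_margin: float) -> float:
    """Loan amount for a stand-alone EIDL applicant with no property
    damage: the lesser of 2 months of gross margin or $50,000."""
    return min(2 * monthly_gross_margin, EIDL_CAP)

# Any applicant with a gross margin of at least $25,000 a month hits the
# $50,000 cap, which is consistent with $50,000 being the most commonly
# disbursed amount.
print(expedited_eidl_amount(18_000))  # 36000.0
print(expedited_eidl_amount(40_000))  # 50000.0
```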
In just over 1 year, SBA approved more than 9,700 home and business loans totaling $966 million for victims of the September 11 attacks, disbursing about $895 million, or 93 percent, by the end of fiscal year 2002. The peak in monthly disbursement amounts for all September 11 loans came in January 2002, at $120 million. The agency expects to ultimately disburse $924 million—or 96 percent of the amount approved—after loan increases, decreases, and cancellations. As of the end of fiscal year 2002, about 10 percent of approved September 11 loans had been cancelled by borrowers, compared with 16 percent of approved disaster loans in fiscal year 2001. The greatest percentage of loan cancellations occurred in the immediate area of WTC, where 13 percent of the loans were cancelled. The contiguous area of the Pentagon experienced the greatest percentage of loan increases, with 11 percent of September 11 loans increased from their original approved amounts. Given the difference between the approved amounts and the disbursed amounts—due to loan increases, decreases, and cancellations—we have chosen to describe the distribution of September 11 loans in terms of the actual disbursed loan amounts.

September 11 loan disbursement amounts ranged from $300 to $1.5 million, with a median amount of $50,000, and 50 percent of disbursements were between $18,700 and $119,700. Eleven percent of September 11 loan disbursements were for $50,000, the most frequently disbursed amount. In commenting on our draft, SBA indicated that the agency applied the expedited EIDL process for "stand-alone" EIDLs, that is, applicants without any property damage. The loan amount was limited to the lesser of 2 months of gross margin or $50,000, which SBA described as the reason why the most commonly disbursed amount was $50,000.

The distribution of September 11 loans also varied by state, type of loan, declaration area, and business industry. Typically, about 80 percent of approved SBA disaster loans are home loans to repair physical damage to homes and personal property. However, about 97 percent of September 11 loans were disbursed to businesses. Even in New York City, only 6 percent of loans were disbursed to households. SBA officials attribute this departure from the historic lending pattern to the fact that the physical damage caused by the terrorist attacks was concentrated in the World Trade Center business district and at the Pentagon. Seventy percent of the businesses receiving September 11 loans had 10 or fewer employees, and 50 percent had 5 or fewer employees. Businesses with more than 100 employees received less than 2 percent of disbursed loan funds. Overall, only about 9 percent of September 11 loan applicants in the declared disaster areas sustained physical losses, compared with about 80 percent of disaster loan applicants in fiscal year 2001. Consequently, 92 percent of September 11 loans went to small businesses that suffered economic injury but no physical damage, and about 5 percent of the loans were disbursed to businesses with physical damage from the attacks.

Although SBA provided loans to affected small businesses nationwide, about 45 percent of all disbursed September 11 loan funds were distributed to applicants in New York State. Of that 45 percent, approximately 36 percent was disbursed to disaster victims in New York City.
As shown in figure 1, Florida received the second greatest percentage of disbursed September 11 loans (11 percent), followed by California (6 percent), New Jersey (4 percent), Texas (3 percent), and Virginia (3 percent). More than half of all September 11 loan funds were disbursed to small businesses outside of the immediate and surrounding areas of the World Trade Center and the Pentagon.

SBA data indicate that, in general, businesses located closest to the WTC disaster site received smaller loans than businesses near the Pentagon and nationwide. For example, the median disbursement in the immediate area of WTC, specifically New York City, was about $40,000, while the median disbursements under the expanded EIDL Program and in the area of the Pentagon were $50,000 and $60,000, respectively. SBA disaster officials reasoned that firms near WTC may have received smaller SBA loan disbursements because other resources were available to them, whereas SBA was the sole source of assistance for affected small businesses outside of New York City. In addition, SBA officials suggested that because many September 11 loan recipients in New York City were service-oriented firms, they had fewer operating expenses than the more capital-intensive loan recipients nationwide.

SBA loan disbursement data indicate that a wide variety of businesses received September 11 loans. As shown in figure 2, no one sector of the economy received a substantial portion of these loans. We summarized SBA's loan data according to the type of business that received the loan. The manufacturing sector received the greatest percentage of September 11 loans, though this represents only about one-sixth of these loans. We combined business types with less than 7 percent of the loans into an "other" category, which includes such sectors as retail trade and waste management. As shown in figure 6, the distribution of loan disbursements by industry for the expanded EIDL Program was similar to the distribution for all September 11 loans, with the manufacturing sector receiving the second largest portion of these disbursements. In contrast to the distribution of loan disbursements at the national level, the greatest percentage of disaster loan funds in New York City and the immediate and contiguous areas of the Pentagon was disbursed to the professional, scientific, and technical services industry.
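The industry grouping described in appendix I (summarizing loans by the first two digits of the NAICS code, keeping the five largest industry types, and folding the rest into an "other" category) can be sketched as follows. The records and field names here are hypothetical and do not reflect SBA's actual ALCS schema:

```python
from collections import defaultdict

# Hypothetical loan records; the field names are illustrative only.
loans = [
    {"naics": "311812", "disbursed": 50_000},
    {"naics": "541110", "disbursed": 120_000},
    {"naics": "445110", "disbursed": 18_700},
    {"naics": "561720", "disbursed": 75_000},
    {"naics": "541512", "disbursed": 40_000},
    {"naics": "722110", "disbursed": 60_000},
    {"naics": "236118", "disbursed": 25_000},
]

# Group disbursed amounts by the first two digits of the NAICS code,
# which designate the general industry sector.
by_sector = defaultdict(float)
for loan in loans:
    by_sector[loan["naics"][:2]] += loan["disbursed"]

# Rank sectors by disbursements; keep the five largest and fold the
# remainder into an "other" category, as described in appendix I.
total = sum(by_sector.values())
ranked = sorted(by_sector.items(), key=lambda kv: kv[1], reverse=True)
summary = dict(ranked[:5])
summary["other"] = sum(amount for _, amount in ranked[5:])

for sector, amount in summary.items():
    print(f"{sector}: {100 * amount / total:.1f}% of disbursements")
```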
The September 11 terrorist attacks and subsequent federal action had a substantial impact on businesses in both the declared disaster areas and around the nation. In the aftermath of the attacks, the Congress, among other actions, appropriated emergency supplemental funds to the Small Business Administration (SBA) to aid September 11 victims. Given the uniqueness of this disaster and changes in the program, GAO analyzed SBA's lending to September 11 victims, as well as the loan program's performance goals and measures.

As part of its response to the September 11 terrorist attacks, SBA modified several aspects of its Disaster Loan Program and its processes. For example, SBA increased the maximum loan amounts available and decreased the amount of documentation required for certain loans. By the end of fiscal year 2002, approximately $1 billion in loans had been approved for victims of the attacks. SBA processed business loans to September 11 victims in an average of 13 days, compared with 16 days for business loans to other disaster victims in fiscal year 2001.

Like other federal agencies, SBA has developed a multiyear strategic goal for the Disaster Loan Program—helping families and businesses recover from disasters—and has developed annual goals and measures to assess its yearly progress toward attaining that strategic goal. GAO reviewed the measures and found that they have numerous limitations. For instance, these measures do not capture the notable progress the program has made in improving its loan processing—progress that ultimately affects disaster loan applicants and borrowers. The inadequacies of SBA's measures are especially evident when considered in light of the agency's performance in responding to the September 11 terrorist attacks. GAO attributes some of these limitations to the nature of the measures SBA uses, and others to the way the program's performance is described in SBA's plans. Without better performance measures and plans, the Congress does not have an accurate description of SBA's annual progress toward helping Americans recover from disasters.
The ISS—the largest orbiting man-made object—is being constructed to support three activities: scientific research, technology development, and development of industrial applications. Its facilities allow for ongoing research in microgravity, studies of other aspects of the space environment, tests of new technology, and long-term space operations, and they enable astronauts to conduct many different types of research, including experiments in biotechnology, combustion science, fluid physics, and materials science, on behalf of ground-based researchers. The ISS also has the capability to support research on materials and other technologies to see how they react in the space environment. In general, conducting research in a microgravity environment allows scientists to eliminate the influence of Earth's gravity and can result in discoveries of properties and reactions that would be masked on Earth. Some researchers believe that conducting scientific experiments in microgravity can yield potentially groundbreaking results in areas as diverse as stem-cell culturing, vaccine research, plant and seed research, and targeting drug-resistant microbes. Testing materials and technologies in space allows researchers to determine the impact of the harsh space environment on these items for potential future use in space vehicles or satellites.

There are five main partners involved in supporting the development and manning of the ISS: the United States, Russia, Japan, ESA (which includes a number of participating countries), and Canada. The ISS consists of two separately administered (though conjoined) parts: (1) the U.S. operating segment (USOS), with contributions from its international partners (ESA, JAXA, and the Canadian Space Agency (CSA)), and (2) the Russian segment. Russian research is separate from USOS operations: Russia has no utilization rights to U.S., European, or Japanese modules, and NASA has no utilization rights to Russian modules, though NASA told us there are mechanisms for scientific collaboration and hardware sharing among all the agencies. According to NASA, it provides a portion of ISS resources (including crew time, facilities, and launch capabilities) to the partners based on international agreements with each partner in exchange for their contributions to the ISS. Each partner facility has research accommodations that can be used and shared among the partners as stipulated in the agreements.

Scientific research facilities currently available inside the ISS are generally mounted in modular, refrigerator-sized mounts called racks or ExPRESS racks, which provide the utilities necessary for conducting research, including electricity. Each rack contains lockers, drawers, or other inserts that can be used to install research payloads and are changed as necessary. The racks may also contain semipermanent equipment, such as freezers, incubators, or glove boxes. Research payloads are sent to the ISS in flight-certified pieces of hardware that may be small in size. This hardware is generally installed in one of the racks, and the experiment is operated until the research is completed. Once the research is completed, the payload may be returned to Earth for analysis, or the research data may be transmitted back to Earth. Research can also be conducted on the exterior of the station in unpressurized facilities; for example, the Materials International Space Station Experiment is conducted in such facilities.
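As a purely illustrative way to picture this accommodation model, the sketch below treats a rack as a container of payload positions. The class names, capacities, and figures are hypothetical and are not NASA data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Payload:
    name: str
    crew_hours_per_week: float  # crew attention the experiment needs

@dataclass
class Rack:
    """An ExPRESS-style rack: shared utilities plus a fixed number of
    locker/drawer positions that research payloads can occupy."""
    name: str
    positions: int
    payloads: list[Payload] = field(default_factory=list)

    def install(self, payload: Payload) -> bool:
        if len(self.payloads) < self.positions:
            self.payloads.append(payload)
            return True
        return False  # rack is full; the payload must wait for a slot

    def occupancy(self) -> float:
        return len(self.payloads) / self.positions

# Hypothetical example: one rack, one installed experiment.
rack = Rack("ExPRESS-1", positions=8)
rack.install(Payload("protein crystallization", crew_hours_per_week=1.5))
print(f"{rack.occupancy():.0%} occupied")
```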
Facilities on board the ISS and NASA's plans for its own utilization of the ISS have changed over time. When NASA adopted The Vision for Space Exploration (Vision) in 2004, it set forth a plan to explore space and extend a human presence across our solar system, with the dual goals of returning humans to the moon by 2020 and later sending humans to Mars and other destinations. The Vision also dictated that NASA focus its research efforts on board the ISS on its Human Research Program supporting future human space exploration, including studying the effects of the space environment on humans; on technology development and testing for exploration; and on developing operational protocols for successful long-duration space operations. Though the ISS had originally been intended to be a broad-based research facility, the Vision required NASA to focus its ISS research on supporting space exploration goals, with an emphasis on understanding the impacts of the space environment on astronauts and developing countermeasures to these effects.

As a result, NASA reduced the scope of its ISS research; the agency conducted a zero-based review in the fall of 2005 and determined that some fundamental life and physical sciences tasks were not "highly relevant" to achieving the goals of the Vision. The agency canceled some existing grants in this area and stopped soliciting new research, which caused the affected ISS scientific research communities to shrink or turn to other research areas. NASA also reassigned its personnel involved with the fundamental sciences, including space biology (such as animal, plant, and microbial research), and reduced its portfolio of research on fluid physics, combustion, materials science, biotechnology, and fundamental physics. Table 1 depicts some changes in ESMD flight research conducted in 2002 and 2008 that illustrate this redirection of focus. Hardware needed for research projects was also canceled or delayed by NASA or commercial developers, either because of the change in research priorities or because of other constraints, such as the pause in shuttle flights after the loss of the Space Shuttle Columbia. This included animal research facilities, the Life Sciences Glovebox, the Centrifuge Accommodation Module, and the Alpha Magnetic Spectrometer (AMS). In 2003, the National Research Council and the National Academy of Public Administration reported that NASA had drastically reduced the overall ability of the ISS to support science, and that this reduction limited or foreclosed the scientific community's ability to maximize the research potential of the ISS. NASA's Plan to Support the Operations and Utilization of the International Space Station Beyond 2015 states that it would cost several billion dollars to reinstate the full scope of planned ISS facilities.

Though the Vision changed and reduced the scope of NASA's goals for its own research on board the ISS, Congress designated the ISS as a national laboratory in 2005 in an effort to increase utilization of the ISS for research. Congress also asked NASA to seek to increase utilization of the ISS by other federal entities and the private sector through partnerships, cost-sharing agreements, and other arrangements that would supplement NASA funding of ISS research. According to NASA officials, this designation does not guarantee an appropriation specifically for ISS National Laboratory research.
The ISS National Laboratory operates in conjunction with the ISS research programs of NASA and the international partners and utilizes a portion of the USOS resource allocation, including crew time, facilities, and cargo launched to the station. As such, NASA conducts the research it sees as relevant to its mission, and the ISS can also accommodate users from outside NASA who are not necessarily conducting research relevant to NASA's Human Research Program or other NASA-sponsored research. NASA established the ISS National Laboratory Office in the spring of 2009; this office is part of the existing Space Station Payloads Office and as of April 2009 had five staff members.

In May 2009, President Obama established the Review of U.S. Human Space Flight Plans Committee. Its stated goal is to provide an independent assessment of the nation's planned human spaceflight activities and to ensure that the country is on "a vigorous and sustainable path to achieving its boldest aspirations in space." The committee assessed NASA's plans, including plans for the ISS, and developed a number of possible options for the future of U.S. space activities. In its summary report, released in September 2009, the committee presented five options for NASA's human spaceflight program; three of these options recommend extending the lifespan of the ISS until 2020. The committee wrote that it would be unwise to de-orbit the ISS after 25 years of design, development, and assembly and only 5 years of operations, and that the return on investment to both the United States and the international partners would be significantly enhanced by an extension of the ISS's life. It is unknown at present which option will ultimately be selected, but the future utilization of the ISS depends on this decision.

The ISS has been continuously manned since 2000, and in March 2009 the crew expanded from three to six. NASA's primary objective for the ISS through 2010 is construction, so research has not been the main priority. Specifically, though the ISS facilities have been used for some research to date, new research capabilities are still being added and are awaiting launch and installation, and resources such as crew time, transportation, and facilities planned for the utilization phase have not been fully available. As such, research is being conducted at the margins of assembly and operations activities as time permits, while the crew on board performs assembly and operations tasks. NASA has identified 197 U.S.-integrated investigations that have been conducted on orbit as of April 2009, though 55 of these investigations were conducted on Space Shuttle missions to the ISS instead of on the ISS itself (called sortie research). According to NASA, as of February 2009, U.S. ISS and sortie research had resulted in over 160 publications, including articles on topics such as protein crystallization, plant growth, and human research. According to NASA, approximately 25 technology demonstration experiments have also been flown on the ISS during the assembly phase. Once construction is completed, NASA projects that its share of the ExPRESS racks will be less than 50 percent occupied by planned NASA research related to the Human Research Program and other NASA-initiated research, with the remainder available for other use.
Any facilities that NASA does not plan to utilize are available to the ISS National Laboratory, and the system is flexible so that future rack space can be made available either to NASA-funded or ISS National Laboratory users up to the total capacity. These projections are based on NASA's current ISS research budget and on determinations of available resources based on the percentage of ISS resources allocated to NASA and the international partners according to established international agreements. Table 2 depicts the NASA-projected occupancy of rack space for September 2010.

Inside the ISS, there are many available interior, or pressurized, sites for research racks and other facilities, though not all available sites will ultimately accommodate a facility. NASA projects that 79 percent (19 of 24) of the available NASA internal payload sites that can accommodate research facilities ultimately will, and that less than 50 percent of these facilities will be occupied by planned NASA research after the ISS is completed, making them available for other users. The ISS also has external, or unpressurized, sites exposed to the vacuum of space on its exterior structure that can accommodate research facilities. NASA projects that these sites will be 33 percent (7 of 21) filled with research facilities when assembly is completed and 62 percent (13 of 21) filled by the end of 2015. NASA's international partners are fully utilizing their ISS allocations; ESA needs more resources than it has been allocated under the international agreements. NASA officials told us that their intention was to build the ISS with sufficient research facility capacity so that they could invite the broader scientific community to use the ISS; they added that had NASA intended to use the ISS to support only its own research, the agency could have truncated construction and utilized 100 percent of its facilities. NASA officials told us that they expect to be able to fill the surplus ISS capacity with research by National Laboratory users.

NASA faces several significant challenges that may impede efforts to maximize research utilization of the ISS: (1) the impending retirement of the Space Shuttle in 2010, reduced launch capabilities once the shuttle retires, and the potential for a gap between retirement and follow-on U.S. vehicles; (2) high costs for launches and for developing research hardware, and a lack of dedicated funding streams for ISS research; (3) limited crew time available for research due to a fixed crew size and other requirements for crew time; and (4) an uncertain future for the ISS beyond 2015.

The Space Shuttle is currently slated to retire in 2010, and as of November 2009 only five launch opportunities remain. We have previously reported that the ISS will face a significant cargo supply shortfall without the Space Shuttle. Further, since NASA has scheduled the few remaining Space Shuttle flights to carry equipment required for assembly, operations, and maintenance, there may be limited cargo capacity for research payloads. Potential researchers and others have told us that they have faced difficulty in getting payloads scheduled on board the Space Shuttle in a reasonable amount of time.
Following the retirement of the Space Shuttle in 2010, NASA will rely on an assortment of vehicles to provide the necessary logistical support and crew rotation capabilities required for the ISS, but none will offer the same cargo capabilities as the Space Shuttle in upmass (delivering cargo to the ISS) and downmass (returning cargo to Earth). NASA will rely heavily on Roscosmos—the Russian Federal Space Agency—and its launch vehicles to provide crew transport to the ISS once the Space Shuttle retires, and has signed agreements for future service. Some of the other vehicles are already supporting the ISS, while the international partners, the commercial sector, and NASA are developing others. As we have previously reported, NASA expects Russia to launch six Progress flights each year from 2009 through 2011, with NASA cargo spread across the equivalent of four Progress flights in 2009, two in 2010, and one in 2011. NASA currently does not plan to utilize the Progress vehicle beyond 2011.

International partners' vehicles alone cannot fully satisfy ISS cargo needs. Existing and planned international partner vehicles have much less upmass capability than the Space Shuttle and no downmass capability for research payloads. Overall, NASA now faces a 40-metric-ton (approximately 88,000-pound) usable cargo shortfall from 2010 through 2015. To mitigate this shortfall, NASA has turned to commercial developers to provide launch vehicles. These vehicles are known as Commercial Orbital Transportation Services (COTS) vehicles, and two companies, Orbital Sciences Corporation (Orbital) and Space Exploration Technologies Corporation (SpaceX), are each developing future vehicles. The Russian Soyuz vehicle can transport downmass (though minimal) and return crew from the ISS after the Space Shuttle is retired, and the new commercial SpaceX vehicle is also expected to be able to return downmass. Delay of downmass capability will make it difficult to transport research back to Earth for analysis. Table 3 provides specifics on the available and planned vehicles.

As we have previously reported, the contractors responsible for the COTS vehicles have experienced delays in demonstration milestones and are at risk for further delays. Both SpaceX and Orbital have had schedule slippage in the development of their launch vehicles. For SpaceX, this has contributed to anticipated delays of 2 to 4 months in most of its remaining milestones. Orbital has recently revised its agreement with NASA to demonstrate a different cargo transport capability than it had originally planned and delayed its demonstration mission date from December 2010 until March 2011. We have also previously reported that there have been delays in the development of the Constellation program and that further delays were likely, which would make achieving NASA's 2015 first crewed launch date difficult. We have noted that a delay in the availability of commercial partners' vehicles in 2010 would lead to a significant scaling back of NASA's use of the ISS for scientific research; however, NASA officials told us that they believe recent developments (for example, the addition of a Space Shuttle flight) have shifted the horizon for serious impacts from COTS delays into 2011.
NASA officials said that the impact of COTS failures or significant delays would be similar to the post-Columbia scenario, when NASA operated the ISS in a "survival mode": NASA moved to a two-person crew, paused assembly activities, and operated the ISS at a lower altitude to relieve the propellant burden. NASA officials stated that if the COTS vehicles are delayed, they would pursue a course of "graceful degradation" of the ISS until conditions improve or until NASA's commitment to operate the ISS expires at the end of 2015. In such conditions, the ISS would conduct only minimal science experiments. NASA officials told us that they are basing logistics requirements for the ISS on engineering estimates of component reliability but will not know the full accuracy of these estimates until further operating experience is gained.

NASA currently plans to use 50 percent of the United States' allocated launch capacity to transport research cargo to the ISS and 47 percent of the United States' allocation to transport research cargo returning to Earth for postflight analysis (not including operational cargo). However, these projections may change, and they are based on the assumption that all follow-on and replacement launch vehicles will begin operations as scheduled; significant delays or new NASA requirements to provide logistics and resupply cargo could alter this projection and, as noted, may result in cargo shortfalls and potentially the scaling back of ISS research. ESA already wants to launch more research cargo to the ISS than it is allotted under international agreements; NASA's planning document states that ESA has a demand for 1.8 metric tons of cargo beyond its allotment.

NASA officials have stated that it is significantly more expensive to conduct research on board the ISS than on Earth, and the agency now views lack of funding for research as the major challenge to full research utilization of the ISS. According to NASA, one of the major cost drivers is the cost of launching payloads to the ISS. When the Space Shuttle retires, Roscosmos and later the commercial launch partners will be able to set launch costs. Costs to ISS users vary: NASA signed a memorandum of understanding (MOU) with NIH as an ISS National Laboratory user to launch biomedical experiments to the ISS, and NASA officials have stated that the agency will work with NIH to determine the demand for launch services and accommodate NIH payloads on the margins of NASA operations and maintenance flights as space allows. However, NASA officials told us that the agency has set no money aside for ISS National Laboratory payload development or transportation, and it may be unable to provide complimentary launch opportunities to National Laboratory users. We asked NASA for launch cost estimates; officials gave an estimate of $44,000 per kilogram (about 2.2 pounds), along with the caveat that the costs to develop and launch experiments vary widely depending on the experiment. Researchers we spoke with gave higher estimates for payload costs. USDA reported that the average payload cost for its experiments, each contained in a compartment the size of a shoebox, was about $250,000. Though specific figures will vary depending on the nature of the payload, these types of costs may be prohibitive to researchers who are responsible for seeking their own funding.
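NASA's per-kilogram estimate supports only rough payload budgeting, but it can be made concrete. The following is a minimal sketch, assuming the $44,000-per-kilogram figure cited above and a hypothetical 4-kilogram, shoebox-sized payload (actual costs vary widely with the experiment, as NASA cautioned):

```python
LAUNCH_COST_PER_KG = 44_000  # NASA's rough estimate, dollars per kilogram

def launch_cost_estimate(payload_mass_kg: float) -> float:
    """Rough launch cost for a payload of the given mass; excludes
    hardware development, integration, and crew-time costs."""
    return payload_mass_kg * LAUNCH_COST_PER_KG

# A hypothetical 4 kg shoebox-sized payload would cost on the order of
# $176,000 to launch under this estimate, before development costs,
# which is in the same ballpark as USDA's reported average of about
# $250,000 per payload.
print(f"${launch_cost_estimate(4):,.0f}")
```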
According to NASA officials, the National Laboratory designation does not guarantee an appropriation specifically for ISS National Laboratory research, and it is unclear whether NASA or other federal agencies will be able to provide any funding support to facilitate ISS utilization. NASA regards this lack of dedicated funding as the main current limiting factor for utilization of the ISS. One positive indication came from NIH, which issued a funding announcement indicating that it may make funding available for selected applicants. Researchers we spoke with agreed that funding opportunities and grants are irregular and limited, and that regular funding opportunities are essential for attracting researchers to any science program. NASA officials told us that funding for ISS research was $700 million in 2002 and is now approximately $150 million annually. According to NASA, this reflects a shift in budget priorities from funding research on the ISS to developing the Constellation program.

NASA also ranks limited crew time as a significant constraint for science on board the ISS. The size of the crew on board the station is constrained at six by the number of spaces available in the "lifeboats," the docked spacecraft that can transport the crew in case of an emergency. As such, at present crew time cannot be increased to meet increased demand. Further, crew time is shared between NASA and its international partners (JAXA, ESA, CSA, and Russia). According to NASA, the ISS crew members work 8.5 hours a day, and during this time they conduct maintenance, vehicle traffic operations, training, medical operations, human research experiments, and the experiments of NASA and the international partners. NASA documentation shows that the remaining crew time is spent eating, sleeping, and exercising. Figure 1 depicts the crew time allocations among NASA and its international partners, as negotiated in agreements. According to NASA, the USOS is allocated half of the crew time available on the ISS, with the other half going to the Russian segment. NASA told us that it and the international partners (excluding Russia) will have 35 hours per week of scheduled crew time to share in conducting research. As shown in figure 1, NASA's share will be approximately 27 hours per week to devote to research; of this time, NASA plans to use 56 percent for its own Human Research Program studies. The remaining 44 percent (or approximately 12 hours per week) will be available for other NASA research and National Laboratory investigations, as sketched below.

Though available crew time may increase as the six-person crew becomes more experienced at operating the ISS efficiently or if the crew volunteers its free time for research utilization, crew time for U.S. research remains a limiting factor in that it cannot be scaled up to meet demand. According to NASA officials, potential National Laboratory researchers should design their experiments to be as automated as possible or to minimize the crew involvement required, to ensure that the experiments are accepted for flight. For example, NASA told potential NIH grant applicants that an experiment requiring 75 hours or more of crew time over one 6-month period would be too intensive and would likely be rejected, though according to NASA no investigation to date has required that much crew time. Not all ISS research will require much crew intervention or be constrained by available crew time.
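The following is a minimal sketch of the crew-time arithmetic referenced above, using the planning figures NASA provided; the 6-month, 75-hour check is illustrative only:

```python
USOS_RESEARCH_HOURS_PER_WEEK = 35   # shared by NASA, ESA, JAXA, and CSA
NASA_SHARE_HOURS = 27               # NASA's approximate weekly share
HRP_FRACTION = 0.56                 # share devoted to the Human Research Program

# Hours per week left for other NASA research and National Laboratory use.
national_lab_hours = NASA_SHARE_HOURS * (1 - HRP_FRACTION)
print(f"{national_lab_hours:.1f} hours/week")  # roughly 12 hours per week

# Illustrative check: an experiment needing 75 crew-hours in one 6-month
# (26-week) increment would consume a large fraction of that pool, which
# is why NASA flagged such experiments as too crew-intensive.
weeks, demand_hours = 26, 75
pool_hours = national_lab_hours * weeks
print(f"{100 * demand_hours / pool_hours:.0f}% of the available pool")
```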
Not all ISS research will require much crew intervention or be constrained by available crew time. Areas such as technology development may require less crew intervention; for example, the Materials International Space Station Experiment mounts samples on the exterior of the ISS and, once set up, requires little crew intervention. NASA's budget currently reflects plans for retirement of the ISS at the end of 2015. The Review of U.S. Human Space Flight Plans Committee has proposed extending ISS operations until 2020 in three of its five possible scenarios, and Congress has directed NASA to take steps to ensure that the ISS remains a viable and productive facility for the United States through at least 2020, but there has not yet been a commitment to continue operations. If operations are not extended, there will be only 5 years between the end of construction in 2010 and ISS retirement in 2015 in which to utilize the ISS research facilities. Under this deadline, the potential for long-term science and for building a robust ISS user community is limited. The uncertainty of the ISS program beyond its 2015 retirement date has deterred members of the scientific community from considering the station as a platform for fundamental research. According to researchers, they require sufficient time (months to years) to develop and conduct an experiment and then to replicate their research so they can seek publication in peer-reviewed journals. Officials from each of the other science programs we studied and many researchers we spoke with commented on the importance of having a program with a reasonable and definitive window of available time for scientists and graduate students to fully develop and implement their experiments. They added that longevity in a research program ensures that prospective and current users, whether academic or commercial, will have an opportunity to work in a viable laboratory where they can invest in their research. Researchers told us that they may be unlikely to get involved with ISS research if they do not have assurance that the ISS will operate long enough for them to develop and execute their research. They emphasized that knowing they have sufficient time to conduct their experiments gives them not only the time to teach the next generation of scientists—that is, graduate students whose dissertations rely on the completion of research projects—but also the opportunity to reproduce their experiments. Publishing research results, a requirement for many academic scientists, often requires that results can be duplicated, which may not be possible on board the ISS if the research utilization window is only 5 years. NASA's international partners are using their research facility allotments, and two have recently expressed interest in extending the operation of the ISS beyond 2015. The Director General of ESA told the Review of U.S. Human Space Flight Plans Committee that he believed the decision about the future of the ISS should be a joint decision of all the partner nations and that if ISS research utilization is not successful, the program would be a failure. Similarly, the head of Roscosmos advised the United States to prolong operation of the ISS beyond 2020. Retirement of the ISS is in part predicated on the life of its components.
NASA's plan for operating and using the ISS for research through 2020—required by the NASA Authorization Act of 2008—states that while some of the ISS's hardware was originally designed for a 30-year life, most was tested to the 15-year life requirement. This means that there are unknowns that prevent an absolute definition of the lifetime capability of the ISS and that additional testing and analysis are required. We did not assess the technical issues surrounding an extension of ISS operations. In addition to the transportation issues, high costs and limited funding, and limited crew time—challenges exacerbated by the possibility of retirement of the ISS in 2015—NASA may face challenges in the management and operation of ISS National Laboratory research. There is currently no direct analogue to the ISS National Laboratory, and though NASA manages research programs at the Jet Propulsion Laboratory and its other centers that it believes possess characteristics similar to those of other national laboratories, NASA has limited experience managing the type of diverse scientific research and technology demonstration portfolio that the ISS could eventually represent. If utilized to its full capabilities, the ISS research program could cross multiple research disciplines and involve researchers from the academic, governmental, and commercial sectors, and managing such a program may be outside NASA's core competencies. We studied other national laboratories and large, multidisciplinary science programs to learn how they are managed and to identify lessons learned that could be applicable to management of the ISS. We visited Brookhaven and Argonne National Laboratories and spoke with officials from several other large science programs, including the National Energy Technology Laboratory, DOE's only government-owned, government-operated (GOGO) national laboratory; the Space Telescope Science Institute, a nonprofit science center that works for NASA to coordinate research for the Hubble Space Telescope and the forthcoming James Webb Space Telescope; the NSF Office of Polar Programs, which manages research conducted in the Arctic and Antarctica; and WHOI, a private, nonprofit institute that conducts, coordinates, and supports a range of oceanographic research on board three large research ships, one coastal vessel, and submersible vessels. We identified three common practices that may be applicable to whatever management structure NASA decides on for managing all U.S.-sponsored ISS research: central management of research, robust in-house technical expertise, and significant user outreach. NASA has recognized the potential value of national laboratory practices—particularly engaging an outside partner for laboratory management. At each research institution we studied, we found a management structure that typically entailed a contractor or a nonprofit consortium of universities overseeing the operation of the laboratory, with researchers dealing directly with that management body to initiate and develop their research. For example, Brookhaven and Argonne are federally funded research and development centers (FFRDC) and operate as government-owned, contractor-operated (GOCO) facilities. According to officials at DOE and the national laboratories, the role of the government in a GOCO arrangement is to oversee the contract and the contractor, as well as to provide direction to the management of the laboratory.
They added that the contractor manages the science conducted and can expand and contract easily to bring in needed expertise as research priorities and user needs evolve; because the contractor is not constrained by federal General Schedule pay scales, it can offer high salaries to secure world-class scientific talent. WHOI has a central management body but was the only facility we studied that does not manage its own peer-review process or select the research conducted in its facilities. Instead, WHOI has the agency sponsoring the research manage this process, in part because most of WHOI's research ships are owned by NSF and the Office of Naval Research, and the agency that owns a ship gets priority for use of the research facilities. NASA officials told us they think that the ISS follows a model similar to WHOI's because its National Laboratory facilities are open for use by any interested party that can provide its own funding, and while NASA evaluates and selects its own ISS research, it leaves the selection of ISS National Laboratory research to the sponsors of the research. However, WHOI is a member of the University National Oceanographic Laboratory System (UNOLS), a central organization that is involved in monitoring, prioritizing, and scheduling research to be conducted on various ocean laboratory vessels. According to UNOLS documentation, it has an elected UNOLS Council with broad representation—more than 61 academic institutions and national laboratories are part of UNOLS—and it provides some strategic research selection and prioritization functions to make efficient use of finite resources. According to NASA, ISS National Laboratory research is managed through the Assistant Associate Administrator for the ISS in SOMD, working in cooperation with the ISS National Laboratory Office, which is within the ISS Payloads Office. NASA officials told us that the role of these offices is to optimize and maximize available ISS resources, but the ISS National Laboratory Office does not determine the content of the science flown to the ISS; it relies on the sponsor to evaluate the research. Instead, NASA prioritizes payloads based on operational or tactical needs, such as whether parts or spares need to be flown to the ISS and whether NASA can accommodate the research. Because of the congressional designation of the ISS as a national laboratory, NASA has opened the ISS to several organizations other than NASA to select and fund science on the ISS. Existing sponsors include (1) NASA, through either ESMD or SOMD; (2) other government agencies that have signed MOUs with NASA, including NIH, USDA, the Department of Defense (DOD), and DOE; (3) commercial or nonprofit organizations that have signed Space Act agreements with NASA; (4) organizations that have other formal partnerships with NASA, for example, NSBRI, which has a cooperative agreement with NASA; and (5) the international partners. According to NASA, as with WHOI, selection of the content of ISS research is decentralized and conducted by the sponsor, and each sponsor has its own priorities for the research it supports. Additionally, NASA officials told us that though most research—including NASA, DOD, and NIH research—is subjected to a peer-review process to ensure that the investigation has scientific merit, other (especially commercial) research is not necessarily peer reviewed.
Thus, the ISS currently lacks one central body that oversees the selection and prioritization of all U.S. ISS research and that can strategically decide what research should be conducted and when. This may become more problematic if there is future overlapping demand for ISS facilities from various users, including NASA, other federal agencies, and the academic and corporate sectors. NASA has considered management alternatives to coordinate ISS research, including FFRDC or GOCO arrangements, as well as cooperative agreements, a government corporation, and hybrid structures. NASA has also reported several times on this issue, including in its 1998 plan for the ISS, which discussed making a special nongovernmental organization (NGO) responsible for selecting and planning research on board the ISS, and more generally in its 2005 Organizational Model Evaluation Team report. Other entities have also recommended that NASA establish such a management structure. For example, the National Research Council recommended that NASA establish an NGO to manage the ISS under the direction of institutions representing the research community, and in 2000, the Computer Sciences Corporation recommended the creation of a space station utilization and research institute to manage ISS utilization. Congress has also directed NASA to develop plans involving an external management body: in the National Aeronautics and Space Administration Authorization Act of 2000, Congress instructed the agency to submit an implementation plan to incorporate the use of an NGO to conduct research utilization and commercialization management activities of the ISS, and the NASA Authorization Act of 2008 required NASA to develop a plan to support operations and utilization of the ISS beyond 2015, including a research management plan that identified who would manage U.S. research. Potential management structures noted by the act included an internal NASA office or an external relationship governed by a contract, cooperative agreement, or grant arrangement. The plan NASA submitted in response to this requirement did not mention management by any outside entity. NASA officials told us that they are currently evaluating options for a future ISS management structure that may include an external entity, but that they have concerns. For example, they stated that they are concerned that adding a layer of bureaucracy between NASA operations and researchers could further complicate the process of getting investigations onto the ISS. Additionally, they do not think it wise to establish such a management structure too early, for example, before the transportation challenge is addressed. Further, NASA officials told us that they are concerned with ensuring that such a structure would have an appropriate mix of internal and external expertise, and that having the appropriate personnel is ultimately more important than the type of structure (such as a GOCO versus another structure) selected. NASA officials also told us that NASA cannot select all U.S. ISS research because funding comes from numerous sponsors with various missions; however, the national laboratories we studied do not have only one funding agency either. For example, Argonne officials told us that they receive more than half of their funding from DOE but that the laboratory accommodates research sponsored by others. According to NASA officials, though NASA does not centrally select and prioritize all U.S.
ISS research, it uses central tracking of research accomplishments and discipline-based working groups to prevent research duplication. The national laboratories and science programs we studied have capable in-house scientific and technical experts (generally provided by the management body) who can consult with and provide guidance to users. These institutions make a concerted effort to hire scientists with expertise relevant to the research conducted at the institute or laboratory. For instance, in addition to conducting their own research, the scientists and engineers who work for the management body are also available to assist visiting researchers in developing their research, drafting their proposals, and ultimately conducting their experiments. In some cases, staff scientists are available to provide user support 24 hours a day, 7 days a week. The national laboratories we studied consider the use of in-house scientists and engineers to conduct research and to serve as advisors to laboratory users a core competency. Because of internal restructuring in the recent past, NASA has decentralized its expertise in key scientific disciplines germane to ISS research, and a small number of personnel ultimately left the agency. According to congressional testimony given by an ISS researcher and according to others we spoke with, NASA has reassigned a number of experts within the agency whose experience would have been helpful for biological and microgravity research on board the ISS. Specifically, in the mid-1990s, NASA began making cuts to its gravitational biology program, and in 2004, it merged its Office of Biological and Physical Research, including the Physical Sciences Division, into ESMD. NASA ultimately eliminated research in these areas that was not deemed essential to achieving the Vision. Though NASA may have decided that these experts were not necessary given its new internal direction in research goals, the lack of these personnel complicates supporting other researchers who are using the available ISS research facilities and conducting research separate from NASA's goals. For example, according to a senior official from the nonprofit USRA, NASA has a contract with USRA at Glenn Research Center to assist researchers conducting studies at the National Center for Microgravity Research because NASA no longer has the broad base of scientific experts available to provide this service to potential microgravity researchers. NASA directs other users to implementation partners, or companies with scientific and technical expertise that can assist users in developing hardware and experiments. Because NASA has lost scientific expertise in certain areas, there is a shortage of experts able to assist ISS researchers who are not conducting research pertinent to NASA's goals in developing and conducting their experiments. The national laboratories and other large, user-based science institutes we studied place a high priority on conducting outreach to current and potential users and hold conferences and workshops on a regular basis for this purpose. For example, NSF hosts the New Investigator Workshop to recruit scientists who want to know more about the polar programs and uses this opportunity to tell them how to draft a research proposal to conduct experiments in the Arctic and Antarctic.
The national laboratories reserve portions of their budgets to pay for speakers to attend lectures and workshops, and they also host “schools” where scientists can come together and stay at the laboratory to study the basic and advanced research techniques applicable to specific laboratory facilities. One facility at Brookhaven has developed a piggybacking concept in which new investigators are paired with an experienced user to learn how science is conducted at the facility. The national laboratories and science institutes use educational outreach not only to attract scientists and companies but also to generate public interest. The national laboratories also participate in the National User Facility Organization, which consists of representatives from 30 user facilities, attracts about 25,000 users, and provides a unified voice for the scientific community and a forum for sharing their work. Officials we spoke with from several of these facilities told us that managing their user community and ensuring that their facilities were responsive to user needs were critical to ensuring continuing interest in using their facilities. NASA's ability to conduct large-scale outreach initiatives on its own has been limited by existing resources and other factors. NASA's ISS National Laboratory Office has a small staff for outreach activities (recently increased to five employees, not all of whom are exclusively dedicated to outreach; NASA officials expect eventually to have as many as 10 staff), and NASA conducts outreach with funding from its budget for space operations. NASA has reached out to researchers and other interested parties in an effort to attract users to the ISS National Laboratory. For example, the agency established National Lab Pathfinders, under which NASA designated companies and other entities for their ability to engage in early utilization of the ISS, with the aim of inaugurating the ISS National Laboratory research program. According to NASA, this program has resulted in six flight experiments from commercial partners and two flight experiments from USDA. NASA has also teamed with NIH, which recently made a program announcement for ISS research. NASA has conducted outreach to potential NIH grant applicants and participated in a meeting in June 2009 where NASA and NIH officials met with potential researchers to discuss ISS research capabilities. This meeting brought potential researchers together with NASA, NIH, and “implementation partners” that can supply researchers with specialized hardware for their research; information about hardware and research capabilities was discussed. Based on our analysis, observations of outreach practices at other national laboratories and science institutions, and comments from researchers we spoke with, we believe that NASA needs to conduct more outreach and education. We were told that some potential researchers in industry learned about the ISS only because they had past employment or business ties with NASA or because they heard about ISS research opportunities via a third party advocating for ISS utilization. Others told us that they knew nothing of the value of ISS research until it was explained to them one-on-one and that a broader education campaign might be a good way to interest more users.
In addition to their other outreach efforts, the national laboratories we studied both have robust Web sites with considerable information that would be helpful in educating potential users. Though NASA has information on its ISS-related Web sites about the ISS and the research conducted there, the focus appears to be on presenting successes rather than on making user educational information—such as complete information on available hardware, available implementation partners, the opportunities of microgravity research, and details about research results (including failures and their causes)—easy to find. Unless the decision is made to extend ISS operations, NASA has only 5 years to execute a robust research program before the station is deorbited, which is little time to establish a strong utilization program. A viable user base will not develop without sufficient launch opportunities to permit recurring access, consistent funding opportunities, sufficient crew time to conduct research, and longevity of the ISS. Despite these challenges, however, the on-orbit laboratory offers the potential for scientific breakthroughs, a unique test bed for new technologies and applications, and a platform for increased international collaboration in research. Having a central body that is able to represent all the ISS user communities (including NASA, other federal agencies, the commercial sector, and academia); oversee the selection of all ISS research; and ensure that the research being conducted is meritorious, peer-reviewed where appropriate, and not duplicative may assist in achieving full utilization of the ISS and its unique capabilities and maximize the possibility of achieving research successes on board the ISS. There is no direct analogue for how something like the ISS National Laboratory should or could be managed, so the specific structure to be developed will require further consideration. If the decision is made to cease ISS operations in 2015 and not to provide additional resources for research, there are management actions focused on education and outreach that could be easily and quickly implemented to allow NASA to better support and inform users. If the decision is made to extend the ISS past its current retirement date of 2015 and to try to fully utilize all ISS research resources, then there are several major actions that NASA can take to build a robust user base and ensure that high-caliber science is conducted. These actions will take more time—potentially years—and additional resources to implement. In the limited time remaining, it may not be possible to establish a management structure similar to those found at other national laboratories, which have been in existence for much longer than the ISS, but NASA may be able to leverage existing agreements with management bodies to provide a faster solution, or leverage the scientific and technical expertise of other sponsoring federal agencies (such as NIH) that have experience in conducting peer-reviewed research in areas pertinent to their missions. If the administration and NASA decide to retire the station in 2015 and to continue utilizing the ISS without increasing resources, we recommend that the NASA Administrator take the following four steps: (1) Develop and implement a plan to broaden and enhance ongoing outreach to potential users, including those in the commercial sector, with consideration given to the tight time frames for the ISS.
(2) Further develop online ISS information materials to provide easy access to details about laboratory facilities, opportunities presented by microgravity, available research hardware, resource constraints, and the results of all past ISS research, including successes and failures. (3) As information develops, inform users how launch capabilities will be provided to users of the ISS, including how regular these launches will be and what the cost will be (if any) to the users. (4) If full utilization of available USOS facilities on board the ISS is not possible, consider sharing excess research capacity with the international partners on a quid pro quo basis. If the administration and NASA decide to extend ISS operations beyond 2015 and to provide the resources required for enhanced utilization of the ISS research facilities, we recommend that the NASA Administrator take the following three steps: (1) Implement the first three steps recommended above. (2) Establish a body that centrally oversees U.S. ISS research decision making, including selecting all U.S. research to be conducted on board and ensuring that all U.S. ISS research is meritorious and valid; this body should also be able to strategically prioritize research proposed by many potential sponsors. (3) Ensure that potential and actual ISS users have access to scientific or technical expertise, either in-house or external, in the areas of research relevant to the ISS that can provide assistance to users as required. In commenting on a draft of this report, NASA concurred with all seven recommendations. NASA's written comments are reprinted in appendix II. NASA also provided technical comments, which were incorporated as appropriate. We are sending copies of this report to NASA's Administrator and interested congressional committees. The report also is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. To identify how the International Space Station (ISS) is being utilized at present, we reviewed National Aeronautics and Space Administration (NASA) documentation pertaining to available on-station hardware and current scientific investigations using this hardware, including the Consolidated Operations and Utilization Plan 2008-2015; the Reference Guide to the International Space Station; and NASA's ISS Science Prioritization Desk Instruction. We also interviewed NASA officials at headquarters and Johnson Space Center, including officials from the Space Operations Mission Directorate (SOMD) and the Exploration Systems Mission Directorate (ESMD). We also spoke with officials from the Japanese Aerospace Exploration Agency (JAXA) and the European Space Agency (ESA), and we met with an official from The Boeing Company, the contractor responsible for the design, development, testing, and operation of the ISS. To identify how the ISS will be utilized once assembly is completed, we analyzed NASA documentation identifying the on-station hardware that will be available once assembly is complete and NASA projections for future NASA requirements.
We also met with officials from NASA SOMD and NASA ESMD, and we spoke with researchers from academia, specifically researchers from North Carolina State University, Arizona State University, Case Western Reserve University, the University of Colorado-Boulder, the Medical College of Wisconsin, the Georgia Institute of Technology, Northwestern University, and Pennsylvania State University. These researchers were largely selected because they had provided congressional testimony about conducting ISS research or because they were recommended as contacts by NASA or the National Academies of Science. We interviewed implementation partners for NASA, including BioServe Space Technologies and the Universities Space Research Association. We also attended NASA presentations to the National Academies of Science Decadal Survey on Biological and Physical Sciences in Space Committee regarding the ISS and its capabilities and utilization. It is important to note that no good metric exists for precisely quantifying the output of scientific research facilities, including the ISS. For example, the number of experiments conducted is not a good metric for measuring utilization because it is unclear what baseline should be used for comparison, and the number of publications is not ideal since not all research is ultimately published. We also considered analyzing the use of electrical power on each utilization rack to determine how frequently the racks were powered up, but the racks do not have power meters, and thus these data cannot be collected. To identify the challenges to fully maximizing the ISS, we interviewed NASA officials in the ISS Program Office as well as in NASA's ESMD and SOMD and a former NASA official. We reviewed reports from the National Research Council—an organization consulted by NASA on its ISS research program—including Factors Affecting the Utilization of the International Space Station for Research in the Biological and Physical Sciences (2003), Institutional Arrangements for Space Station Research (1999), Review of Goals and Plans for NASA's Space and Earth Sciences (2006), and Review of NASA Plans for the International Space Station (2006). We also met with officials from the National Academies of Science—whom NASA consulted on several occasions to review ISS research goals and management—and reviewed their report Elements of a Science Plan for the North Pacific Research Board. We reviewed the Computer Sciences Corporation's International Space Station Operations Architecture Study (2000), which was prepared for NASA. We also interviewed former, current, and prospective scientists and researchers who have had experience conducting research on board the ISS or who were interested in conducting future research, including the academic researchers listed above as well as officials from the WiCell Research Institute, Zero Gravity Inc., and Ad Astra Rocket Company. We also spoke with officials from the Department of Agriculture, the National Institutes of Health, and the National Space Biomedical Research Institute, which have existing agreements or memorandums of understanding with NASA to conduct ISS research. Further, we interviewed officials from the Universities Space Research Association and BioServe Space Technologies, both of which assist scientists in conducting space research with NASA.
To determine how NASA is managing the ISS, we interviewed NASA officials and reviewed NASA plans and documentation, including its Consolidated Operations and Utilization Plan 2008; ISS Utilization Management Concept Development Study; Research and Utilization Plan for the International Space Station; Commercial Development Plan for the International Space Station; Reference Guide to the International Space Station; NASA ISS Prioritization Desk Instruction; Human Research Program: Integrated Research Plan; Advanced Capabilities Division: International Space Station (ISS) Science Portfolio, Determination and Management; NASA Report to Congress: Regarding a Plan for the International Space Station's National Laboratory; Plan to Support Operations and Utilization of the International Space Station Beyond FY 2015; and NASA's Organizational Model Evaluation Team Process, Analysis, and Recommendations. We also reviewed NASA's international partner agreements, as well as various National Research Council reports, including Factors Affecting the Utilization of the International Space Station for Research in the Biological and Physical Sciences (2003), Institutional Arrangements for Space Station Research (1999), Review of Goals and Plans for NASA's Space and Earth Sciences (2006), and Review of NASA Plans for the International Space Station (2006). We also reviewed the Computer Sciences Corporation's ISS Operations Architecture Study (2000) and prior GAO reports. To determine how NASA's management of the ISS compares with the management of other national laboratories and large science institutes, we spoke with officials at the Department of Energy (DOE) who are responsible for the DOE national laboratories. We also spoke with officials from the National Energy Technology Laboratory, DOE's only government-owned, government-operated laboratory. Further, we visited Argonne National Laboratory (Illinois) and Brookhaven National Laboratory (New York) and spoke with officials at these laboratories representing the National User Facility Organization. We also spoke with officials from the Space Telescope Science Institute, the body that manages NASA's Hubble Space Telescope. We selected these facilities in part because of NASA's suggestions and in part because they are all multidisciplinary facilities conducting a wide range of research tasks. To understand the challenges posed by conducting research in remote, hostile environments with high logistics costs, we spoke with officials at the Woods Hole Oceanographic Institution, which operates oceangoing research ships and submersibles in remote and potentially hazardous environments, and we met with officials from the National Science Foundation who are responsible for the Office of Polar Programs, which manages research conducted in the Arctic and Antarctic. These two programs offer some analogue to conducting research in space. We conducted this performance audit from November 2008 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, James L. Morrison, Assistant Director; Greg Campbell; Cheryl M. Harris; C.
James Madar; Diana L. Moldafsky; Kenneth E. Patton; Timothy M. Persons; Leah L. Probst; and Alyssa B. Weir made key contributions to this report. NASA: Constellation Program Cost and Schedule Will Remain Uncertain Until a Sound Business Case Is Established. GAO-09-844. Washington, D.C.: August 26, 2009. NASA: Commercial Partners Are Making Progress, but Face Aggressive Schedules to Demonstrate Critical Space Station Cargo Transport Capabilities. GAO-09-618. Washington, D.C.: June 16, 2009.
In 2010, after about 25 years of work and the expenditure of billions of dollars, the International Space Station (ISS) will be completed. According to the National Aeronautics and Space Administration (NASA), the ISS crew will then be able to redirect its efforts from assembling the station to conducting research. In 2005, Congress designated the ISS as a national laboratory; in addition, the NASA Authorization Act of 2008 required NASA to provide a research management plan for the ISS National Laboratory. In light of these developments, the Government Accountability Office (GAO) was asked to review the research use of the ISS. Specifically, GAO (1) identified how the ISS is being used for research and how it is expected to be used once completed, (2) identified challenges to maximizing ISS research, and (3) identified common management practices at other national laboratories and large science programs that could be applicable to the management of the ISS. To accomplish this, GAO interviewed NASA officials and reviewed key documents related to the ISS. GAO also studied two ground-based national laboratories and several large science institutions. The ISS has been continuously staffed since 2000 and now has a six-member crew. The primary objective for the ISS through 2010 is construction, so research utilization has not been the priority. Some research has been and is being conducted as time and resources permit while the crew on board performs assembly tasks, but research is expected to begin in earnest in 2010. NASA projects that it will utilize approximately 50 percent of the U.S. ISS research facilities for its own research, including the Human Research Program, opening the remaining facilities to U.S. ISS National Laboratory researchers. NASA faces several significant challenges that may impede efforts to maximize utilization of all ISS research facilities, including (1) the impending retirement of the Space Shuttle in 2010 and reduced launch capabilities for transporting ISS research cargo once the shuttle retires; (2) high costs for launches and no dedicated funding to support research; (3) limited time available for research due to the fixed size of the crew and competing demands for the crew's time; and (4) an uncertain future for the ISS beyond 2015. NASA is researching the possibility of developing a management body—including internal and external elements—to manage ISS research, which would make the ISS National Laboratory similar to other national laboratories. Though there is no existing direct analogue to the ISS, GAO studied two national laboratories and several other large science institutions and identified three common practices that these institutions employ that could benefit the management of ISS research. (1) Centralized management body: At each of the institutions GAO studied, there is a central body responsible for prioritizing and selecting research, even if there are different funding agencies. NASA's ISS managers are currently not responsible for evaluating and selecting all research that will be conducted on the ISS, leaving this to the research sponsor. (2) In-house scientific and technical expertise: The institutions GAO studied have large staffs of in-house experts that can provide technical and engineering support to users. NASA's staff members in ISS fundamental science research areas have been decentralized or reassigned, limiting its capability to provide user support.
(3) Robust user outreach: The laboratories and institutes GAO studied place a high priority on user outreach and are actively involved in educating and recruiting users. NASA has conducted outreach to potential users in the public and private sectors, but its outreach is limited in comparison.
The laws and regulations that govern federal procurement are designed to foster competition and to promote desirable social objectives, among other goals. The Congress has long encouraged agencies to ensure that small businesses have an opportunity to participate in federal procurements and has authorized agencies to reserve certain requirements for award to small businesses. For example, in 1988 the Congress established an annual governmentwide goal of awarding not less than 20 percent of prime contract dollars to small businesses, and in 1997 it increased this goal to 23 percent. When all the laws and regulations designed to achieve the procurement system's objectives were considered together, some came to believe that the result was a complex and unwieldy system that left little room for agencies to exercise sound business judgment in satisfying their needs. Two pieces of reform legislation—the Federal Acquisition Streamlining Act of 1994 (FASA) and the Clinger-Cohen Act of 1996—were passed to address these problems as well as other government acquisition and investment-related concerns. Each act included provisions designed to streamline the procurement system, increase its responsiveness, and make it more efficient. As agencies began to implement acquisition reform initiatives, representatives of small businesses began to express concerns that the initiatives would have an adverse effect on small businesses. Agencies combined existing contracts into fewer, larger contracts—referred to as "bundled contracts"—to streamline procurement and reduce contract administration costs. Questions were raised about the extent to which contract requirements were being bundled and the effect that such bundling had on small businesses' ability to participate in federal procurement. In light of these concerns, the Congress amended the Small Business Act to create a legislative definition of contract bundling. As amended, the act defines contract bundling as the consolidation of two or more procurement requirements for goods or services previously provided or performed under separate, smaller contracts into a solicitation of offers for a single contract that is likely to be unsuitable for award to a small business concern because of the diversity, size, or specialized nature of the elements of the performance specified; the aggregate dollar value of the anticipated work; the geographic dispersion of performance sites; or any combination of these three criteria. The statute also defines a "separate, smaller contract" as a contract that has been performed by one or more small businesses or was suitable for award to one or more small businesses. The Small Business Act, as amended, states that, to the maximum extent practicable, agencies shall avoid unnecessary and unjustified bundling of contract requirements that precludes small businesses' participation in procurements as prime contractors. For those contracts considered to be bundled, the Small Business Act establishes criteria for determining whether contract bundling was necessary and justified, and it requires agencies that intend to bundle requirements to document that these criteria have been met. Our analysis of overall data on construction contract awards indicates that small businesses are continuing to win work and that their ability to compete is not being impaired. Since 1997, construction contract awards to small businesses have increased steadily in the face of a decline in overall construction awards.
As table 1 shows, awards to small businesses increased from about $1.6 billion to about $1.9 billion from fiscal year 1997 through fiscal year 2000 (in constant fiscal year 1999 dollars) while overall awards declined from about $6.6 billion to about $5.9 billion. Consequently, the share of awards going to small businesses increased from about 25 to about 32 percent. Our analysis also showed that this trend occurred despite an increase in awards to foreign firms and domestic firms performing abroad. The proportion of total DOD construction awards going to such firms increased from 10 percent in fiscal year 1997 to 14 percent in fiscal year 2000. Contracting officials pointed out that small business construction firms generally confine their operations to a specific region or geographic area, sometimes pursuing work only in the metropolitan area where the firm is headquartered. According to the officials, small business construction firms would typically not have the resources to perform work abroad and would be very unlikely to win contracts for such projects. Because the overall data do not identify bundled contracts, we were not able to measure the extent of contract bundling directly. Accordingly, we reviewed selected contracts to assess whether agencies were consolidating requirements. Of the 26 contracts we reviewed, 5 were large contracts that consolidated requirements to the point of limiting small businesses' participation. These particular contracts combined the components of a multiple-facility project under a single contract. Officials analyzed these projects to assess whether the work could accommodate smaller contractors but concluded that only by having a single contractor build the entire project could the work be performed efficiently. For example, the Navy requested proposals for the construction of a complex of eight facilities at the Stennis Space Center in Mississippi to house a Special Operations Forces unit. The Navy estimated the cost of the complex at $24.2 million. A large business received the contract, and the Navy official responsible for monitoring small business contracting indicated that small businesses would have had difficulty undertaking a project of this size. The contracting officer told us that because these facilities were clustered on a compact site and were served by common cooling and mechanical systems, a single contract was awarded for constructing the entire complex. Space at the construction site would not have accommodated multiple contractors. Contracting officials told us that when contracting to construct a multiple-facility project, they have historically considered whether components of the project can be acquired through separate, smaller contracts suitable for award to small businesses. However, if an analysis of site and project characteristics indicated that a single contractor would be necessary in order for the work to be performed efficiently, a single contract would be awarded. In cases like these, Small Business Administration (SBA) representatives normally review planned construction contracts and—when it appears unlikely that small businesses will be able to compete for a contract—may recommend alternative contracting approaches that will increase opportunities for small businesses. At the two locations we visited, contracting officers had submitted each of the contracts we reviewed to the appropriate SBA representatives and received approval to proceed with their planned contracting approach. 
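Returning briefly to the table 1 figures: the small business shares cited there can be recomputed directly from the rounded dollar amounts as a rough consistency check. The sketch below is illustrative, not GAO's methodology, and small differences from the reported shares reflect rounding in the published totals.

```python
# Recompute the small business share of DOD construction awards from the
# rounded dollar figures cited above (constant FY 1999 dollars, in billions).
awards = {
    1997: {"small_business": 1.6, "total": 6.6},
    2000: {"small_business": 1.9, "total": 5.9},
}

for year, a in awards.items():
    share = a["small_business"] / a["total"]
    print(f"FY {year}: {share:.0%} of construction awards went to small businesses")
# -> FY 1997: ~24%, FY 2000: ~32%. The report's "about 25" percent figure for
# FY 1997 was presumably computed from unrounded obligations, so the share
# recomputed from rounded billions differs slightly.
```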
Another six of the contracts we reviewed involved ordering construction projects under task-order contracts. In these cases, small businesses were able to participate. Task-order contracts define the broad outlines of the government's needs and permit the government to place orders to acquire specific work over a fixed period within stated dollar limits. Under FASA, agencies may award task-order contracts as part of initiatives to streamline federal procurement. To encourage competition, the Congress established a preference for awarding task-order contracts to multiple contractors rather than to a single one and for providing each of the contractors an opportunity to be considered for specific orders. To preserve the simplicity and flexibility of administering task-order contracts, the Congress provided contracting officers broad discretion to define the procedures used to evaluate offers and select contractors when placing orders. According to contracting officials, placing orders under task-order contracts allows them to acquire construction work more quickly and with less administrative effort than awarding individual contracts. Small businesses won some task-order contracts at the locations we visited. For example, the Army awarded six task-order contracts that provided for ordering construction and incidental design services over a 4-year period, including options. While the Army expected that individual projects would be valued in the $100,000 to $500,000 range, the Army could order up to $5 million in work annually or $20 million over the 4-year period. The contracts called for contractors to submit competitive proposals on orders and for the Army to select the most advantageous proposal. The Army awarded two of the six task-order contracts through competitions limited to small disadvantaged businesses participating in the 8(a) program. In the competition for orders under the contracts, these two small disadvantaged businesses won $4 million, or about 28 percent, of the work acquired under the six contracts through November 2000. Another nine of the contracts we reviewed combined the requirements for design and construction work on a single facility under a single contract. In these cases, again, small businesses were able to participate. Agencies have traditionally awarded separate contracts for design and construction work. As part of the Clinger-Cohen Act's initiatives to streamline federal procurement, however, the Congress authorized agencies to award single contracts covering both design and construction work, referred to as "design-build contracts." Under the statute, agencies use a two-phase approach to selecting a design-build contractor, initially inviting contractors to submit information on their qualifications and technical approach to the work. Agencies use this information to identify the most highly qualified contractors and invite these firms to submit more-detailed information, such as design concepts and cost or price data. On the basis of their experience to date, officials indicated that using design-build contracts has enabled them to reduce project completion times and costs. Small businesses competed successfully for design-build contracts at the locations we visited. For example, the Navy requested proposals for the design and construction of a wharf and an associated administrative, shop, and storage building estimated to cost about $8.4 million. Initially, 12 firms submitted information on their capabilities and past performance.
After evaluating this information, the Navy concluded that two small businesses and one large business were best qualified to undertake the project and invited these firms to submit a design proposal and price for the work. Navy evaluators considered the design solutions submitted by the two small businesses superior to that submitted by the large business. Since one of the small businesses also proposed the lowest price, this firm was awarded a contract for the work. Contracting officials pointed out that, to compete successfully for design-build contracts, construction firms must team up with design firms. Of these nine design-build contracts, small businesses won two and were considered among the most highly qualified contractors in the competition for two others. Of the remaining six contracts, five were separate contracts covering the construction of single facilities for which complete designs had been previously prepared. Small businesses won three of these five contracts. Finally, the sixth contract covered the design and construction of two closely related facilities. This final project was modest in scope, having an estimated cost of $5.7 million, and—although this contract was not awarded to a small business—two small businesses were considered among the most highly qualified contractors competing for the contract. DOD and SBA reviewed a draft of this report. DOD's Director of Small and Disadvantaged Business Utilization told us that DOD had no comments on the draft. SBA's written comments are contained in appendix I. SBA indicated that the report's analysis is useful in improving an understanding of contract bundling and contract consolidation. SBA noted that the report does not discuss the Small Business Competitiveness Demonstration Program and its applicability to construction contracts. An evaluation of this program was beyond the scope of this review. SBA suggested that we include an appendix detailing the cases reviewed. Accordingly, we have incorporated a list of the contracts reviewed in our discussion of the scope and methodology of the review. To identify trends in DOD's contracting for construction and use of small business contractors, we analyzed data from DOD's prime contract database for fiscal years 1997 through 2000. Using this database, we determined trends in total obligations on contracts for construction work by converting obligations into constant fiscal year 1999 dollars, using the gross domestic product deflator indexes in the President's Budget submission applicable to military outlays. In addition, we determined the shares of total obligations going to various classifications of business entities. We did not independently verify the accuracy of the information in DOD's database.
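The constant-dollar conversion described above follows standard deflator arithmetic: each year's nominal obligations are divided by that year's deflator index relative to the base year. The sketch below illustrates the calculation; the deflator values and the nominal amount shown are hypothetical placeholders, not the actual indexes or obligations from the President's Budget submission.

```python
# Illustrative conversion of nominal obligations to constant FY 1999 dollars.
# The deflator values below are hypothetical; the actual indexes come from
# the President's Budget submission applicable to military outlays.
deflators = {1997: 0.962, 1998: 0.974, 1999: 1.000, 2000: 1.021}  # FY 1999 = 1.0

def to_constant_fy1999(nominal_dollars: float, fiscal_year: int) -> float:
    """Deflate a nominal amount to constant FY 1999 dollars."""
    return nominal_dollars / deflators[fiscal_year]

# Example: a hypothetical $6.35 billion obligated in FY 1997
print(f"${to_constant_fy1999(6.35e9, 1997) / 1e9:.2f} billion in FY 1999 dollars")
```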
To assess the extent to which DOD's contracting officers had combined construction requirements, we reviewed the laws and implementing regulations defining contract bundling and reviewed large contracts for construction awarded at selected contracting offices. Using DOD's prime contract database, we ranked DOD's contracting offices in terms of total dollars awarded for general repair and construction work in fiscal year 1999. (Data for fiscal year 2000 were not available at the time we were planning our work.) After ranking DOD's contracting offices, we reviewed contracts at the highest-ranked Army and Navy contracting offices: the Army Corps of Engineers' Mobile District, Mobile, Alabama, and the Naval Facilities Engineering Command's Southern Division, Charleston, South Carolina. At these two locations, we reviewed all contracts valued at $5 million or more awarded during fiscal year 2000 for construction in the United States. We did not review contracts at an Air Force contracting office because the Army and Navy provide the Air Force with contracting support and the 28 highest-ranked offices were either Army or Navy contracting offices. Table 2 lists the 26 contracts—valued at $347 million—selected for review. For these contracts, we reviewed contract documentation to determine whether requirements had been combined, the reasons cited for combining requirements, and the extent of small businesses' participation in competition for the contracts. We also discussed these issues with contracting officials, the contracting offices' small business utilization monitors, and SBA representatives responsible for overseeing the selected contracting offices. Our results cannot be generalized to the universe of construction contract awards. We conducted our review from November 2000 through May 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense and the Acting Administrator of the Small Business Administration. We will make copies available to others on request. If you have any questions regarding this report, please contact me at (202) 512-4841 or Ralph Dawn at (202) 512-4544. Other key contributors to this report were Monty Peters, Ralph Roffo, and John Van Schaik. Small Business: Trends in Federal Procurement in the 1990s (GAO-01-119, Jan. 18, 2001). Federal Procurement: Trends and Challenges in Contracting With Women-Owned Small Businesses (GAO-01-346, Feb. 16, 2001). Small Businesses: Limited Information Available on Contract Bundling's Extent and Effects (GAO/GGD-00-82, Mar. 31, 2000). Defense Contracting: Sufficient, Reliable Information on DOD's Mentor-Protege Program Is Unavailable (GAO/NSIAD-98-92, Mar. 30, 1998). Base Operations: DOD's Use of Single Contracts for Multiple Support Services (GAO/NSIAD-98-82, Feb. 27, 1998).
Congress appropriates billions of dollars annually to construct buildings and other facilities for military training and operations. Small businesses have carried out a significant portion of this work. Congress and small business advocates, however, had become concerned that agencies were combining requirements into larger contracts that small businesses could not win. GAO examined the contract bundling of military construction requirements. GAO determined whether (1) overall data on construction contract awards to small businesses indicated that their ability to compete for contracts had been impaired and (2) selected Department of Defense (DOD) contracting offices had combined construction requirements in ways that hampered small businesses' ability to compete. Overall data on military construction contract awards to small businesses revealed that small businesses are generally continuing to win work and that their ability to compete is not being impaired. The Small Business Administration reviewed and approved DOD's planned contracting approaches after assessing whether the construction work could accommodate smaller contractors. Small businesses were able to compete for the remaining contracts.
Generally, we have broad authority to evaluate agency programs and investigate matters related to the receipt, disbursement, and use of public money. To carry out our audit responsibilities, we have a statutory right of access to agency records. Specifically, federal agencies are required to provide us information about their duties, powers, activities, organization, and financial transactions. In concert with our statutory audit and evaluation authority, this provision gives GAO a broad right of access to agency records, including records of the Intelligence Community, subject to a few limited exceptions. GAO's access statute authorizes enforcement of GAO's access rights through a series of steps specified in the statute, including the filing of a civil action in federal district court to compel production of records. However, GAO may not bring an action to enforce its statutory right of access to a record relating to activities that the President designates as foreign intelligence or counterintelligence activities. GAO's statutory authorities permit us to evaluate a wide range of activities in the Intelligence Community, including the management and administrative functions that intelligence agencies, such as the Central Intelligence Agency (CIA), have in common with all federal agencies. However, since 1988, the Department of Justice (DOJ) has maintained that Congress intended the intelligence committees to be the exclusive means of oversight, effectively precluding oversight by us. In our 2001 testimony about GAO's access to information on CIA programs and activities, we noted that in 1994 the CIA Director sought to further limit our audit work on intelligence programs, including those at DOD. In 2006, the ODNI agreed with DOJ's 1988 position, stating that the review of intelligence activities is beyond GAO's purview. While we strongly disagree with DOJ's and the ODNI's views, we foresee no major change in limits on our access without substantial support from Congress—the requestor of the vast majority of our work. Congressional impetus for change would have to include the support of the intelligence committees, which have generally not requested GAO reviews or evaluations of CIA's or other intelligence agencies' activities for many years. With such support, however, we could evaluate some of the basic management functions that we now evaluate throughout other parts of the federal government, such as human capital, acquisition, information technology, strategic planning, organizational alignment, and financial and knowledge management. As this Subcommittee is well aware, the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) established the Director of National Intelligence to serve as the head of the Intelligence Community; act as the principal advisor to the President, the National Security Council, and the Homeland Security Council for intelligence matters related to national security; and oversee and direct the implementation of the National Intelligence Program. Since its inception, the ODNI has undertaken a number of initiatives, including the development of both 100-day and 500-day plans for integration and collaboration. One of the core initiatives of these plans is to modernize the security clearance process across the Intelligence Community and at the national level, where other federal agencies, including DOD, OMB, and the Office of Personnel Management (OPM), are also engaged.
Among other things, IRTPA also directed the President to select a single department, agency, or element of the executive branch to be responsible for day-to-day oversight of the government’s security clearance process. In June 2005, the President issued an executive order that assigned OMB responsibility for ensuring the effective implementation of a policy directing that agency functions related to determinations of personnel eligibility for access to classified information be uniform, centralized, efficient, effective, timely, and reciprocal. In its new capacity, OMB assigned the responsibility for the day-to-day supervision and monitoring of security clearance investigations, as well as for tracking the results of individual agency-performed adjudications, to OPM. With respect to (1) personnel employed or working under a contract for an element of the Intelligence Community and (2) security clearance investigations and adjudications for Sensitive Compartmented Information, OMB assigned the responsibility for supervision and monitoring of security clearance investigations and tracking adjudications to the ODNI. In May 2006, OMB’s Deputy Director for Management stated during a congressional hearing that the agency’s oversight role in improving the governmentwide clearance process might eventually be turned over to the ODNI. For decades, we have assisted Congress in its oversight role and helped agencies with disparate missions to improve the economy, efficiency, and effectiveness of their operations, and we have highlighted the need for interagency collaboration in addressing 21st century challenges; we could assist the intelligence and other appropriate congressional committees in their oversight of the Intelligence Community as well. Our work also provides important insight on matters such as best practices to be shared and benchmarked and how government and its nongovernmental partners can become better aligned to achieve important outcomes for the nation. In addition, GAO provides Congress with foresight by highlighting the long-term implications of today’s decisions and identifying key trends and emerging challenges facing our nation before they reach crisis proportions. For the purpose of this hearing, I will discuss our extensive experience in addressing governmentwide human capital issues and other management issues that can assist the intelligence and other appropriate congressional committees in their oversight of Intelligence Community transformation and related management reforms. GAO has identified a number of human capital transformation and management issues over the years, such as acquisition, information technology, strategic planning, organizational alignment, financial and knowledge management, and personnel security clearances, as cross-cutting, governmentwide issues that affect most federal agencies, including those within the Intelligence Community. Human capital transformation and management issues have also been repeatedly identified as areas of weakness within the Intelligence Community by other organizations, including the Subcommittee on Oversight, House Permanent Select Committee on Intelligence; the Congressional Research Service; and independent commissions, such as the 9/11 Commission and Weapons of Mass Destruction Commission.
Moreover, the ODNI has acknowledged that Intelligence Community agencies face some of the governmentwide challenges that we have identified, including integration and collaboration within the Intelligence Community workforce and inefficiencies and reciprocity of personnel security clearances. Significant issues affecting the Intelligence Community include strategic human capital transformation and reform issues; DOD’s new pay-for-performance management system, the National Security Personnel System (NSPS); the extent to which agencies rely on, oversee, and manage their contractor workforce; and personnel security clearances. In fact, we have identified some of these programs and operations as high-risk areas due to a range of management challenges. GAO and others have reported that the Intelligence Community faces a wide range of human capital challenges, including those dealing with recruiting and retaining a high-quality diverse workforce, implementing a modernized performance management system, closing knowledge and skill gaps, improving integration and collaboration, and succession planning. Our extensive work on government transformation distinctly positions us to assist the intelligence and other appropriate congressional committees to oversee the Intelligence Community’s efforts to address these human capital challenges as well as to inform congressional decision making on management issues. Our work on governmentwide strategic human capital management is aimed at transforming federal agencies into results-oriented, high-performing organizations. Transformation is necessary because the federal government is facing new and more complex challenges than ever before, and agencies must re-examine what they do and how they do it in order to meet those challenges. Central to this effort are modern, effective, economical, and efficient human capital practices, policies, and procedures integrated with agencies’ mission and program goals. In 2001, we added strategic human capital management to the list of governmentwide high-risk areas because of the long-standing lack of a consistent strategic approach for marshaling, managing, and maintaining the human capital needed to maximize government performance and ensure its accountability. Although the federal government made progress in addressing these issues in the years that followed, we found that more can be done in four key areas: (1) top leadership in agencies must provide the attention needed to address human capital and related organizational transformation issues; (2) agencies’ human capital planning efforts need to be fully integrated with mission and program goals; (3) agencies need to enhance their efforts to acquire, develop, and retain talent; and (4) organizational cultures need to promote high performance and accountability. Based on our experience in addressing agencies’ performance management challenges, we are uniquely positioned to help Congress evaluate such issues within the Intelligence Community, including the development and implementation of its pay-for-performance personnel management system. As an example of our experience in this area, I would like to highlight our work on DOD’s new civilian personnel management system—the NSPS—which has provided Congress with insight on DOD’s proposal, design, and implementation of this system.
The National Defense Authorization Act for Fiscal Year 2004 provided DOD with authority to establish a new framework of rules, regulations, and processes to govern how the almost 700,000 defense employees are hired, compensated, promoted, and disciplined. Congress provided these authorities in response to DOD’s position that the inflexibility of the federal personnel systems was one of the most important constraints to the department’s ability to attract, retain, reward, and develop a civilian workforce to meet the national security mission of the 21st century. Prior to the enactment of the NSPS legislation in 2003, we raised a number of critical issues about the proposed system in a series of testimonies before three congressional committees. Since then, we have provided congressional committees with insight on DOD’s process to design its new personnel management system, the extent to which DOD’s process reflects key practices for successful transformation, the need for internal controls and transparency of funding, and the most significant challenges facing DOD in implementing NSPS. Most important, we have noted in testimonies and reports that DOD and other federal agencies must ensure that they have the necessary institutional infrastructure in place before implementing major human capital reform efforts, such as NSPS. This institutional infrastructure includes, at a minimum, a human capital planning process that integrates the agency’s human capital policies, strategies, and programs with its program goals, mission, and desired outcomes; the capabilities to effectively develop and implement a new human capital system; and the existence of a modern, effective, and credible performance management system that includes adequate safeguards to ensure a fair, effective, nondiscriminatory, and credible implementation of the new system. While GAO strongly supports human capital reform in the federal government, how it is done, when it is done, and the basis upon which it is done can make all the difference in whether such efforts are successful. An additional major issue of growing concern, both within and outside the Intelligence Community, deals with the type of work that is being performed by contractors, the need to determine the appropriate mix of government and contractor employees to meet mission needs, and the adequacy of oversight and accountability of contractors. These are areas where we also are well-positioned to provide additional support to the intelligence committees. While there are benefits to using contractors to perform services for the government—such as increased flexibility in fulfilling immediate needs—GAO and others have raised concerns about the federal government’s increasing reliance on contractor services. A key concern is the risk associated with contractors providing services that closely support inherently governmental functions. Inherently governmental functions require the exercise of discretion in applying government authority and/or in making decisions for the government; as such, they should be performed by government employees, not contractors. In 2007, I testified before the Senate Committee on Homeland Security and Governmental Affairs that the proper role of contractors in providing services to the government was the topic of some debate. 
I would like to reiterate that, in general, I believe there is a need to focus greater attention on which functions and activities should be contracted out and which should not, to review and reconsider the current independence and conflict-of-interest rules relating to contractors, and to identify the factors that prompt the government to use contractors in circumstances where the proper choice might be the use of civil servants or military personnel. Similarly, it is important that the federal government maintain an accountable and capable workforce, responsible for strategic planning and management of individual programs and contracts. In a September 2007 report, we identified a number of concerns regarding the risk associated with contractors providing services that closely support inherently governmental functions. For example, an increasing reliance on contractors to perform services for core government activities challenges the capacity of federal officials to supervise and evaluate the performance of these activities. The Federal Acquisition Regulation (FAR) provides agencies examples of inherently governmental functions that should not be performed by contractors. For example, the direction and control of intelligence and counter-intelligence operations are listed as inherently governmental functions. Yet in 2006, the Director of National Intelligence reported that the Intelligence Community finds itself in competition with its contractors for employees and is left with no choice but to use contractors for work that may be “borderline inherently governmental.” Unless the federal government, including Intelligence Community agencies, pays the needed attention to the types of functions and activities performed by contractors, agencies run the risk of losing accountability and control over mission-related decisions. For more than 3 decades, GAO’s reviews of personnel security clearances have identified delays and other impediments in DOD’s personnel security clearance program, which maintains about 2.5 million clearances, including clearances for intelligence agencies within DOD. These long-standing problems resulted in our adding the DOD personnel security clearance program to our high-risk list in January 2005. One important outgrowth of this designation has been the level of congressional oversight from this Subcommittee, as well as some progress. In the past few years, several positive changes have been made to DOD—as well as governmentwide—clearance processes because of increased congressional oversight, recommendations from our work, and new legislative and executive requirements. One of OMB’s efforts to improve the security clearance process involved taking a lead in preparing a November 2005 strategic plan to improve personnel security clearance processes governmentwide. In its February 2007 and 2008 annual IRTPA-mandated reports to Congress, OMB noted additional improvements that had been made to the security clearance process governmentwide. For example, OMB had issued standards for reciprocity (an agency’s acceptance of a clearance issued by another agency), OPM had increased its investigative workforce, and DOD and other agencies had dramatically increased their use of OPM’s Electronic Questionnaires for Investigations Processing system to reduce the time required to get a clearance by 2 to 3 weeks.
Further, the Director of National Intelligence, the Under Secretary of Defense for Intelligence, and OMB’s Deputy Director for Management established a team, the Joint Security Clearance Process Reform Team, to improve the security clearance process. The team is to develop a transformed, modernized, and reciprocal security clearance process that is supposed to be universally applicable to DOD, the Intelligence Community, and other federal agencies. The extent to which this new process will be implemented governmentwide, or whether leadership of the new system will be assigned to the ODNI, however, remains uncertain. Any attempt to reform the current security clearance process, regardless of which agency or organization undertakes the effort, should take several key factors into account. Specifically, current and future efforts to reform personnel security clearance processes should consider, among other things, determining whether clearances are required for positions, incorporating more quality control throughout the clearance processes to supplement current emphases on timeliness, establishing metrics for assessing all aspects of clearance processes, and providing Congress with the long-term funding requirements of security clearance reform. Although we have not worked with the entire Intelligence Community as part of our body of work on security clearances, we have worked with DOD intelligence agencies. For example, in the period from 1998 through 2001, we reviewed National Security Agency clearance investigative reports and Defense Intelligence Agency adjudicative reports. Similarly, our February 2004 report examined information about adjudicative backlogs DOD-wide and the situation in those two intelligence agencies. Importantly, since 1974, we have been examining personnel security clearances, mostly on behalf of Congress, including this Subcommittee. Through scores of reports and testimonies, we have acquired broad institutional knowledge that gives us a historical view of key factors that should be considered in clearance reform efforts. We are well positioned to assist Congress in its oversight of this very important area. In addition to our work on human capital transformation and personnel security clearance issues, our recent work has also addressed management issues—such as intelligence, surveillance, and reconnaissance (ISR) systems, space acquisitions, and the space acquisition workforce—that directly affect the Intelligence Community and illustrate our ability to further support the intelligence and other appropriate congressional committees in their oversight roles. GAO’s highly qualified and experienced staff—including its analysts, auditors, lawyers, and methodologists—and secure facilities position us to perform intensive reviews that could be useful in assessing the transformation and related management reforms under consideration within the Intelligence Community, especially in connection with human capital and acquisition and contracting-related issues. GAO personnel who might perform work relating to the Intelligence Community have qualifications, skills, expertise, clearances and accesses, and experience across the federal government, in the national security arena, and across disciplines. For example, GAO methodologists have expertise in designing and executing appropriate methodological approaches that help us develop recommendations to improve government operations.
Our attorneys advise GAO’s analysts, issue external legal decisions and legal opinions, and prepare testimony, legislation, and reports on subjects reflecting the range of government activity. This legal work, for example, involves subjects such as information technology, international affairs and trade, foreign military sales, health and disability law, and education and labor law. GAO also already has personnel with appropriate clearances and accesses. I would like to highlight a couple of examples of GAO’s work to demonstrate our expertise and capacity to perform intensive reviews in intelligence-related matters. In the past year, we have testified and issued reports addressing DOD’s ISR systems, including unmanned aircraft systems. The term “ISR” encompasses multiple activities related to the planning and operation of sensors and assets that collect, process, and disseminate data in support of current and future military operations. Intelligence data can take many forms, including optical, radar, or infrared images, or electronic signals. In April 2007, we testified that DOD has taken some important first steps to formulate a strategy for improving the integration of future ISR requirements, including developing an ISR Integration Roadmap and designating ISR as a test case for its joint capability portfolio management concept. We also testified that opportunities exist for different services to collaborate on the development of similar weapon systems as a means for creating a more efficient and affordable way of providing new capabilities to the warfighter. As part of another review of ISR programs, we found that nearly all of the systems in development we examined had experienced some cost or schedule growth. As part of our work, we selected 20 major airborne ISR programs and obtained information on current or projected operational capabilities, acquisition plans, cost estimates, schedules, and estimated budgets. We analyzed the data to determine whether pairs of similar systems shared common operating concepts, capabilities, physical configurations, or primary contractors. We reviewed acquisition plans for programs in development to determine whether they had established sound business cases or, if not, where the business case was weak. We reviewed cost and schedule estimates to determine whether they had increased and, where possible, identified reasons for the increases. Based on our research and findings, we recommended that DOD develop and implement an integrated enterprise-level investment strategy, as well as report to the congressional defense committees the results of ISR studies underway and identify specific plans and actions it intends to take to achieve greater jointness in ISR programs. DOD generally agreed with our recommendations. We have also performed in-depth reviews of individual space programs that are shared with the Intelligence Community. For example, in recent years we have examined the Space Radar program, which is expected to be one of the most complex and expensive satellite developments ever. We reported that while the program was adopting best practices in technology development, its schedule estimates may be overly optimistic and its overall affordability for DOD, which was partnering with the Intelligence Community, was questionable. Our concerns were cited by the Senate Select Committee on Intelligence in its discussion of reasons for reducing funding for Space Radar.
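To illustrate the kind of pairwise commonality analysis described above, the sketch below (in Python) shows one way such a comparison could be structured. The program names and attributes are hypothetical, not GAO's actual data or analysis code; the actual review covered 20 major airborne ISR programs.

    from itertools import combinations

    # Hypothetical ISR programs and attributes, for illustration only.
    programs = {
        "Program A": {"concept": "wide-area surveillance", "sensor": "radar", "contractor": "X Corp"},
        "Program B": {"concept": "wide-area surveillance", "sensor": "radar", "contractor": "Y Corp"},
        "Program C": {"concept": "signals collection", "sensor": "SIGINT", "contractor": "X Corp"},
    }

    # For each pair of programs, list the attributes on which they overlap.
    for (name1, attrs1), (name2, attrs2) in combinations(programs.items(), 2):
        shared = [key for key in attrs1 if attrs1[key] == attrs2[key]]
        if shared:
            print(f"{name1} / {name2}: common {', '.join(shared)}")

Pairs sharing an operating concept, sensor type, or prime contractor would then be candidates for closer review of potential duplication or collaboration opportunities.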
Our work on the space acquisition workforce is another example of in-depth programmatic reviews we have been able to perform on intelligence-related matters. In a September 2006 report, we identified a variety of management issues dealing with Air Force space personnel. This is a critical issue because the Air Force provides over 90 percent of the space personnel to DOD, including the National Reconnaissance Office (NRO). We found that the Air Force has done needs assessments on certain segments of its space workforce, but it has not done an integrated, zero-based needs assessment of its space acquisition workforce. In the absence of such an assessment and a career field specialty, the Air Force cannot ensure that it has enough space acquisition personnel, or personnel who are technically proficient, to meet national security space needs—including those in the Intelligence Community. As a part of this work, we collected and analyzed Air Force personnel data in specific specialty codes related to space acquisition and tracked their career assignments, training, and progression, including those assigned to the NRO. For example, we collected and analyzed data on space acquisition positions and personnel from multiple locations, and conducted discussion groups about topics including education and prior experience with junior and midgrade officers at the Space and Missile Systems Center in California. We made recommendations to DOD to take actions to better manage its limited pool of space acquisition personnel, and DOD generally agreed with our findings and recommendations. Our ability to continue monitoring security clearance-related problems in DOD as well as other parts of the federal government and to provide Congress with information for its oversight role could be adversely affected if the ODNI assumes management responsibility over this area. First, in 2006, OMB’s Deputy Director for Management suggested that the agency’s oversight role of the governmentwide security clearance process might be transferred to the ODNI. Alternatively, the ODNI could assume leadership, to some extent, of a new security clearance process that is intended for governmentwide implementation by a team established by the Director of National Intelligence, the Under Secretary of Defense for Intelligence, and OMB’s Deputy Director for Management. While we have the legal authority to audit the personnel security clearance area if its oversight is moved to the ODNI or if the Joint Security Clearance Process Reform Team’s proposed process is implemented governmentwide, we could face difficulties in gaining the cooperation we need to access the information. Although we have established and maintained a relatively positive working relationship with the ODNI, limitations on our ability to perform meaningful audit and evaluation work persist. Specifically, we routinely request and receive substantive threat briefings and copies of finished intelligence products prepared under the ODNI, and we meet with officials from the ODNI and obtain information about some of their activities. We also receive the ODNI’s agency comments and security reviews on most of our draft reports, as appropriate.
However, since some members of the Intelligence Community have taken the position that the congressional intelligence committees have exclusive oversight authority, we do not audit or evaluate any programs or activities of the ODNI, nor are we able to verify or corroborate factual briefings or information provided. This resistance to providing us access to information has taken on new prominence and is of greater concern in the post-9/11 context, especially since the Director of National Intelligence has been assigned responsibilities addressing issues that extend well beyond traditional intelligence activities. For example, the ODNI and the National Counterterrorism Center refused to provide us security-related cost data for the 2006 Olympic Winter Games in Turin, Italy, although we were provided this type of data in prior reviews of the Olympic Games. If we continue to experience limitations on the types and amounts of information we can obtain from the Intelligence Community, then GAO may not be able to provide Congress with an independent, fact-based evaluation of the new security clearance process during its development and, later, its implementation. Either of these changes—transferring oversight of the governmentwide clearance process to the ODNI or giving the ODNI leadership of the new process—could occur without legislation. If the ODNI were to take leadership or oversight responsibilities for governmentwide personnel security clearances, it might be prudent to incorporate some legislative provision to reinforce GAO’s access to the information needed to conduct audits and reviews in the personnel security clearance area. Finally, GAO supports S. 82, and with the support of Congress and enactment of S. 82 we would be well-positioned to provide Congress with an independent, fact-based evaluation of Intelligence Community management reform initiatives. Specifically, S. 82 would, if enacted, reaffirm GAO’s authority, under existing statutory provisions, to audit and evaluate financial transactions, programs, and activities of elements of the Intelligence Community, and to access records necessary for such audits and evaluations. GAO has clear audit and access authority with respect to elements of the Intelligence Community, subject to a few limited exceptions. However, since 1988, DOJ and some members of the Intelligence Community have questioned GAO’s authority in this area. In addition, for many years, the executive branch has not provided GAO with the level of cooperation needed to conduct meaningful reviews of elements of the Intelligence Community. As previously noted, this issue has taken on new prominence and is of greater concern in the post-9/11 context, especially since the Director of National Intelligence has been assigned responsibilities addressing issues that extend well beyond traditional intelligence activities, such as information sharing. The implications of executive branch resistance to GAO’s work in the intelligence area were highlighted when the ODNI refused to comment on GAO’s March 2006 report involving the government’s information-sharing efforts, maintaining that DOJ had “previously advised” that “the review of intelligence activities is beyond the GAO’s purview.” We strongly disagree with this view. GAO has broad statutory authorities to audit and evaluate agency financial transactions, programs, and activities, and these authorities apply to reviews of elements of the Intelligence Community.
Importantly, S. 82, in reaffirming GAO’s authorities, recognizes that GAO may conduct reviews, requested by relevant committees of jurisdiction, of matters relating to the management and administration of elements of the Intelligence Community in areas such as strategic planning, financial management, information technology, human capital, knowledge management, information sharing, organizational transformation and management reforms, and collaboration practices. In recognition of the heightened level of sensitivity of audits and evaluations relating to intelligence sources and methods or covert actions, this bill would restrict GAO audits and evaluations of intelligence sources and methods or covert actions to those requested by the intelligence committees or congressional majority or minority leaders. In addition, in the context of reviews relating to intelligence sources and methods or covert actions, the bill contains several information security-related provisions. The bill includes, for example, provisions (1) requiring GAO to perform our work and use agency documents in facilities provided by the audited agencies; (2) requiring GAO to establish, after consultation with the Select Committee on Intelligence of the Senate and the Permanent Select Committee on Intelligence of the House of Representatives, procedures to protect such classified and other sensitive information from unauthorized disclosure; and (3) limiting GAO’s reporting of results of such audits and evaluations strictly to the original requester, the Director of National Intelligence, and the head of the relevant element of the Intelligence Community. In our view, Congress should consider amending the bill language to include the intelligence committees in these reporting provisions when the congressional leadership is the original requester. The reaffirmation provisions in the bill should help to ensure that GAO’s audit and access authorities are not misconstrued in the future. One particularly helpful provision in this regard is the proposed new section 3523a(e) of title 31, specifying that no “provision of law shall be construed as restricting or limiting the authority of the Comptroller General to audit and evaluate, or obtain access to the records of, elements of the intelligence community absent specific statutory language restricting or limiting such audits, evaluations, or access to records.” This provision makes clear that, unless otherwise specified by law, GAO has the right to evaluate and access the records of elements of the Intelligence Community pursuant to its authorities in title 31 of the United States Code. Chairman Akaka, Senator Voinovich, and Members of the Subcommittee, this concludes my prepared testimony. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information regarding this testimony, please contact Davi M. D’Agostino, Director, Defense Capabilities and Management, at (202) 512-5431 or dagostinod@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Mark A. Pross, Assistant Director; Tommy Baril; Cristina T. Chaplain; Jack E. Edwards; Brenda S. Farrell; Robert N. Goldenkoff; John P. Hutton; Julia C. Matta; Erika A. Prochaska; John Van Schaik; Sarah E. Veale; and Cheryl A. Weissman. GAO Strategic Plan 2007-2012. GAO-07-1SP. Washington, D.C.: March 2007. High-Risk Series: An Update.
GAO-07-310. Washington, D.C.: January 2007. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: October 21, 2005. Defense Management: Key Elements Needed to Successfully Transform DOD Business Operations. GAO-05-629T. Washington, D.C.: April 28, 2005. DOD’s High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003. Human Capital: Federal Workforce Challenges in the 21st Century. GAO-07-556T. Washington, D.C.: March 6, 2007. Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission’s Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004. Aviation Security: Federal Air Marshal Service Is Addressing Challenges of Its Expanded Mission and Workforce but Additional Actions Needed. GAO-04-242. Washington, D.C.: November 19, 2003. High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003. Human Capital: DOD Needs Better Internal Controls and Visibility over Costs for Implementing Its National Security Personnel System. GAO-07-851. Washington, D.C.: July 16, 2007. Human Capital: Observations on Final Regulations for DOD’s National Security Personnel System. GAO-06-227T. Washington, D.C.: November 17, 2005. Human Capital: DOD’s National Security Personnel System Faces Implementation Challenges. GAO-05-730. Washington, D.C.: July 14, 2005. Military Operations: Implementation of Existing Guidance and Other Actions Needed to Improve DOD’s Oversight and Management of Contractors in Future Operations. GAO-08-436T. Washington, D.C.: January 24, 2008. Federal-Aid Highways: Increased Reliance on Contractors Can Pose Oversight Challenges for Federal and State Officials. GAO-08-198. Washington, D.C.: January 8, 2008. Department of Homeland Security: Improved Assessment and Oversight Needed to Manage Risk of Contracting for Selected Services. GAO-07-990. Washington, D.C.: September 17, 2007. Federal Acquisitions and Contracting: Systemic Challenges Need Attention. GAO-07-1098T. Washington, D.C.: July 17, 2007. Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD’s Acquisition of Services. GAO-07-832T. Washington, D.C.: May 10, 2007. Personnel Clearances: Key Factors to Consider in Efforts to Reform the Security Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005.
DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004.
For decades, GAO has assisted Congress in its oversight role and helped federal departments and agencies with disparate missions to improve the economy, efficiency, and effectiveness of their operations. GAO's work provides important insight on matters such as best practices to be shared and benchmarked and how government and nongovernmental partners can become better aligned to achieve important outcomes for the nation. In addition, GAO provides Congress with foresight by highlighting the long-term implications of today's decisions and identifying key trends and emerging challenges facing our nation before they reach crisis proportions. For this hearing, GAO was asked to (1) highlight governmentwide issues where it has made a major contribution to oversight and could assist the intelligence and other congressional committees in their oversight of the Intelligence Community, and (2) comment on the potential impact on GAO's access to perform audit work on personnel security clearances if the Office of the Director of National Intelligence (ODNI) were to assume management of this issue from the Office of Management and Budget (OMB). Given historical challenges to GAO's ability to audit the Intelligence Community's programs and activities, this testimony also discusses GAO's views on Senate bill S. 82, known as the Intelligence Community Audit Act of 2007. GAO has considerable experience in addressing governmentwide management challenges, including such areas as human capital, acquisition, information technology, strategic planning, organizational alignment, and financial and knowledge management, and has identified many opportunities to improve agencies' economy, efficiency, and effectiveness, and the need for interagency collaboration in addressing 21st century challenges. For example, over the years, GAO has addressed the human capital issues, such as acquiring, developing, and retaining talent; strategic workforce planning; building results-oriented cultures; pay for performance; contractors in the workforce; and personnel security clearances, which affect all federal agencies, including the Intelligence Community. Furthermore, GAO identified delays and other impediments in the Department of Defense's (DOD) personnel security clearance program, which also maintains clearances for intelligence agencies within DOD. GAO designated human capital transformation and personnel security clearances as high-risk areas. GAO also recently issued reports addressing Intelligence Community-related management issues, including intelligence, surveillance, and reconnaissance; space acquisitions; and the space acquisition workforce. If ODNI were to assume management responsibilities over security clearances across the federal government, GAO's ability to continue monitoring this area and provide Congress information for its oversight role could be adversely affected. In 2006, OMB's Deputy Director for Management suggested that OMB's oversight role of the governmentwide security clearance process might be transferred to the ODNI. GAO has established and maintained a relatively positive working relationship with the ODNI, but limitations on GAO's ability to perform meaningful audit and evaluation work persist. While GAO has the legal authority to audit the personnel security clearance area, if the ODNI were to assume management responsibilities over this issue, then it may be prudent to incorporate some legislative provision to reinforce GAO's access to information needed to conduct such audits and reviews. 
GAO supports S. 82 and believes that if it is enacted, the agency would be well-positioned to assist Congress in oversight of Intelligence Community management reforms. S. 82 would reaffirm GAO's existing statutory authority to audit and evaluate financial transactions, programs, and activities of elements of the Intelligence Community, and to access records necessary for such audits and evaluations. GAO has clear audit and access authority with respect to elements of the Intelligence Community, subject to a few limited exceptions. However, for many years, the executive branch has not provided GAO the level of cooperation needed to conduct meaningful reviews of elements of the Intelligence Community. This issue has taken on new prominence and is of greater concern in the post-9/11 context, especially since the ODNI's responsibilities extend well beyond traditional intelligence activities. The reaffirmation provisions in the bill should help to ensure that GAO's audit and access authorities are not misconstrued in the future.
The adult limited English proficient population in the United States is diverse regarding immigration status, country of origin, educational background, literacy in native language, age, and family status. Generally, adults with limited English proficiency have immigrated to the United States and include legal permanent residents, naturalized citizens, refugees, and undocumented individuals, but some of these adults are native born. The largest numbers of foreign-born persons living in the United States are from Mexico, China, and the Philippines. According to American Community Survey (ACS) data from the U.S. Census Bureau, in 2007, about two-thirds of the adults who reported limited English speaking ability were Spanish speaking. In terms of educational attainment, in 2007, 27 percent of foreign-born adults had at least a bachelor’s degree, similar to the native-born population. However, native-born persons are significantly more likely than foreign-born persons in the United States to have graduated from high school (88 percent versus 68 percent). Limited English proficiency, by itself, is not necessarily an indicator of demand for instructional services. For various reasons, at any given time, some adults with limited English proficiency are not actively seeking English language instruction. One source of information, the 1995 National Household Education Survey (NHES), estimated that nearly one-half (44 percent) of the adults who read English less than well were either participating in English language classes or interested in doing so, while the remainder were not. The survey did not inquire about why some adults were not interested, but potential reasons for not actively seeking instruction include the belief that participation is impractical in the midst of competing work or family responsibilities, lack of need for additional English to perform daily activities, or lack of success in past efforts. In addition, persons who are interested in English language classes may not participate because they face barriers. In the 1995 NHES, 30.5 percent of adults with limited English proficiency had not taken an English language class in the last 12 months, even though they expressed interest in doing so. These adult respondents reported they did not take classes because they were unaware of offerings, did not have enough time or money, or were limited by child care or transportation barriers. There is broad consensus among academics that very limited scientifically based research has been conducted to identify effective approaches to adult English language instruction. Much research in the field has focused on the challenges faced by adult English language learners and the factors that affect the learners’ ability to master English. Such factors may include educational attainment and literacy in the learners’ native language. Additional factors that may pose challenges include economic issues, such as the competing priorities of work and family and a lack of transportation and child care; cultural background; age; and motivational challenges. Because there appear to be differences between language learning in the early years and language learning that occurs in adulthood, the needs of adult learners and effective approaches may not be similar to those for students in grades K-12 education. While existing research is limited, some entities have played a role in providing or developing research-based information to providers and instructors.
In the past, the Institute of Education Sciences (IES) funded dissemination of research on adult literacy through the National Center for the Study of Adult Learning and Literacy (NCSALL). However, funding for NCSALL ended in 2007. Education supports dissemination of research through a contract with the Center for Adult English Language Acquisition, which has disseminated research-based resources for more effective adult English language instruction through its Web site. The National Institute for Literacy (NIFL), a federal agency, serves as a national resource on literacy across all age groups. NIFL was established in 1991 and reauthorized by the Workforce Investment Act of 1998 (WIA), and its role was expanded by the No Child Left Behind Act of 2001 to help children, youth, and adults learn to read by supporting and disseminating scientifically based reading research. The Adult Education State Grant Program funds English language instruction as well as adult basic education and adult secondary education, and was established under the Adult Education and Family Literacy Act (AEFLA), as title II of WIA. Eligible participants are those ages 16 and over who are not currently enrolled or required by state law to be enrolled in secondary school and who lack the basic skills needed to function effectively in their daily lives, a high school credential, or English language skills. In fiscal year 2007, the total federal allocation for the Adult Education State Grant Program, for all components of instruction, was about $564 million. Congress reserves a portion of the state grant funding—$68 million in 2007—for EL Civics, which supports integrated English literacy and civics education services to immigrants and other limited English populations. In addition, the American Recovery and Reinvestment Act of 2009 provided $53.6 billion in appropriations for the State Fiscal Stabilization Fund to be administered by Education. School districts may use a portion of the stabilization funds for any allowable purpose under AEFLA as well as the Elementary and Secondary Education Act, the Individuals with Disabilities Education Act, or the Carl D. Perkins Career and Technical Education Improvement Act of 2006 (Perkins IV). Under the Adult Education State Grant Program, states fund English language instruction through various types of providers that offer instruction for free or for a nominal fee. The Adult Education State Grant Program is administered by Education’s Division of Adult Education and Literacy within the Office of Vocational and Adult Education (OVAE). Program funds are distributed by formula to states using Census Bureau data on the number of adults (ages 16 and older) in each state who lack a high school diploma or its recognized equivalent and who are not enrolled or required by state law to be enrolled in school. Twenty-five percent of the expenditures for adult education in each state must come from state or local matching funds. States award a minimum of 82.5 percent of their federal grants to local providers of adult education, and may retain up to 12.5 percent for state leadership activities to be used for program improvement and 5.0 percent for administrative expenses; the sketch following this paragraph illustrates these percentage rules. Education is also tasked with carrying out national leadership activities to enhance the quality of adult education and literacy programs nationwide. Such activities may include providing technical assistance to adult education providers, carrying out demonstration programs, and supporting research.
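The allocation rules above reduce to simple percentage arithmetic. The following sketch (in Python, with a hypothetical federal grant amount; the function names are ours, not from any federal system) computes the minimum local-provider award, the state leadership and administrative caps, and the 25 percent non-federal matching requirement.

    def allocate_state_grant(federal_grant):
        """Split a state's federal adult education grant under the percentage
        rules described above (82.5% local minimum, 12.5% leadership cap,
        5.0% administrative cap)."""
        local_minimum = 0.825 * federal_grant   # at least 82.5% to local providers
        leadership_cap = 0.125 * federal_grant  # up to 12.5% for state leadership
        admin_cap = 0.05 * federal_grant        # up to 5.0% for administration
        return local_minimum, leadership_cap, admin_cap

    def required_match(total_state_expenditures):
        """At least 25% of each state's adult education expenditures must come
        from state or local (non-federal) matching funds."""
        return 0.25 * total_state_expenditures

    grant = 10_000_000  # hypothetical $10 million federal allocation
    local, leadership, admin = allocate_state_grant(grant)
    print(f"Local providers (minimum):  ${local:,.0f}")      # $8,250,000
    print(f"State leadership (maximum): ${leadership:,.0f}")  # $1,250,000
    print(f"Administration (maximum):   ${admin:,.0f}")      # $500,000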
The states report outcomes for adult English language learners participating in the Adult Education State Grant Program to Education’s National Reporting System (NRS) using a six-level system that describes mastery of different aspects of English language skills. The percentage of learners who achieved level gains in 2007 was 38.9 percent. In comparison, 31.8 percent of learners did not achieve a level gain during the enrollment year but remained in the program, and 29.4 percent separated from the program in 2007 before achieving an educational-level gain. Providers of adult English language learning have varied characteristics and instructional formats and may be supported by many different funding sources. Instruction varies in format, intensity, setting, and focus—such as civics, family, or work-focused topics. Classes may have open or closed enrollment, have varied frequency and hours, and take place in large classroom settings, in small groups, or one-on-one with volunteers. Providers receiving federal funds through the Adult Education State Grant Program include local education agencies (school districts), community colleges, community-based organizations (CBOs), and correctional institutions. According to a 2002 survey funded by Education, among providers receiving Adult Education State Grant Program funds, English language learners were a larger percentage of all adult education learners in classes sponsored by CBOs than in those of other provider types—over one-half of adult education learners in CBOs received English language instruction. According to the survey, providers reported receiving funding from a wide range of sources. One-third of providers reported receiving the majority of their funding from the federal government, and almost one-half received the majority of funding from state government. Providers reported smaller proportions of funding from local government, private sources, and participant fees. CBOs reported receiving more financial support from a combination of foundation grants and corporate, civic, and individual giving than did other providers. Aside from publicly funded providers, English language learning is also privately supported by small faith-based organizations, such as churches, and by privately funded CBOs. English language learners may also access English language instruction from for-profit providers of self-paced materials and software and from some private industry associations or businesses that provide English language learning opportunities to their workers without federal support. According to data from the 2003 National Assessment of Adult Literacy (NAAL), among adults who learned English at age 16 or older (regardless of source of instruction), a higher proportion of those who reported past or current enrollment in English language programs scored at least basic levels of literacy compared with those who had never been enrolled. Among adult English language learners who had never been enrolled in English language programs, 61 percent scored below basic prose literacy and 36 percent scored basic prose literacy. Census Bureau data indicate that the number of adults in the United States who speak limited English has grown since 2000. According to the 2007 ACS, about 21.7 million adults who reported speaking a language other than English at home also reported speaking limited English, an increase from 17.8 million in 2000 (see fig. 1). The size of this population increased by 21.8 percent over this time period and, as a percentage of the total U.S.
adult population, it increased from about 8.5 percent in 2000 to 9.5 percent in 2007. (ACS 2007 data were the most recent data available at the time of our review.) The distribution of reported English speaking ability among those reporting speaking another language at home changed little from 2000 to 2007. For example, in 2007, 4.3 million adults reported speaking no English at all. This represented 20 percent of all limited English proficient adults, which was relatively unchanged from the 18 percent this group comprised in 2000. In addition, the proportions of limited English proficient adults who reported speaking English “not well” (38 percent) and speaking English “well” but not “very well” (42 percent) were relatively unchanged from 2000 to 2007. The geographic distribution of the limited English proficient population mirrors the general population distribution in some respects; it is concentrated in the most populated states with some sizable representation in most other states (see fig. 2). However, some states have concentrations of limited English proficient persons higher than the state’s proportion of the U.S. population. For example, California, Florida, Illinois, New Jersey, New York, and Texas accounted for 68.1 percent of the national population of adults with limited English proficiency in 2007 and 39.4 percent of the national adult population. This handful of populous states and other southwestern states generally had the greatest concentrations of limited English proficient adults as a percentage of total adults (see fig. 3). However, among these states, there is variation in the concentration. For example, in 2007, about one in five adults in California spoke limited English, whereas one in nine adults spoke limited English in Illinois. Less populous states that have traditionally had smaller adult limited English proficient populations have had the greatest growth rates since 2000. From 2000 to 2007, some southern states with relatively small adult limited English proficient populations had the greatest growth rates, as shown in fig. 4. For example, Tennessee’s adult limited English proficient population was below the national median in 2000. However, it experienced about 46 percent growth from 2000 to 2007, moving it above the national median in 2007. In addition to Tennessee, other southern states like Alabama, Arkansas, and Georgia had large growth rates in their adult limited English proficient populations, as did Alaska, Arizona, and Nevada. However, states with the largest limited English populations experienced the greatest growth in sheer numbers.
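The growth figures above are straightforward to recompute. The short sketch below uses the rounded numbers cited in this report; note that the rounded millions yield roughly 21.9 percent rather than the reported 21.8 percent, because the published figure was computed from unrounded ACS counts.

    # Recompute growth of the adult limited English proficient population from
    # the rounded figures cited above (millions of adults; ACS 2000 and 2007).
    lep_2000, lep_2007 = 17.8, 21.7

    growth = (lep_2007 - lep_2000) / lep_2000
    print(f"Growth, 2000-2007: {growth:.1%}")  # ~21.9%; the report's 21.8%
    # reflects the unrounded underlying counts

    share_2000, share_2007 = 8.5, 9.5  # percent of total U.S. adult population
    print(f"Share change: {share_2007 - share_2000:+.1f} percentage points")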
Reported national enrollment was between 1.0 million and 1.2 million English language learners each reporting year from 2000 to 2007. Enrollment was 1.12 million in 2000 and 1.06 million in 2007, with small fluctuations over the years in between. Throughout this time period, national enrollment in the Adult Education State Grant Program was concentrated in lower literacy-level classes. Specifically, the greatest percentage of learners—70 percent to 75 percent—were in the lowest three levels of classes from 2000 to 2005 (Beginning Literacy to Low Intermediate), while 25 percent to 30 percent of learners were in the highest three levels (High Intermediate to High Advanced). While national enrollment in English language classes funded by the Adult Education State Grant Program remained stable, enrollment trends from 2000 to 2007 varied widely across states (see fig. 5). The median state reported an 11 percent decrease, with most states reporting fluctuations no greater than 20 percent. However, changes ranged from a roughly 75 percent reduction to a 100 percent increase, with 10 states having fluctuations of more than 40 percent; the sketch following this paragraph shows how such changes can be computed and flagged. These larger variations in enrollment were not reflective of trends in the adult limited English proficient populations or the general adult populations in these states. For example, among the 6 states experiencing the largest growth in the numbers of persons with limited English proficiency, 5 reported decreasing enrollments. Similarly, among the 6 states with the fastest growing limited English populations, 4 reported decreasing enrollments. State officials said enrollment in their states’ Adult Education State Grant Programs changed over time because of changes in state funding priorities, data management system changes, and other factors. Most of the state officials we interviewed said funding constraints limited the extent to which programs could expand, and some officials identified obtaining more funding to serve students as a top priority. Additionally, a few state officials with stable or declining enrollment said these trends were the result of improved data management systems or efforts to better validate data, which caused reported enrollments to appear stable or declining. States also identified the economy and natural disasters as other factors that resulted in stable or declining enrollment. In some of the states, officials whom we interviewed said immigration may have increased enrollment, while immigrants’ fears of accessing government services may have reduced enrollment. Both state officials and local providers with whom we spoke told us that stable enrollment in English language classes did not indicate stable demand. According to the NRS, most of the 12 states we contacted reported declining enrollment in their Adult Education State Grant Programs. However, 8 of 12 state officials said that demand was increasing, and 3 said that demand remained the same. One state official said that enrollment would grow exponentially if it kept pace with demand. Although many state officials reported increasing demand, waiting lists for entry into programs were not consistently used to track demand. Not all states required local providers to maintain waiting lists, and, in states without requirements, some local providers did not keep such lists. Some state officials cited their use of Census data as an indicator of demand to distribute resources.
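To make the state-level comparison above concrete, the sketch below computes each state's percent change in enrollment between 2000 and 2007 and flags fluctuations greater than 40 percent. The state names and counts are hypothetical, not the actual NRS data.

    from statistics import median

    # Hypothetical 2000 and 2007 enrollment counts by state, for illustration
    # only; the actual analysis used NRS enrollment data.
    enrollment = {
        "State A": (52_000, 46_000),
        "State B": (8_000, 2_100),
        "State C": (12_000, 24_000),
        "State D": (30_000, 27_500),
    }

    changes = {state: (end - start) / start
               for state, (start, end) in enrollment.items()}

    print(f"Median change: {median(changes.values()):.0%}")
    for state, change in sorted(changes.items()):
        flag = "  <-- fluctuation above 40%" if abs(change) > 0.40 else ""
        print(f"{state}: {change:+.0%}{flag}")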
Federal support for adult English language learning is dispersed across a diverse array of programs within the Departments of Education, Health and Human Services (HHS), and Labor, but most of the programs that allow it do so in support of other program goals, such as self-sufficiency, workforce attachment, or family literacy, and do not collect data that would indicate participation in or spending on adult English language learning. Of all the programs we reviewed, only the Adult Education State Grant Program is explicitly focused on adult English language learning. Administered by Education, this program provides English language learning as one of three program areas. In 2007, about 46 percent of the state grant program’s total enrollment was in English language instruction. However, even this program does not collect spending data specific to its English language learning component. The program recognizes learners’ multiple goals in learning English, such as employment, citizenship, and increased involvement in their children’s education, and, as we have previously mentioned, the federal program collects data from states on educational gains in English language classes. Other programs within Education, HHS, and Labor allow for English language learning, as shown in appendix II. However, according to federal officials responsible for administering these programs, none of them systematically collects data on spending or enrollment, and only Even Start and the Adult Education State Grant Program collect data on outcomes specific to adult English language learning. Anecdotally, across the federal programs, some of the federal program officials with whom we spoke noted that some of their local grantees provide English language instruction to adult participants directly, while other grantees provide support indirectly by paying English language providers to instruct participants or referring participants to these providers. While the extent to which these numerous programs support English language learning for adults is unknown, during our site visits, we found various federal funding streams being used by some of the community colleges, CBOs, and public schools that we visited. Although most of the providers we visited drew on the Adult Education State Grant Program to support their English language learning activities, we also found other funding streams being used. For example, among all providers that used more than one funding stream, several received the Adult Education State Grant as well as refugee program funding streams. However, some providers used funds from as many as four or five federal programs. These federal programs—under which adult English language learning is allowable, but the extent of its use is unknown—vary greatly in purpose and focus. In HHS, the Office of Refugee Resettlement provides several funding streams that allow for English language learning. These funding streams include Refugee Social Services formula grants, Targeted Assistance Grants, and matching grants. While English language instruction is provided concurrently with other services, refugee agencies generally have just 8 months to place their clients in employment.
Also within HHS, under the Head Start Program, English language learning for adults is allowable as a part of family literacy, and, under the Temporary Assistance for Needy Families (TANF) block grant, states may provide English language instruction as an activity that supports clients' self-sufficiency, generally in the categories of job skills or education directly related to employment, or vocational education. Within Labor, English language instruction is allowable under key programs, such as Trade Adjustment Assistance, in which it may be provided with other services to retrain workers who have lost their jobs due to trade with foreign countries, and programs for Adults and Dislocated Workers under WIA's title I. Other programs under this title, including the Job Corps and the National Farmworker Jobs Program, also allow English language instruction, consistent with these programs' training and employment missions. In addition, certain of Labor's existing Community-Based and High Growth grants have incorporated English language learning to some degree (see app. III). See appendix IV for the methods used to provide English language instruction among the local grantees we visited that receive funds from these various Labor funding streams. Additionally, within Education, English language instruction is also allowed as remedial or developmental education within, for example, the Pell Grant program and certain Higher Education Act of 1965 programs. Education and HHS manage certain programs, such as Even Start and Head Start, that, while they serve children, may also reach adults through their family literacy activities, and these activities may include English language instruction. In addition, certain of Education's other programs, such as those targeting after-school programs and migrant education, may also reach adults and include English language learning opportunities. In recent years, Education and Labor have developed some special initiatives that involve English language learning as a distinct focus (see table 1). Specifically, Education supported the development of a new distance learning Web site for English language learners, known as USA Learns, which became available in November 2008. Through its Career Connections demonstration, Education addressed the needs of high-skilled English learners, who participated in the funded projects along with other adult education students, by providing access to occupational training and English language learning opportunities. Through an initiative known as Transitioning English Language Learners (TELL), Education also plans to study English language learners who are transitioning to adult basic education and adult secondary education programs in order to prepare for postsecondary education and the workforce. For its part, Labor has undertaken a multifaceted initiative (the Limited English Proficiency and Hispanic Worker Initiative) that relies, in part, on the nation's workforce centers, also known as One-Stop Career Centers (one-stops). Labor developed tools to help one-stops serve limited English proficient clients: it recalculated Census Bureau data on the limited English proficient population by local workforce area and issued guidance for identifying this population's needs. As part of this initiative, Labor issued several grants for English language learning in a workforce setting. In San Diego, for example, workforce-oriented English language instruction was provided to the new and existing employees of a large shipbuilder.
Finally, Labor’s New Americans grants supported English language instruction at one-stops and promoted referrals to Adult Education State Grant Programs. Beyond these initiatives, federal agencies have also provided technical assistance related to English language learning in administering their standing grant programs, and, in Labor’s case, regarding one of its special initiatives. For example, within the Adult Education State Grant Program, Education has monitored states’ procedures for assessing English language learners’ proficiency and for reporting data on their gains, and has also provided training on using data for program improvement. Education has also disseminated information on 3 states’ approaches to performance-based funding. In addition to technical assistance aimed at the Adult Education State Grant Program overall, Education has, through a contractor, supported technical assistance that focused on areas such as the training needs of teachers who work with adult English language learners. Also, the Office of Refugee Resettlement has supported technical assistance to agencies serving refugees that addressed English language learning. Likewise, the National Office of Head Start has supported technical assistance to Head Start programs to inform them about English language learning opportunities through the Adult Education State Grant Program, according to an HHS official. For its part, Labor has sponsored a webinar on its Limited English Proficiency and Hispanic Worker Initiative and also has created a Web site and provided webinars for Job Corps Centers that serve English language learners. There has been some coordination among federal agencies on the subject of English language learning. Our previous work has highlighted the benefits of actions that federal agencies have taken to enhance and sustain their collaborative efforts, including the ability to leverage resources, improve quality, expand services, and reach more clients. Yet, while Education, HHS, and Labor all serve populations in need of language assistance, there is no ongoing mechanism to share information or expand and capitalize on the agencies’ individual efforts. The agencies have at times used interagency agreements to support English language learning for adults. For example, Education and the Department of Homeland Security’s Office of Citizenship have an interagency agreement to support a Web-based tool for lessons in civics- and citizenship-oriented English language learning, according to Homeland Security and Education officials. To promote mutual understanding of their programs, HHS’s Office of Refugee Resettlement and Labor’s Office of Workforce Investment temporarily placed employees in one another’s agencies and participated in each other’s conferences in 2008, with one result being a list of promising practices. Additionally, Labor officials said that they have begun to meet with Education officials to identify effective strategies for adult learning, and that adult English language learning would be included in this effort. Beyond these collaborations, there have been some interagency task forces established; however, generally these task forces have been temporary and have not focused on adult English language learning. For example, all three agencies, as well as other agencies, participated in an interagency Task Force on New Americans, created in response to a June 2006 executive order, and this task force issued a report that touched on English language learning and other issues. 
The task force, while still technically active, has not met since the issuance of the report in December 2008, according to a Homeland Security official. Also, in 2006, the agencies participated in the Interagency Coordination Group for Adult Literacy to focus on multiple objectives, including improving coordination, leveraging resources and reducing duplication among federal agencies and programs, sharing best practices, and helping states maximize the federal investment in adult education. The group supported the creation of a database of foundations supporting literacy efforts and developed Web-based adult literacy resources, and, according to an Education official, served as the starting point for an interagency group on strengthening adult education, created by an executive order in 2007, that fulfilled its mission with the issuance of a report in 2008. These short-term collaborative efforts point to the interest in and need for collaboration, and others have also identified the need for collaboration specific to adult English language learning. In 2006, NIFL convened a working group on English language learning that, in 2007, recommended to NIFL that agencies coordinate on adult English language learning "to facilitate collaborative work and information sharing" to better serve this population. However, as of the time of our review, according to a NIFL official, the recommendation had yet to be considered by NIFL. Additionally, we did not identify any federal agency that has been specifically tasked to coordinate information sharing on adult English language learning. Further coordination between and among the agencies remains uncertain, despite a common interest in English language learners' employment and despite shared challenges in serving learners with certain characteristics. For example, Education and HHS's Office of Refugee Resettlement have discussed but not developed an interagency agreement to provide local refugee programs with information on English learning resources, and no staff exchange with Education, along the lines of the one between HHS and Labor, has been discussed. However, in technical comments on a draft of this report, Education indicated that it is open to collaboration with HHS, as well as other federal agencies, as appropriate. Coordination between Labor and Education on their respective initiatives has been variable. Although Education officials reported helping Labor with its Limited English Proficiency and Hispanic Worker Initiative, they had not involved Labor in Education's own employment- and training-related initiative, the Career Connections project. For its part, while Labor has provided technical assistance to one-stops and other stakeholders on working with the Adult Education State Grant Program, it has provided no guidance or technical assistance specifically regarding English language instruction, according to Labor officials. Furthermore, although HHS's Office of Refugee Resettlement and Labor's Office of Workforce Investment took temporary steps to coordinate, as we have previously discussed, an Office of Refugee Resettlement official said that it was unclear whether such coordination would be reinitiated, despite the benefits it provided in identifying additional resources available to refugees.
The limited nature of federal coordination is apparent in the agencies' efforts to issue guidance and information that could help local providers identify both promising practices for providing English language instruction and additional community resources for such instruction. While guidance can support efficient and effective coordination across programs, HHS, for example, has issued no recent guidance to grantees of the refugee resettlement program on obtaining language instruction resources through local collaboration, despite an official's acknowledgment that the refugee program's limited funding might require agencies serving refugees to tap additional resources. For the TANF program, HHS officials said guidance has focused on how to count English language instruction as an activity, not on how to identify and leverage local resources. Nor has the HHS Office of Community Services, which manages the Community Services Block Grant program, issued any guidance that would help local programs identify English instruction resources in their communities, according to a department official. Also, Labor's update of Trade Adjustment Assistance guidance focused on the conditions under which English language instruction would be allowable, rather than on resources for how best to provide it. Regarding Labor's 2003 initiative instructing one-stop managers to develop plans for helping clients with limited English proficiency (LEP plans), the guidance offered no specific information on promising practices or on local resources available through the Adult Education State Grant Program. Additionally, an official of the National Farmworker Jobs Program said that this program has issued no guidance on this topic. An exception to the absence of information on resources and opportunities for local collaboration is Education's Web site, "Community Partnerships for Adult Learning." This Web site offers information on how to collaborate locally, based on 12 community profiles, and makes it possible to search for examples involving English language instruction. At the same time, however, we found that many local providers were unaware of Education's USA Learns Web site, which provides English language instruction, despite federal efforts to publicize it. Although Labor did apprise its regional offices of this resource, 22 of the 28 farmworker program grantees we contacted were not aware of it, none of the Job Corps operators we contacted had heard of USA Learns, and an association of refugee agencies also was not acquainted with the Web site. Representatives of programs serving certain populations of English language learners, including refugees, farmworkers, and Job Corps students, said that greater coordination could benefit their clients by, for example, offering information about innovative practices, access to teacher training opportunities, and the efficient use of scarce resources. For example, certain agencies that serve refugees at the local level expressed interest in information about additional English language learning resources that could benefit refugees after their job placement. Additionally, an official of an association of refugee-serving agencies said that, while some refugee agencies might be aware of the Adult Education State Grant Program's English language learning component, others might not be or might have questions about refugees' eligibility for it.
This official also noted that refugee agencies would be likely to welcome information about additional English language learning opportunities for their clients, given scarce resources in the refugee system. A farmworkers' program grantee said that the benefits of greater coordination could include access to updated and innovative materials, curricula, and teaching methods, as well as access to additional teacher training opportunities, while others pointed to access to additional resources. Among Job Corps Center managers with whom we spoke, the potential benefits cited included additional information for centers inexperienced in serving English language learners, additional information about promising instructional practices, and additional information about curricula that combine English language learning and occupational skills training. It is also important to note that all three agencies serve subpopulations of English language learners who share some characteristics. For example, providers of services under the Adult Education State Grant Program and refugee funding streams, Job Corps Center managers, and officials of the farmworkers' program all indicated the presence of beginning English learners among their clients, such as those who lack literacy in their primary language. Those who mentioned this subpopulation frequently described serving these learners effectively and efficiently as challenging. In addition, some refugee-serving agencies told us that some refugees are highly educated—precisely the subpopulation targeted by several local programs through Education's Career Connections initiative. States have supported adult English language learning in a variety of ways, particularly through the one federal program with an explicit focus on English language learning—the Adult Education State Grant Program—but also beyond this program. They have provided matching funds at various levels for this program and devised additional ways to enhance their support. Moreover, some states are addressing program quality through teacher qualifications and training, content standards, and other means and are developing mechanisms for local planning. Additionally, some states are coordinating with other programs. States and local providers are also taking steps to integrate English language instruction with occupational training. Furthermore, states are supplementing these activities with their own efforts to support English language instruction, such as through libraries and special schools. Some state agencies and local providers are exploring innovative practices and are carrying them out in a wide variety of ways and venues, both within and beyond the Adult Education State Grant Program. Within the Adult Education State Grant Program, the 12 states that we contacted—states with either the largest or the most rapidly growing limited English proficient populations—varied substantially in the amount of state funding they contributed. While most states did not distinguish the funding they provided for English language learning from the funding provided for other components of adult education, their financial contributions for adult education varied considerably. Specifically, state and local spending used to match federal fiscal year 2005 funds ranged from the federally required 25 percent minimum in Tennessee and Texas to 88 percent of total spending in California and 90 percent in Florida.
At least 2 states—California and New York—described current or planned reductions to their state contributions to the Adult Education State Grant Program. Meanwhile, officials of Arizona's program said that their program has begun to track funding for English language learning separately, to provide a specific focus on such learning as a distinct activity. The states we contacted reported using a variety of considerations in allocating funding to local areas under the Adult Education State Grant Program, and some reported that they are beginning to use provider performance as a consideration. While Minnesota used factors such as instructional hours in allocating funds to local providers, other states—including Arizona, Florida, Illinois, and New Jersey—directed funding to local programs, at least in part, on the basis of the size of the local limited English proficient population, using Census Bureau data. Illinois further emphasizes need, according to a state official, by giving extra weight to the population least proficient in English. In terms of performance-based funding, California adopted this funding approach after the passage of WIA in 1998, while Illinois has considered local provider performance in distributing funding to local programs since 2005, according to officials in each state. According to a Florida official, that state is redesigning its funding formula to emphasize performance, beginning July 1, 2009. Tennessee is also revising its formula to give greater weight to performance, with an anticipated implementation in 2010, according to an official from that state. Most of the 12 states we contacted through our semistructured telephone interviews also reported taking steps to improve the quality of English language teaching, such as by supporting professional development for English language teachers. Ten states had set minimum requirements for teaching English—typically, a state teacher's license or a Bachelor of Arts degree—while 2 states had no specific teacher qualifications. Generally, however, states that had established qualifications set the same requirements for English language teachers as for other adult education teachers. Two of these states had, or were developing, qualifications specific to teachers of English language learners: California required a special credential for such teachers, and Arizona, according to state officials, was developing standards that would delineate specifically what teachers of English language learners need to know. Additionally, 1 state—Arkansas—requires certain providers to adhere to standards specifically for volunteers who work with English language learners through the Adult Education State Grant Program. To augment these minimum qualifications, most states addressed teachers' training needs through professional development activities. Six states had set an annual minimum number of professional development hours, although this minimum varied widely, from 5 to 60 hours. Additionally, all but 1 of the 12 states reported using most of their Adult Education State Grant state leadership funds to finance their teachers' professional development. For example, Arkansas, Illinois, and Nevada have used such funding for special centers, which can provide professional development opportunities for teachers of English language learners. Furthermore, 8 of the 12 states reported having adopted content standards to guide English language instruction.
Among the reasons that these states cited for developing content standards was consistency of instruction statewide. States and local providers with whom we met also cited ways in which they were using NRS data on English language learners to improve service delivery. For example, officials of Washington State's Adult Education State Grant Program agency said that, in reviewing program data, they discovered that learners' outcomes were lower in classes held at certain locations; by addressing the needs of teachers at those locations, they made changes that the officials said eventually led to better results. Furthermore, this agency has developed a workshop to train local providers on how to use data for program improvement. At the local level, one provider in Washington State reported using the data to compare day and evening classes and make adjustments in its scheduling without adversely affecting outcomes. Moreover, officials of California's Adult Education State Grant agency described using the data to determine that many English language learners were not successfully transitioning to adult basic education; the agency worked closely with a technical assistance provider and held regional meetings to address this issue. Also within the Adult Education State Grant Program, states reported providing technical assistance to local providers, sponsoring special projects on a variety of topics, or taking other steps to address program quality. For example, Illinois provided training on its new content standards to local providers to support their curriculum development. Florida and New Jersey reported efforts to focus on beginning-level learners by providing special training and issuing targeted grants, respectively. In addition, California provided technical assistance to local programs to find ways to improve student retention. The state has also piloted an electronic English language assessment in certain locations to increase efficiency and reduce teachers' burden in conducting written assessments. Additionally, Arizona has adopted stricter enrollment policies, a step described by state officials as part of their effort to address program quality for English language learners. Finally, Florida and California also supported provider efforts to offer distance learning opportunities for English language learners, and 5 other states are exploring distance learning applications for English language learning through a project sponsored by the University of Michigan. Mechanisms to guide and coordinate local service delivery have been developed in 2 of the 4 states that we visited—Illinois and Minnesota. According to a state official, Illinois has established about 30 Area Planning Councils across the state, comprising a diverse array of providers, that are required to meet twice a year and submit annual areawide service plans. These councils can encourage individual providers to focus on specific skill levels to minimize duplication of services. While Adult Education State Grant providers must belong to these councils, the councils may also include representatives from state agencies and the private sector and, in some cases, agencies that serve populations outside the Adult Education State Grant Program. Meanwhile, Minnesota relies on 53 local consortia of providers for local service coordination and requires them to submit comprehensive plans every 5 years. For example, the St.
Paul Community Literacy Consortium includes both public schools and CBOs; according to state officials, the public schools generally serve more advanced learners, while the CBOs serve more beginning-level learners. In addition to facilitating the targeting of resources in this way, the consortium structure has, according to a consortium official, allowed individual providers to work together to respond to emerging trends and explore common interests, such as the uses of technology for English language learners. Some state agencies that manage the Adult Education State Grant Program and the local providers they support have taken steps to coordinate with other federal- and state-funded programs that serve populations likely to need this help—particularly refugees, those seeking assistance through one-stops, and those receiving public financial support. For example, Washington State has established an "LEP Pathway" that refers refugees and TANF clients to providers of English language instruction. According to state officials, many, although not all, of these providers also receive funding from the state's Adult Education State Grant Program agency. State officials said the LEP Pathway has helped ensure timely and culturally appropriate services for refugees, particularly for the majority who are beginning-level English speakers, and given the state a flexible way to respond to changes in refugee flows from different countries and primary languages. In Minnesota, the state agency that administers both TANF and services for refugees uses a state-funded family stabilization program to serve most limited English proficient clients; the program serves these clients for 1 year to address a variety of barriers to immediate employment, including limited English. Additionally, Minnesota's refugee program has transferred funding to its Adult Education State Grant Program agency to secure seats in English language classes for refugees within the relatively short period before they are placed in employment. In Florida, the refugee program contracts with local Adult Education State Grant Program providers for English language instruction, according to a state official. By contrast, Nevada's Adult Education State Grant Program agency has provided funding to that state's refugee agency as one of several English language providers. Among the 12 states we contacted through semistructured telephone interviews, 6 reported formal, state-level coordination between the Adult Education State Grant Program and the TANF program. For example, Arkansas officials reported that this coordination helped target learners at the beginning levels. Texas officials reported that such coordination helped prevent duplication of effort and facilitated the cotraining of staff from both the Adult Education State Grant Program and the TANF program. None of the 12 states, on the other hand, reported formal coordination at the state level between agencies administering the Adult Education State Grant Program and those administering services for refugees. Furthermore, of the 12 states we contacted through semistructured telephone interviews, 8 reported formal, state-level coordination between the Adult Education State Grant Program and the state agency that administers the one-stop system. For example, New York's Adult Education State Grant Program officials said that English language instruction is available at all one-stops in New York City.
Other states that reported English language instruction on-site at one-stops were Alaska and Tennessee. Beyond these 12 states, Minnesota's Adult Education State Grant Program specifically requires all local providers to establish formal agreements with their local one-stops that include help for English language learners, as well as for other adult education clients, such as those needing basic skills. While Georgia officials did not report formal, state-level coordination, they did report that such coordination, including the co-location of services, occurs at the local level. States reported that their state-level coordination with the one-stop system involved functions such as assessment (Arkansas and Texas), improved referral (Arizona), and a special pilot in 12 sites to electronically assess both literacy and job skills (California). State officials also cited some benefits of this formal, state-level coordination between the two programs. In Tennessee, officials said this coordination provided better services for clients and reduced the burden of filling out multiple forms in multiple locations, while Texas officials said that it has helped provide access to work and training programs. Meanwhile, some states reported coordination with other federal or federally supported programs, such as Even Start, postsecondary education, and the federal program for farmworkers. For example, Illinois and Texas reported state-level coordination between English language learning under their adult education programs and the Even Start program, a family literacy program administered by Education. Illinois officials reported that the state's Even Start program has a representative on an adult education advisory board, in an effort to ensure that the programs' policies are consistent. Additionally, Adult Education State Grant Program agencies in Arizona, Illinois, Minnesota, and New York reported initiatives that focused on transitioning English language learners to postsecondary education. On another front, Florida's farmworkers' program is housed within the same division of the state education department as the Adult Education State Grant Program. According to a state official, coordination between the two programs has reduced testing costs for the farmworkers' program, allowed the farmworkers' program to focus on its primary mission of employment, provided access to information about promising practices in English language instruction, facilitated joint efforts to serve beginning-level learners, and created opportunities for program clients to continue their training. However, such coordination efforts were not universal, and some providers, particularly refugee agencies in California and Washington State, said they did not know how to access or acquire additional resources through the Adult Education State Grant Program, despite, in some cases, expressing a need for such resources. Furthermore, officials of one of these refugee-serving agencies said that it would be prohibitively expensive for the agency to pay Adult Education State Grant Program providers to secure seats for refugees in their classes. In a variety of settings, a number of states are combining occupational training with English language instruction to support local workforce development and to improve the ability of new English speakers to gain employment. In 2004, Washington State began to merge English instruction with occupational instruction in its community college classrooms as a pilot program.
The project was designed to shorten the time it took new learners to progress from mastering English to mastering an occupational skill. According to state officials, a sequential approach had required as long as 7 or 8 years in some cases. Washington State has since adopted the pilot program's dual approach for its occupational curricula at community colleges and expanded it statewide. Under this program, called I-BEST, or the Integrated Basic Education and Skills Training Initiative, each classroom has both an occupational skills teacher and a basic skills teacher, who may be an English language instructor. While the particular occupational tracks at the community colleges vary, each reflects jobs that are in demand locally, according to state officials. Occupational programs are available, for example, for English language learners who seek to become nursing assistants, medical assistants, phlebotomists, automotive technicians, welders, accountants, and advanced manufacturing workers, among other occupations. In May 2009, an evaluation of I-BEST reported better educational outcomes for participants, including English language learners, compared with nonparticipants. Illinois and Minnesota, which we also visited, as well as Indiana, Ohio, and Wisconsin, have been exploring other approaches to integrating English and occupational training under the Joyce Foundation's Shifting Gears initiative. Certain states we contacted had targeted English language learners in high-demand occupations in other ways. Minnesota's workforce agency has used a state-funded program to support workforce-oriented English language learning with projects that required employers to provide matching funds. To date, the program has sponsored projects in occupational fields such as manufacturing, health care, food processing, hospitality, and horticulture. In addition, the state's workforce agency and its department of education, which manages the Adult Education State Grant Program, have collaborated on 14 projects, some of which integrate English language learning in fields such as manufacturing and health care. All 14 projects will be evaluated, according to state officials. In Texas, the Adult Education State Grant Program and workforce agencies have collaborated to develop industry-specific curricula for English language learning in the fields of services, manufacturing, and health care. Florida is planning to refine its existing curriculum to make it industry-specific, according to a state official. In addition, Arizona has used federal incentive funding for health care education and training for limited English proficient and other low-skilled adults. Some local providers of adult education programs have also responded to employer requests for customized English language instruction for their employees. An Illinois community college, for example, provided classes to various companies, including a printing company, often with support from certain city and state grants. At the state level, Illinois has a program to support such workplace-based activities that serve English language learners and others with literacy needs, with employers paying part of the cost. Also, a California community college provided English lessons to culinary workers, and a California CBO provided safety-oriented English instruction to warehouse workers.
However, some providers told us that their ability to contract with employers to provide such customized English language instruction depends on factors such as having enough people enrolled to meet costs while accommodating different levels of English proficiency. In the course of our site visits, we visited a number of local providers involved in combining English language instruction with occupational training. These providers were involved with a wide range of industries and venues for training or retraining workers, and they used a wide range of funding sources (see table 2). For example, one community college provider in California placed an English language instructor in the same classroom with the occupational instructor, who taught advanced carpentry. In other cases, to accommodate workers' schedules, providers delivered English language and occupational instruction at different times or—when it was delivered on-site—between shifts. Another model, used at community colleges such as City College of San Francisco and Cerritos College in Norwalk, California, involves offering a "support course" with terms and concepts specific to certain occupations; college officials told us this English language support course may precede or follow the occupational course. Aside from their use of Adult Education State Grant Program funds, some states and local jurisdictions have supported English language learning through additional programs of their own, such as through state literacy organizations, libraries, and special schools, and some states aim to offset employers' costs by offering tax credits or other incentives. In 2007, California had enrolled some 466,000 adults in its own English language learning program for adults—almost as many as were enrolled (528,000) in its Adult Education State Grant Program. The state has also invested $50 million annually in its Community-based English Tutoring program, which officials said has reached about 1.5 million adults each year to date. New Jersey funds a separate state program to provide English language learning opportunities through the one-stop system that, according to state officials, has reached about 6,000 individuals annually. Also, Illinois has a state-funded program to provide civics- and citizenship-oriented English language instruction that it has funded at about $2 million annually. At the local level, New York City funds an initiative that serves about 30,000 English language learners annually, according to a city official. Family literacy programs, which can include English language instruction for parents as well as children, have also been an area of state and local activity. Illinois has such a program, which aims to serve those whose child care responsibilities may prevent them from accessing other services. According to a state official, the program was funded at $1.2 million in state fiscal year 2008 and served about 900 adult participants, the majority of whom were English language learners. A local agency in Los Angeles County has used revenue from a state tobacco tax to provide English language learning opportunities through family literacy activities. According to an agency official, this project served 688 adults in state fiscal year 2008.
Additionally, local public schools in 75 locations across the country, including in Memphis, Tennessee, have developed family literacy programs that focus specifically on English language learners, with support from Toyota and the National Center for Family Literacy, according to a representative of the center. Other states have supported English language learning indirectly, by supporting the volunteers who work with English language learners and others enrolled in Adult Education State Grant Program activities. In Illinois, a state agency—the Office of the Secretary of State—has provided access to training and set standards for volunteers who work in these programs. By contrast, in Washington State, a private association that receives state funding fulfills these functions. In fact, when we asked about standards for volunteers, officials from 5 of the 12 states we contacted said that such standards had been set by entities other than the Adult Education State Grant Program. Public libraries have been another venue through which states and local governments have provided funds for English language learning. Officials of the California State Library, for example, told us that the library has a program that reaches more advanced English language learners and that some libraries in the state also use local resources, grants, and fund-raising to support their own English language learning activities. Officials of Arizona's Adult Education State Grant Program also noted that their agency has transferred funding to the Arizona State Library to support services for English language learners. Some have estimated that a significant portion of public libraries across the country provide English language instruction. Additionally, in 7 communities around the country, libraries and other entities, including some adult education providers, have begun to develop an Internet tool, known as the Learner Web, that can help adult English language learners access online and community resources. Public support for people learning English through their libraries was also augmented in 2008, when the American Library Association and the Dollar General Foundation awarded one-time grants to 34 libraries in 18 states to better serve adult English language learners. Also aside from activities associated with the Adult Education State Grant Program, some states have supported adult English language learning through special schools. For example, Washington State provides funding for a vocational school for farmworkers, the Community Agriculture Vocational Institute. According to the local farmworkers program director, the school incorporates workforce-oriented English language instruction as part of tractor, ladder, and pesticide safety classes. In Arizona, charter schools managed by a National Farmworker Jobs Program grantee and by a Job Corps Center provide English language instruction to young adults. In the District of Columbia, a charter school for adults, the Carlos Rosario International Public Charter School, combines English language instruction with occupational training in computer technology and culinary arts. Finally, a few of the state officials we interviewed reported that their states have devised incentives for employers to provide English language learning opportunities. According to state officials, employers in Florida and Georgia may claim a tax credit for providing training for their employees, and this training can include English language instruction.
In New Jersey, according to state officials, employers can be reimbursed for one-half of their employees' salaries while the employees are in training, including English language instruction. At the time of our review, Education had one research study under way to test the effectiveness of a particular approach to adult English language learning, and Education and Labor had some ongoing work related to adult English language learners. Education officials said that there had been little research on what approaches are effective for adult English language learning and that there are limited federal funds for rigorous research. However, while agencies cited a few efforts to collaborate on specific projects, they had not coordinated research planning across agencies to systematically leverage research resources for increasing the knowledge base regarding adult English language learning. Education was funding a study, led by IES, evaluating the effectiveness of one instructional strategy for low-literacy English language learners. The study, funded with $6.9 million in AEFLA national leadership dollars over multiple years, is expected to issue its final report in the summer of 2010. The impetus for this research, according to Education officials, was that while English language learners made up the largest share of participants in the Adult Education State Grant Program, there had been little research on what approaches are effective for adult English language learners, and few instructional strategies were available for low-literacy English language learners. The particular literacy textbook being tested, according to the study's design report, was chosen on the basis of its consistency with characteristics identified in the literature as promising, as well as through recommendations from experts in the field. Education officials said they expect that, depending on the findings, the results could be disseminated for use at the classroom level and could make classroom materials more research-based. Also at the time of our review, Education and Labor were analyzing NAAL survey data on the literacy levels of adults, including English language learners. Education's OVAE and Labor's Employment and Training Administration had a memorandum of understanding covering a contractor's preparation of four issue briefs on the NAAL data, including one brief on the literacy of nonnative English speaking adults. According to Education and Labor officials, the briefs are expected to be released in the late summer of 2009. In addition to this joint effort, according to Labor officials, the contractor is using the NAAL data to prepare a separate report for Labor's Employment and Training Administration, expected in early 2010, that will address the literacy of the working poor, workers in high-growth and declining occupations and industries, and nonnative English speaking workers, and how this information may be used in serving these populations in the public workforce system. Separately, according to an NCES official, Education's NCES was finalizing two studies, expected to be released in one report in 2009, that examine the oral reading and contextual reading skills of adults with the lowest levels of literacy. This official said that the studies will discuss results for different subgroups, including nonnative English speakers.
Federal officials cited interest in identifying effective approaches to adult English language learning but said that little research on the subject has been conducted or planned by federal agencies because of cost and competing priorities. Education officials said that there are limited funds for rigorous research and multiple research priorities within the department. Furthermore, officials noted that sound research takes years of investment and strategic planning. At the same time, however, officials from the agencies did not identify efforts to coordinate research planning on adult English language learning across agencies, coordination that could help leverage the resources used for research. For example, the NCES official responsible for the NAAL studies reported being unaware of Labor's NAAL work at the time that we spoke, and asked for more information about Labor's effort so as to avoid duplication. NIFL prepared a working document of research themes and priorities in adult literacy, with input from experts in the field as well as from Education's OVAE. However, the document was submitted to its Interagency Group in January 2008 and, according to a NIFL official, no further action has occurred. In 2007 and 2008, two working groups identified the need for better collaboration across Education, HHS, Labor, and NIFL on adult education and English language learning research. In September 2007, a planning group, organized to help NIFL consider options for its future work on issues related to adult English language learners, recommended a system to coordinate research efforts on adult English language learner education across organizations and agencies to ensure that strong research methodologies are used and to develop a common knowledge base. However, implementation of this recommendation has not yet been considered by NIFL. Similarly, in July 2008, the Interagency Adult Education Working Group, convened to fulfill Executive Order 13445, reported that there was no unified federal research agenda for adult education and that, across Education, HHS, Labor, and NIFL, each entity invested in research addressing its individual programmatic needs without considering holistically what educators and policymakers need to know about adult learning. The group recommended greater collaboration in research planning efforts to leverage funds to invest in high-quality scientific research. Specifically, the group recommended that federal agencies meet annually to discuss current and planned research efforts, providing agencies with the opportunity to coordinate their efforts and plan joint research when possible. In technical comments on a draft of this report, Education indicated that it intends to address the recommendations of the working group but is "awaiting any final decisions until appropriate leadership positions at Education have been filled under the new administration." The landscape for providing English language instruction to adults is multifaceted. In addition to the numerous federal programs identified in this report, English language instruction can also be provided by for-profit vendors, private employers, and volunteer organizations.
Regarding federal support, a wide array of federal programs may provide English language instruction to adults, yet little data exist on the extent to which these programs are providing such instruction. Because they vary greatly in purpose and focus, it is understandable that these programs do not collect data on the extent of support for adult English language instruction; however, in our view, more coordinated information sharing across these programs and their agencies would have a number of possible benefits. Specifically, coordinated information sharing may help agencies assess the demand for services and find the best ways to deliver those services, help agencies discover inefficiencies in program operations and make improvements that may reduce program costs or increase the number of people served, and help improve the quality of services by identifying the most effective ways to deliver services and obtain positive outcomes. During our review, we found a few instances in which agencies shared information about their initiatives, but we also found instances of missed opportunities to use resources and information to benefit the missions of more than one agency. Similarly, during our review, we found that the agencies invested resources in research studies without taking steps to consider other research needs or plans across agencies. Greater collaboration in research planning could ensure that limited funds for research are put to the best possible use in a field in which there is little research indicating what is effective. Such planning efforts would allow agencies to think more globally about the needs and priorities for research in this area and could help to build a common base of knowledge to inform practitioners on effective approaches to English language instruction for adults. The speed with which adult English language learners acquire English proficiency affects not only the livelihood of these learners and their children, but also their ability to participate effectively in civic life. Without a more coordinated approach, the limited resources available to facilitate English language learning among those who seek it may not be used to their optimal benefit. To ensure that federal programs, states, and local providers are able to optimize resources and knowledge in providing adult English language instruction, we recommend that the Secretary of Education work with the Department of Health and Human Services, the Department of Labor, and other agencies as appropriate to develop a coordinated approach for routinely and systematically sharing information that can assist federal programs, states, and local providers in achieving efficient service provision. Such coordination may include the following activities: developing interagency agreements for sharing information on resources that states and local programs may leverage for adult English language learning, devising a plan for routinely sharing information on available technical assistance, reviewing the extent to which federal guidance assists local providers in leveraging resources, meeting regularly to discuss efforts under way in each agency and to consider the potential for joint initiatives, or establishing clear time frames for the accomplishment of joint objectives.
To ensure the most efficient use of available research resources and to inform practitioners and other stakeholders in the area of adult English language instruction, we recommend that the Secretary of Education work with the Department of Health and Human Services, the Department of Labor, and the National Institute for Literacy to implement a coordinated strategy for planning and conducting research on effective approaches to providing adult English language instruction and disseminating the research results. We provided a draft of this report to the Department of Education, the Department of Health and Human Services, the Department of Labor, and the National Institute for Literacy for review and comment. Education, HHS, and Labor provided written responses to this report (see apps. V, VI, and VII). The three agencies concurred with our recommendations. Education and Labor also provided technical comments, which we incorporated as appropriate. NIFL indicated that it had coordinated with Education, and had nothing to add to Education’s comments. In its formal comments, Education noted that the recommendations were consistent with those of the Interagency Adult Education Working Group, whose July 2008 report, pursuant to Executive Order 13445, identified the potential benefits of coordination at the federal level on adult education. Education also noted that a coordinated federal approach to research is necessary to address the most important issues in adult education, including English language learning, and would help ensure that the federal investment in research is optimized. Additionally, Education expressed the intent to pursue relevant opportunities for increased coordination with other federal agencies. HHS’s formal comments emphasized the need for broader resource mapping and coordination across all levels of government and nonprofit entities to ensure the successful delivery of English language instruction. Finally, Labor, in its formal comments, indicated that it agreed that a coordinated approach to sharing information and conducting planning and research is key to optimizing resources and knowledge in providing English language instruction. Labor added that it is committed to strengthening cooperation with Education and HHS. Additionally, in a separate e-mail, Labor indicated the concurrence of the National Office of Job Corps. We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, the Secretary of Labor, the Secretary of Health and Human Services, the Director of NIFL, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or ashbyc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. Our review focused on (1) trends in the need for and enrollment in federally funded adult English language programs, (2) the nature of federal support for adult English language learning, (3) ways in which states and local public providers have supported English language programs for adults, and (4) federal agencies’ plans for research to identify effective approaches to adult English language learning. 
Overall, to address these research objectives, we selected three key federal agencies—the Departments of Education, Health and Human Services (HHS), and Labor—to be included in the scope of our review. We selected these agencies on the basis of their missions to administer education- and workforce-related programs. We also selected these agencies because of their mandate to collaborate with the National Institute for Literacy (NIFL), which is tasked with serving as a resource to support literacy—the development of reading and writing skills—across all age groups. To answer all of our research objectives, we also conducted state and local interviews in California, Illinois, Minnesota, and Washington State. We selected these states for our site visits because they provided a mix of states with large adult limited English proficient populations (California and Illinois) and states with high growth in those populations (Minnesota and Washington State). We also selected these states for diversity in administrative structures and practices under way regarding adult English language learning. For example, Minnesota's and California's Adult Education State Grant Programs are housed within their state education agencies, while Illinois' and Washington State's are housed in their community college agencies. In addition to these site visits, we selected 12 states for semistructured telephone interviews with state officials responsible for administering the Adult Education State Grant Program. Of these 12 states, 6 were selected because they had the largest adult limited English proficient populations in the nation in 2007 (California, Florida, Illinois, New Jersey, New York, and Texas), and the other 6 states were selected because they had the highest growth rates in their adult limited English proficient populations from 2000 to 2007 (Alaska, Arizona, Arkansas, Georgia, Nevada, and Tennessee). To determine the states with the largest and highest growth adult limited English proficient populations, we used U.S. Census Bureau data on the English speaking ability of adults ages 18 and over who speak a language other than English at home. Specifically, we used American Community Survey (ACS) data for 2007 to determine the states with the largest adult limited English proficient populations, and we used 2000 Census data and 2007 ACS data to determine the states with the highest growth. Together, the 12 states account for 75 percent of the national adult limited English proficient population and 75 percent of the Adult Education State Grant Program's national enrollment in English language classes for 2007. In addition, we consulted with outside researchers, academics, industry associations, union representatives, and others—including the American Library Association, AFL-CIO, Asian American Justice Center, Association of Farmworker Opportunity Programs, Catholic Legal Immigration Network, Center for Law and Social Policy, Institute for the Study of International Migration, Literacywork International, Migration Policy Institute, National Association of Manufacturers, National Council of State Directors of Adult Education, National Center for Family Literacy, National Coalition for Literacy, National Council of La Raza, National Job Corps Association, Pew Hispanic Center, ProLiteracy, Refugee Council USA, and the U.S. Chamber of Commerce. To determine what is known about trends in the need for and enrollment in federally funded programs, we reviewed and analyzed Census and ACS data on English language speaking ability for 2000 to 2007.
Both the decennial Census and ACS collect self-reported information on the English language speaking ability of respondents who speak a language other than English at home. Specifically, respondents are asked whether they speak English “very well,” “well,” “not well,” or “not at all.” To assess the reliability of the Census Bureau data, we (1) reviewed Census Bureau documents and external literature on the reliability of the data and (2) met with internal GAO staff knowledgeable about the reliability of the Census Bureau data. We also reviewed Adult Education State Grant Program enrollment data for 2000 to 2007 reported in the Adult Education National Reporting System (NRS). To assess the reliability of data reported by Education, we (1) reviewed NRS implementation guidelines, (2) interviewed agency officials knowledgeable about the data, and (3) interviewed officials responsible for administering their Adult Education State Grants in the 14 states included in our review about procedures used to ensure the reliability of the data they report to the NRS. We determined that both the Census Bureau and NRS data were sufficiently reliable for the purposes of our report. However, it is important to note a few limitations of and modifications to the data. The Census Bureau data are self-reported by respondents and are not based on any standard assessment of speaking ability. Additionally, the data cover only English speaking ability; respondents are not asked to assess their abilities in reading or writing English. Regarding the NRS data, the definitions of the NRS English language levels changed in 2006. Specifically, the highest level was removed and one of the lowest levels was broken into two levels. We note this change when we discuss enrollment trends by level in the report. In addition, Education officials within the Office of Vocational and Adult Education (OVAE), as well as state officials responsible for administering their Adult Education State Grant programs, reported federal and state efforts to improve NRS data over the last several years. Specifically, OVAE issued a data quality checklist for use by states to certify compliance with assessment policies and developed monitoring tools for OVAE site visits. OVAE and state officials reported providing training and technical assistance, and some of the state officials with whom we spoke described state data systems that have improved their ability to ensure the data are reliable. It is also important to note that the NRS only includes data for programs funded by the Adult Education State Grant Program. We also reviewed information on adult literacy from the National Household Education Surveys (NHES) and the 2003 National Assessment of Adult Literacy (NAAL), both sponsored by the National Center for Education Statistics. To identify whether other federal programs that allow for adult English language learning have national enrollment data specific to such instruction, we also interviewed federal agency and program officials for agencies included in the scope of our review. To assess the nature of federal support, we identified federal programs that allow for adult English language learning within Education, HHS, and Labor. To do this, we began by interviewing federal agency officials about programs within their agencies supporting adult English language learning and reviewing the Catalog of Federal Domestic Assistance and other relevant literature.
We reviewed federal laws and interviewed federal officials responsible for each program to verify that the programs allow for English language learning for adults and to learn about the extent to which they collect spending data and other data related to adult participation in English language instruction in their programs. We also identified some of the federal programs through interviews and data gathered from local providers of English language programs in the 4 states we visited, and corroborated this information with our review of the law and interviews with federal program officials. For the purposes of identifying programs, we generally defined adults as those who were at least age 16 and not enrolled in secondary school. The programs identified in this report may not capture all programs that support English language learning for adults within the three agencies. We reviewed agency strategic plans and, for the programs included in our review, performance reports and the Office of Management and Budget’s Program Assessment Rating Tool. We interviewed Job Corps Center managers and obtained information from 28 National Farmworkers’ Job Program grantees about their experiences in serving English language learners. In addition, in the 4 states we visited, we met with state program officials responsible for administering their Adult Education State Grant, Even Start, refugee and Temporary Assistance for Needy Families programs, and Workforce Investment Act of 1998 (WIA) title I programs. We visited multiple WIA one-stops, Even Start providers, a Head Start grantee, a Community Services Block Grant grantee, a Job Corps Center, a YouthBuild site, a National Farmworkers’ Job Program grantee, two community-based organizations (CBO) receiving Trade Adjustment Assistance funds, and grantees of special Labor initiatives. To determine ways in which states and local providers support English language learning for adults, we conducted semistructured telephone interviews with officials responsible for administering the Adult Education State Grants in the 12 states that we have previously mentioned. In the 4 states we visited, in addition to interviewing state officials responsible for administering federal programs as we discuss in the previous paragraph, we also interviewed providers of adult English language programs. In sum, we interviewed 16 CBOs, 11 community colleges, and 8 adult schools. In selecting providers to visit, we considered recommendations from state officials. We asked state officials responsible for administering their adult education and refugee programs to recommend local providers with the following criteria in mind: demonstrated effectiveness and cost-effectiveness, leveraged community resources or developed private partnerships, exhibited promising practices, or reduced waiting lists. We selected providers from their recommendations to get a range of different types of providers. These interviews focused on ways in which English language instruction is provided, spending and cost, coordination with other public and private entities, and challenges to supporting English language learning. To determine what federal research is planned in this area, we met with federal officials from Education, HHS, and Labor for the programs included in this review.
We also met with officials from the Institute of Education Sciences and NIFL to learn about ongoing research and research priorities regarding English language learning for adults, as well as efforts to coordinate research across the agencies. We also identified and reviewed published research in the field of adult English language learning. We conducted our review from May 2008 through July 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The purposes of the federal programs included in our review are as follows:
- To assist adults to become literate and obtain the knowledge and skills necessary for employment and self-sufficiency; to assist adults who are parents to obtain the educational skills necessary to become full partners in the educational development of their children; and to assist adults in the completion of secondary school education.
- To help break the cycle of poverty and illiteracy and improve the educational opportunities of low-income families, by integrating early childhood education, adult literacy or adult basic education, and parenting education into a unified family literacy program.
- To help break the cycle of poverty and illiteracy and improve the educational opportunities of low-income families, by integrating early childhood education, adult literacy or adult basic education, and parenting education into a unified family literacy program.
- To assist migrant and seasonal farmworker students in obtaining the equivalent of a high school diploma and, subsequently, to begin postsecondary education, enter military service, or obtain employment.
- To help ensure access to high-quality postsecondary education by providing financial aid in the form of grants in an efficient, financially sound, and customer-responsive manner.
- To improve the education of limited English proficient children and youth by helping them to learn English and meet challenging state academic content and student academic achievement standards.
- To provide opportunities to establish or expand activities in community learning centers that provide academic enrichment and additional services to students who attend low-performing schools to help meet core academic achievement standards and to offer families of students opportunities for literacy and related educational development.
- To carry out a program of making grants and contracts designed to identify qualified individuals from disadvantaged backgrounds, to prepare them for a program of postsecondary education, to provide support services for such students who are pursuing programs of postsecondary education, to motivate and prepare students for doctoral programs, and to train individuals serving or preparing for service in programs and projects so designed.
- To support institutions of education in their effort to increase their self-sufficiency by improving academic programs, institutional management, and fiscal stability.
- To support institutions of education in their effort to increase their self-sufficiency by improving academic programs, institutional management, and fiscal stability.
- To improve the academic quality, institutional management, and fiscal stability of eligible institutions, to increase their self-sufficiency and strengthen their capacity to make a substantial contribution to the higher education resources of the nation.
- To enable institutions of higher education, combinations of such institutions, and other public and private nonprofit institutions and agencies to improve postsecondary education opportunities.
- To develop and carry out activities to improve and expand the institution’s capacity to serve Hispanic and other low-income students.
- To help refugees become economically self-sufficient.
- To help refugees become economically self-sufficient.
- To help refugees become economically self-sufficient within 120 to 180 days.
- To provide assistance to needy families; end dependence on government benefits by promoting job preparation, work, and marriage; prevent and reduce out-of-wedlock pregnancies; and encourage two-parent families.
- To promote the school readiness of low-income children by enhancing their cognitive, social, and emotional development.
- To reduce poverty, revitalize low-income communities, and empower low-income families and individuals to become fully self-sufficient.
- To provide workforce investment activities that increase the employment, retention, and earnings of participants and increase occupational skill attainment by participants, which will improve the quality of the workforce, reduce welfare dependency, and enhance the productivity and competitiveness of the nation’s economy. (Adult education and literacy activities may be combined with occupational and job skills training under training services.)
- To strengthen the ability of eligible migrant and seasonal farm workers and their families to achieve economic self-sufficiency.
- To assist disadvantaged youth ages 16 to 24 in obtaining education and employment skills to achieve economic self-sufficiency; to foster leadership skills; and to expand the supply of affordable housing.
- To assist eligible youth ages 16 to 24 who need and can benefit from an intensive program operated in a group setting in residential and nonresidential centers, to become more responsible, employable, and productive citizens.
- To provide adjustment assistance to qualified workers adversely affected by foreign trade.
- To award grants to states that exceed performance levels of WIA title I, title II, and Perkins III to carry out innovative programs consistent with the requirements of each program.
Examples of grants we reviewed and the activities they supported include the following:
- $837,424: Offered training that involved an English language learning component to 120 individuals in automotive technology.
- $494,386: Provided English language and occupational skills training to hospitality industry workers.
- $1,500,000: Supported occupational training and English courses for limited English proficient Job Corps participants to prepare them for health care careers.
- $2,762,496: Provided occupational training and English language instruction to meet the needs of health care employers in critical areas.
- $1,649,348: Built on an existing occupational program focused on the transportation sector and provided remedial English language instruction for trainees whose primary language is not English. (This site was itself an Adult Education State Grant provider.)
In addition to visiting this Job Corps Center, we obtained information from officials who manage 41 Job Corps Centers in multiple states.
These officials stated that the centers they manage provide English language instruction both directly, with their own resources, and indirectly, through other providers. In addition to visiting this grantee, we obtained information from 28 farmworkers’ program grantees, 27 of which offered or provided access to English language instruction. About one-half of these grantees provided instruction both directly, with their own resources, and indirectly, through relationships with Adult Education State Grant Program providers or other providers. This site provided English language instruction directly in the following two ways: through a vocational school for farmworkers and through an English language teacher hired directly, who led classes at a nearby one-stop. In addition to visiting this grantee, we conducted a telephone interview with another grantee who told us that the program had referred participants to a local community college for English language instruction, but was about to acquire language software to provide this service directly. When the Labor grant expired, this grantee applied for and received a grant from the Office of Refugee Resettlement to support English language instruction, according to grantee officials. English language instruction was provided at four one-stops. According to officials, most participants were referred to Adult Education State Grant providers. However, some instruction was provided at the one-stops by non-Adult Education State Grant community-based organizations. English language instruction was provided at 12 one-stops. Clients at the one-stops accessed commercially available English language software, with some support provided by one-stop staff, some of whom were former English language teachers, according to the officials. In addition, referrals were made to Adult Education State Grant Program providers. Cornelia M. Ashby, (202) 512-7215 or ashbyc@gao.gov. Betty Ward-Zukerman, Assistant Director, and Cady S. Panetta, Analyst-in-Charge, managed this report. Other staff who made key contributions to all aspects of the report include Chris Morehouse and Anthony Mercaldo. Alexandra Edwards and Meredith Trauner assisted with data collection. Craig Winslow provided legal assistance. Ashley McCall assisted in identifying relevant literature and background information. Ken Bombara, Ron Fecso, and Cindy Gilbert assisted with the methodology and statistical analysis. Sue Bernstein, Melinda Cordero, and Jena Sinkfield helped prepare the final report and the graphics.
Millions of adults in the U.S. report that they speak limited English, and English language ability appears linked to multiple dimensions of adult life, such as civic participation and workforce participation and mobility. GAO examined (1) the trends in the need for and enrollment in federally funded adult English language programs, (2) the nature of federal support for adult English language learning, (3) ways in which states and local public providers have supported English language programs for adults, and (4) federal agencies' plans for research to identify effective approaches to adult English language learning. To conduct this work, GAO analyzed Census and enrollment data and conducted interviews with federal officials within the Departments of Education, Health and Human Services (HHS), and Labor and the National Institute for Literacy (NIFL); semistructured telephone interviews with state adult education officials in 12 states; site visits to 4 states; and reviews of relevant laws and literature. The number of adults who speak English less than very well grew by 21.8 percent between 2000 and 2007, to roughly 22 million. The Adult Education State Grant Program, the key federal program for adult English language instruction, reported enrollment of about 1.1 million English language learners in 2007--which had remained relatively stable since 2000. However, most state adult education grantees we contacted reported increased demand. Also, there are many federal programs that allow for adult English language instruction for which national enrollment data are not collected. Federal support is dispersed across diverse programs in Education, HHS, and Labor that allow for English language learning in pursuit of other goals and do not collect data on participation in English language learning or the amount of federal funding that supports it. The agencies have undertaken initiatives and provided technical assistance. However, while there has been some collaboration among federal offices on behalf of English language learning, there is no ongoing mechanism to share information on resources or strategies to expand and capitalize on the agencies' individual efforts. States GAO contacted generally did not distinguish funding for English language learning from the other components of adult education, but they did vary greatly in the state matching funds contributed to their programs. GAO found states and local providers collaborating with other federal- and state-funded programs that serve populations likely to need this help. Yet such efforts to coordinate were not universal, and some local providers said they did not know how to access additional instructional or financial resources. States and local providers also supported English language learning in various ways. Education had one research study under way to test the effectiveness of an approach to adult English language learning, and Education and Labor had some ongoing work related to adult English language learners. Education officials said that there had been little research on what approaches are effective for adult English language learning, and noted that federal funds for rigorous research are limited. However, while agencies cited efforts to collaborate, they had not coordinated research planning across agencies to leverage research resources for adult English language learning.
“A basic objective of a modern workmen’s compensation program is to provide protection to workers against loss of income from work-related injuries and diseases. To achieve this goal, the program must carefully weigh the worker’s interest in substantial income benefits against factors such as the loss of incentive for rehabilitation, which some believe may occur if income benefits are too high.” The National Commission’s 1972 report recommended that workers’ weekly benefits replace at least 80 percent of their spendable weekly earnings, subject to a state’s maximum weekly benefit. As states increased workers’ compensation benefits following the National Commission’s report, an issue arose as to whether benefits were so high that incentives for injured employees to return to work might be impaired. Workers’ compensation program analysts are reluctant to take a position on what the “correct” level of workers’ compensation benefits should be, leaving that matter to the judgment of legislators. According to a 1985 Workers Compensation Research Institute report, legislatures in many states must walk a fine line, setting benefits high enough to provide adequate income but not so high as to discourage an employee’s return to work when he or she is no longer disabled. In addition to discussions about the appropriateness of workers’ compensation programs’ benefit levels, some observers have made the point that beneficiaries with long-term or permanent disabilities who were injured early in their careers may have lost promotions or other opportunities to increase their pay relative to the compensation benefits they may be currently receiving. Under FECA, workers’ compensation benefits for those who are totally disabled are 66-2/3 percent of wages for workers without dependents and 75 percent of wages for workers with one or more dependents. These benefits are not subject to federal or state income taxes. Most states’ workers’ compensation programs provide benefits ranging from 60 to 72 percent of gross wages. Six states use a percentage of spendable earnings (ranging from 75 to 80 percent) rather than wages as the basis for computing compensation benefits. The Department of Labor’s Office of Workers’ Compensation Programs (OWCP) is responsible for administering FECA and adjudicating claims submitted on behalf of injured workers. For the year ending June 1997, FECA costs totaled about $1.9 billion—$1.3 billion for compensation benefits, $444 million for medical benefits, and $125 million for death benefits. For this period, OWCP paid medical benefits in about 238,450 cases, death benefits in over 6,260 cases, and compensation benefits in over 78,060 cases. Of these 78,060 cases, 51,265 were on the long-term rolls, as of June 1997. In these 51,265 cases, about 34,700 totally disabled individuals were receiving FECA wage-loss benefits at either the 66-2/3 or 75 percent rate. For the more than 23,250 beneficiaries included in our analyses, we estimated that FECA benefits replaced, on average, over 95 percent of the take-home pay they would have received had they not been injured. Figure 1 shows percentages of beneficiaries whose FECA benefits resulted in various ranges of take-home pay replacement rates. Beneficiaries’ estimated take-home pay replacement rates ranged from a low of about 76 percent to a high of 136 percent depending on when they were injured, their pay when injured, and whether they had dependents or lived in a state with an income tax.
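The relationship underlying these estimates can be restated compactly. In shorthand of our own (not notation used in the report), let W be the current value of a beneficiary’s gross pay, r the benefit rate (2/3 without dependents, 3/4 with one or more), d the payroll deduction rate for retirement and Medicare contributions, and t(W) combined federal and state income taxes:

```latex
\text{replacement rate} = \frac{rW}{(1-d)\,W - t(W)}
```

Because the benefit rW in the numerator is not taxed while the denominator shrinks as t(W) grows, any factor that raises a beneficiary’s income taxes also raises the estimated replacement rate.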
To calculate federal and state income taxes to use in computing beneficiaries’ take-home pay, we had to make assumptions regarding the amount of taxable income earned by a beneficiary’s spouse and the number of exemptions and amounts of deductions claimed for income tax purposes. Although OWCP’s automated databases identified beneficiaries receiving FECA dependent’s benefits, they did not contain information on spouses’ income, additional exemptions, or additional deductions. Under our assumptions, replacement rates were affected by (1) beneficiaries’ dates of injury, (2) pay levels and progressive income tax rate structures, (3) benefit rates based on the absence or presence of dependents, and (4) beneficiaries’ states of residence. The effects of these variables on replacement rates are summarized below and discussed in more detail in appendix II. In general, the older the date of injury, the higher the replacement rate. The older dates result in higher replacement rates because over long periods of time, FECA cost-of-living increases exceeded general schedule (GS) pay increases that individuals would have received had they not been injured. To illustrate, in one case, a worker with an injury date just before March 1, 1996, would have received the March 1, 1997, FECA cost-of-living increase of 3.3 percent. Workers who had not been injured would have received a general schedule pay increase averaging 3 percent in January 1997. In another case, a worker injured in January 1970 would have received FECA cost-of-living increases through March 1997 and, in absolute numbers, these increases would have totaled 139.5 percent of compensation. General schedule pay increases for workers who had not been injured would have averaged 118.7 percent of pay over the same period. The replacement rate for a single person receiving FECA benefits of $20,000 in June 1997 in the first case would be 83.6 percent of take-home pay, whereas, in the second, older case, it would be 101.3 percent of take-home pay. Because the federal government and many states have progressive income tax rate structures, workers generally pay taxes at higher rates as their taxable income increases. In our analyses, applicable federal income tax rates ranged from 15 to 31 percent of taxable income and state income tax rates ranged from 0.5 to 9.3 percent of taxable income. For beneficiaries who earned higher pay, nontaxable FECA benefits replaced pay that would have been subject to higher tax rates. FECA benefits replaced an estimated 91 percent of take-home pay for beneficiaries whose pay before the injury, adjusted to 1997 pay levels, was under $20,000. For beneficiaries with pay over $60,000, FECA benefits replaced over 105 percent of take-home pay. Replacement rates for FECA beneficiaries receiving the dependent benefit averaged an estimated 97 percent compared with 92 percent for beneficiaries who did not receive this benefit. FECA authorizes an additional 8-1/3 percent in benefits for beneficiaries with dependents. If these additional benefits were not provided, some beneficiaries’ replacement rates would be lower because their take-home pay would be compared with a compensation benefit of 66-2/3 percent rather than 75 percent of gross pay. Replacement rates for beneficiaries who lived in states that taxed income were, on average, an estimated 96 percent compared with about 94 percent for those living in states with no income tax.
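The bracket effect can be illustrated with a simplified calculation using the 1997 single-filer rates cited in appendix II. The sketch below is ours, not the report’s model: it applies only the federal standard deduction, one exemption, and the 8.45 percent retirement and Medicare deduction, and it omits state taxes and the injury-date adjustment, so it shows the direction of the effect rather than reproducing the averages above.

```python
def federal_tax_single_1997(taxable):
    """1997 federal income tax for a single filer (brackets from app. II)."""
    tax = 0.15 * min(taxable, 24650)
    if taxable > 24650:
        tax += 0.28 * (min(taxable, 59750) - 24650)
    if taxable > 59750:
        tax += 0.31 * (taxable - 59750)
    return tax

def replacement_rate(gross, benefit_rate=2/3):
    """FECA benefit as a share of take-home pay, single filer, federal tax only."""
    taxable = max(gross - 4150 - 2650, 0)  # standard deduction + 1 exemption
    take_home = gross * (1 - 0.0845) - federal_tax_single_1997(taxable)
    return benefit_rate * gross / take_home

print(f"{replacement_rate(20000):.1%}")  # ~81.6% -- all income taxed at 15 percent
print(f"{replacement_rate(60000):.1%}")  # ~92.5% -- top dollars taxed at 28 percent
```

Under these simplified assumptions, the roughly 11-point gap between the two pay levels comes entirely from the progressive rate structure.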
Like federal income taxes, income taxes that workers paid to states before they were injured would serve to further reduce their take-home pay, thereby increasing the portion of take-home pay replaced by nontaxable FECA benefits. To calculate take-home pay replacement rates, we made certain assumptions about beneficiaries based on data that were readily available to us. The effects of using different assumptions about spouses’ income, numbers of exemptions, and amounts of deductions are summarized here and discussed in more detail in appendix II. Spouse’s income. In estimating replacement rates for beneficiaries with a spouse, we assumed that their spouses did not have taxable income. If spouses had income, replacement rates could be higher. The presence of a spouse’s income results in a higher effective rate of tax on the income earned by the beneficiary returning to work. A higher effective tax rate means that the returning worker’s take-home pay could be lower and, therefore, the ratio of FECA benefits to take-home pay could be higher. Number of dependents (exemptions). In computing federal income taxes, we assumed that beneficiaries who received augmented FECA benefits had one dependent and that the dependent was a spouse. In 1997, each exemption claimed was worth $2,650 in computing taxable income. Replacement rates would have decreased by about 1.5 percentage points for each additional exemption. For example, the replacement rate for a married worker with 1 child (3 exemptions), with income of $30,000, would have been 89.3 percent compared with 90.7 percent for a couple (2 exemptions). We did not assume additional exemptions for age or blindness. Tax deduction amounts. In computing income taxes, we assumed that beneficiaries would have claimed federal standard deduction amounts of either $4,150 if single, or $6,900 if married. If these individuals had itemized deductions that were either double or triple the standard deduction amounts, their take-home pay replacement rates would have been lower than our estimates by about 2 to 7 percentage points, depending on (1) whether they were single or married and (2) their pay before being injured. For beneficiaries who did not have taxable income while working, whether because they had low income, large deductions, or multiple dependents, replacement rates would have been about 73 percent if single, or about 82 percent if married. For beneficiaries who did not owe income taxes, take-home pay would be gross pay minus deductions of 8.45 percent for retirement and Medicare benefits. The relationship between FECA benefits (either 66-2/3 or 75 percent of gross pay) and take-home pay would be the same (about 73 or 82 percent, respectively) at all pay levels. We were unable to determine whether beneficiaries’ career progression patterns were affected by their on-the-job injuries. Our analyses showed that about 70 percent of all beneficiaries were over 40 years old when they were injured, and the average adjusted pay of beneficiaries in the selected occupations approximated the average pay of active workers in the same occupations. These characteristics might suggest that the beneficiaries were not in the early stages of their careers at the time of their injuries. However, our analyses were limited because occupational data were available for only about one-third of the beneficiaries and because data were not readily available on beneficiaries’ career progression up to the time of their injuries.
Career pattern information we obtained from agency officials for workers in the same occupations as selected FECA beneficiaries in our analysis—letter carriers, postal distribution workers, registered nurses, practical nurses, nursing assistants, and air traffic controllers—indicated that career patterns can vary widely. These occupations were selected because they were either the occupations (1) that were the most frequently identified in the OWCP information we analyzed or (2) for which many beneficiaries were likely to be employed by the same agency. Career pattern information obtained from the above officials and information on FECA beneficiaries from OWCP’s records are discussed in the following sections. According to FECA data, at the time of injury, the average age of the 1,897 letter carriers and postal distribution workers we could identify was about 42. The pay of these workers at the time of injury, adjusted to 1997 pay levels, averaged $35,054 and $36,588, respectively. According to Postal Service officials, workers in letter carrier and postal distribution crafts are covered under union contracts with Postal Service management. Entry-level pay in March 1997 for workers in these crafts was $26,375 and $22,404, respectively. Upon completing contractual waiting periods, these workers would automatically receive longevity-step increases. Workers would normally progress from entry-level pay to maximum pay within the same grade in 12.4 years. For letter carriers, whose entry level is grade 5, maximum pay was $36,863 as of March 1997; for postal distribution workers, whose entry level is grade 4, maximum pay was $35,118. In addition to their basic pay, these workers may also receive premium pay for night or Sunday work. Postal Service officials told us that most letter carriers and postal distribution workers remain in the same pay grade throughout their careers. They usually receive longevity-step pay increases and twice yearly cost-of-living increases. As of September 1997, almost 80 percent of about 52,650 postal distribution workers were at grade 4, and almost 50 percent of the 40,877 workers in this pay grade were in the highest step. For the approximately 201,500 letter carriers, about 85 percent (172,590) were at grade 5 and, of these, over 70 percent (123,250) were in the highest step. As table 1 shows, the average ages and adjusted pay of the 445 beneficiaries in nursing occupations approximated the average ages and pay of both VA and non-VA nurses. The entry level for most of VA’s registered nurses in clinical practice is generally somewhere between the equivalent of a GS-6 and GS-8, according to a VA official familiar with the typical career patterns of VA nurses. Licensed practical nurses generally start at the equivalent of a GS-4, and nursing assistants are generally hired at the equivalent of a GS-3. According to the official, registered nurses with a bachelor of science degree generally advance to the equivalent of a GS-11 in 3 to 5 years; nurses without a bachelor’s degree generally advance to the equivalent of a GS-9. Nurses who reach the equivalent of a GS-12 would usually have a bachelor of science degree and function in positions with responsibilities beyond the staff nurse. These additional responsibilities would include being a nurse manager, head nurse, care manager, or instructor. Furthermore, for nurses to advance beyond the GS-12 level, they generally would have to have a master’s degree.
In addition to clinical practice, some VA registered nurses become involved in education and training, administration, or research activities for which they would generally be paid at the GS-12 or GS-13 levels. According to VA pay information, about 700 of VA’s 32,600 registered nurses serve in executive, supervisory, or management positions with pay equivalents in the GS-14/15 range. The VA official told us that over a 3- to 5-year period, the highest grade to which VA’s nursing assistants would likely advance would be the equivalent of a GS-5. Most nursing assistants would be at the GS-4 level. Within about 2 years, VA’s licensed practical nurses could reach the equivalent of a GS-5 and within 4 to 5 years a GS-6. Most practical nurses would function at the GS-5 level. To receive higher pay, some nursing assistants would change career patterns and work as radiological or medical technicians, or as physical therapists. Some practical nurses return to school to become registered nurses or transfer to other VA departments. For the 74 beneficiaries we identified in ATC occupations, FECA information showed that at the time of injury, their average age was 39.4 and their average pay adjusted to 1997 pay levels was $68,074. According to 1997 OPM information, individuals in ATC occupations averaged almost 42 years of age with average pay of over $65,230. About 43 percent of these individuals were at the GS-14 level with average pay of about $74,750. According to an FAA official, most individuals in ATC occupations begin their careers as GS-7s. About 75 percent of these individuals have ATC responsibilities at either air route traffic control centers or at FAA towers or terminals. Other individuals in ATC occupations serve as flight service station specialists and have responsibility for providing pilots with weather briefings and receiving flight plans filed by airlines and pilots. The size and type of FAA facility at which air traffic controllers serve generally determine their typical career patterns. According to the official, controllers stationed at air route traffic control centers and the busier airports would generally reach the GS-14 level. Those serving at smaller airports would generally reach the GS-12 or GS-13 level depending on the amount of air traffic serviced by the facility. FECA and OPM information for individuals in additional occupations is shown in appendix III. For the more than 30,000 beneficiaries we profiled, annual compensation benefits averaged about $26,220, and the current value of their gross pay before they were injured averaged $34,833. About 70 percent of the beneficiaries were over 40 years old when injured. As of June 1997, about 65 percent were over 55 years old. About 73 percent of the beneficiaries had a spouse or at least one dependent. For about 90 percent of the 30,000 beneficiaries, the current value of their pay before they were injured was under $50,000 after adjusting for pay comparability increases. Figure 2 contains profile information on the percentages of beneficiaries with and without dependents, by age ranges when they were injured and as of June 1997, by amounts of annualized workers’ compensation benefits, and by amounts of pay received at the time of injury adjusted to 1997 pay levels. In addition, over 18 percent (5,549) of the more than 30,000 beneficiaries lived in states that did not have an income tax. As of June 1997, about 74 percent of the beneficiaries lived in the same state as the one where they were injured. 
We obtained written comments on a draft of this report from the Department of Labor. Labor commented that the report did a good job of describing the various assumptions and methodology we used to develop the replacement rate estimates and was very clear on how changes in each individual assumption would generally affect the replacement rates for classes of workers. Labor also suggested that our analysis might have been better informed if, instead of assuming that all beneficiaries receiving augmented benefits had a nonworking spouse, we had used readily available data and statistical sampling techniques to develop replacement rate estimates that took into consideration the incidence of dual earners, the amounts of income earned by these couples, and estimates of the number and distribution of additional dependents by household. Labor added that in general it might have been more useful if we had offered some estimates based on likely combinations of assumptions and that varying assumptions one by one, while it illustrated an impact or tendency, was probably misleading when applied universally to all cases. In view of the time constraints we faced when we started this assignment, we chose to develop take-home pay replacement rates based on a methodology similar to those used in other workers’ compensation studies, such as those conducted by the Workers Compensation Research Institute. We agree with Labor that it may have been possible to develop a more refined estimate of the overall replacement rate had we used other sources of information to make additional assumptions about FECA beneficiaries. We also agree that had we developed and analyzed likely combinations of other assumptions, we could have presented different estimates of take-home pay replacement rates. However, we believe that our methodology provided a useful overall replacement rate estimate that was based on reasonable assumptions. Because we recognized that our result was dependent on the different assumptions we made, we both acknowledged this and provided a set of analyses that illustrated the sensitivity of our result to alternative assumptions. Had we developed alternative estimates using additional data or combinations of alternatives as Labor suggested, those estimates would have been dependent on limitations inherent in these additional sources of data and any further assumptions about the beneficiary population. In any event, the alternative replacement rate estimates suggested by Labor may or may not reflect FECA beneficiaries’ actual replacement rates. For example, regarding marital status, we assumed that all beneficiaries who received the augmented dependent benefit had a spouse because the automated database did not distinguish between beneficiaries who were married or unmarried. Although the presence of spousal income would influence replacement rates, income of other dependents generally would not. Because an unknown number of beneficiaries may not have had a spouse, but rather a dependent such as a child or parent, we chose not to estimate the amount of income that may be associated with an unknown number of spouses. Recognizing that this would tend to understate our replacement rate calculations, we supplemented our primary analysis with examples of how changes in assumptions on spousal income would affect our replacement rate calculations, but we did not intend that the examples be applied universally to all cases.
Labor also provided several other suggestions for expanding our analysis. These suggestions and our detailed responses to them are contained in appendix IV. As agreed with your office, unless you announce the contents of this report earlier, we plan no further distribution of this report until 10 days after its issue date. At that time we will send copies of this report to the Chairmen and Ranking Minority Members of the House Committee on Education and the Workforce and its Subcommittee on Workforce Protections; the House Committee on Government Reform and Oversight and its Subcommittee on Government Management, Information, and Technology; the Senate Committee on Governmental Affairs and its Subcommittee on International Security, Proliferation and Federal Services; other interested congressional committees and members; the Secretaries of Labor, Transportation, and VA; the Postmaster General of the United States; and the Directors of the Office of Management and Budget and OPM. Copies will be made available to others on request. Major contributors to this report are listed in appendix V. Please contact me at (202) 512-8676 if you or your staff have any questions concerning this report. To estimate the percentages of take-home pay replaced by FECA benefits, we first identified, for the “chargeback year” ending in June 1997, beneficiaries on the long-term rolls who received unreduced wage-loss compensation benefits and the dollar amounts of their benefits. Because collecting this information from beneficiaries’ case files maintained in OWCP’s district offices would have been time consuming and expensive, we used OWCP’s automated claims management and compensation payment systems to obtain this information. We did not independently verify the data obtained from these automated systems. We then estimated beneficiaries’ take-home pay by calculating the current value of their pay at the time of injury and deducting amounts for retirement benefit contributions and federal and state income taxes. Various workers’ compensation organizations define take-home pay as an employee’s estimated gross wages less deductions for the employee’s share of mandatory retirement contributions; federal income taxes; and, if applicable, state income taxes. We did not take into consideration discretionary deductions that employees could take for items such as thrift savings plan contributions, health and life insurance, and savings bonds because they are not commonly taken into account in workers’ compensation take-home pay calculations. For our calculations, we made assumptions about beneficiaries’ federal retirement plan participation, marital status, numbers of dependents, amounts of deductions to determine taxable income, and spouses’ incomes. Finally, we estimated beneficiaries’ replacement rates by dividing their FECA benefits by their take-home pay. Of the approximately 78,000 beneficiaries who received compensation benefits for the year ending in June 1997, 51,265 were on OWCP’s long-term rolls. OWCP had placed most of these 51,265 beneficiaries into one of the following three wage-earning capacity (WEC) categories based on the extent of their disability. No WEC. In general, totally disabled beneficiaries who have little or no reemployment potential. These beneficiaries receive unreduced workers’ compensation benefits. WEC undetermined. Beneficiaries with temporary total disabilities who also receive unreduced workers’ compensation benefits.
Labor’s procedures call for it to review the status of these cases once a year. WEC established. Beneficiaries who received reduced compensation benefits because they were partially disabled and either were working or had the ability to work. Compensation benefits are determined by a formula that takes actual or potential earnings into consideration. We obtained information on 30,057 beneficiaries on the long-term rolls who either did not have a WEC or had an undetermined WEC and whose last two benefit payment checks for the 1997 chargeback year were for the same amount. We selected cases in which the last two checks were the same to eliminate cases in which beneficiaries received either lump-sum payments or payments for only a portion of the 4-week period normally covered by a payment. For our analyses, we excluded FECA beneficiaries who (1) were expected to receive benefits for relatively short periods before returning to work; (2) received schedule awards; (3) had established WECs; (4) lived overseas; or (5) received FECA benefits that were less than the minimum authorized under FECA because they were part-time or nonfederal employees (e.g., Civil Air Patrol). In many cases, the calculation of take-home pay replacement rates required the computation of individual states’ income taxes. To limit the number of states for which we needed to make these calculations, we limited our replacement rate analyses to about 75 percent of the 30,057 beneficiaries selected. We chose states where the largest number of these beneficiaries resided as of June 1997, until we had selected enough states—19—to include about 75 percent of beneficiaries. We then developed replacement rate information for 23,257 beneficiaries (77 percent), who resided in 19 states, 4 of which did not have an income tax. Beneficiaries’ actual pay at the time of injury could not be efficiently determined because this information is only available from beneficiaries’ case files located in OWCP’s district offices. We therefore made several calculations to estimate the current value of beneficiaries’ take-home pay. First, we recomputed beneficiaries’ workers’ compensation benefits to reflect benefits received at the time of injury by reducing their June 1997 benefits by the amount of periodic cost-of-living allowances they received. Second, based on this recomputation of benefits received at the time of injury and whether they had at least one dependent as of June 1997, we calculated employees’ pay before their injury based on either the 66-2/3 or 75 percent benefit level. Third, by increasing employees’ pay at the time of injury by average federal pay comparability increases authorized since then, we calculated the current value of beneficiaries’ pay at the time of injury. Fourth, from this amount, we made deductions for retirement benefit contributions; federal income taxes; and, where applicable, state income taxes in computing the current value of beneficiaries’ take-home pay. Lastly, we compared current FECA benefits received to these take-home pay amounts to determine take-home pay replacement rates. Postal Service, blue-collar, and certain other federal employees are in pay plans that differ from the general schedule plan that covers most federal civilian workers. 
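The five calculation steps just described can be expressed schematically. The sketch below is ours, not the actual program used for the analyses; the cumulative adjustment factors and the tax function are stand-ins for the case-by-case cost-of-living history, pay comparability history, and tax rules:

```python
def estimate_replacement_rate(benefit_june_1997, has_dependent,
                              cola_factor, pay_factor, income_tax):
    """Five-step take-home pay replacement rate estimate.

    cola_factor: cumulative FECA cost-of-living growth since injury
                 (modeled here as a single multiplier)
    pay_factor:  cumulative federal pay comparability growth since injury
    income_tax:  function mapping current-value gross pay to combined
                 federal and, if applicable, state income tax
    """
    rate = 0.75 if has_dependent else 2 / 3
    benefit_at_injury = benefit_june_1997 / cola_factor               # step 1
    pay_at_injury = benefit_at_injury / rate                          # step 2
    current_pay = pay_at_injury * pay_factor                          # step 3
    take_home = current_pay * (1 - 0.0845) - income_tax(current_pay)  # step 4
    return benefit_june_1997 / take_home                              # step 5
```

Steps 1 through 3 are what make older injury dates matter: when the cost-of-living factor has outgrown the pay comparability factor, the current benefit is large relative to the reconstructed current value of pay in the denominator.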
In computing the current value of workers’ pay before the injury, we used OPM information on average federal pay comparability increases applicable to the general schedule because OWCP’s automated databases did not contain sufficient information to identify either the occupations of over two-thirds of the beneficiaries we analyzed or the pay plans of beneficiaries. From OWCP and OPM information, we determined FECA benefit levels, the presence or absence of dependents, whether the beneficiary resided in a state with an income tax, and beneficiaries’ estimated pay at the time of injury. However, to develop our estimates of take-home pay replacement rates, we also needed to make assumptions regarding beneficiaries’ retirement and Medicare contributions, numbers of exemptions, amounts of itemized deductions (if taken), and spousal income. Changing the assumptions would change the estimated ratio of FECA benefits to take-home pay. The assumptions we made in calculating take-home pay replacement rates for our principal analyses follow. While information was not readily available to support different assumptions, we used different assumptions about numbers of exemptions, income tax deductions, and spousal income to illustrate how they could influence take-home pay replacement rates. Retirement and Medicare contributions. We assumed that all beneficiaries participated in CSRS and Medicare and that total deductions for these programs were 8.45 percent. Our profile information indicated that a high percentage of FECA beneficiaries on the long-term rolls were over 55 years old or were injured many years ago. Because the Federal Employees Retirement System (FERS) was not established until 1986, we assumed that most beneficiaries would have been CSRS participants. Under both CSRS and FERS, deductions for retirement and Medicare benefits totaled 8.45 percent. Under CSRS, deductions in 1997 were 7 percent for retirement benefits and 1.45 percent for Medicare benefits. Under FERS, deductions were 6.2 percent for Social Security retirement benefits, 0.8 percent for a FERS annuity, and 1.45 percent for Medicare benefits. However, under FERS, the 6.2 percent contribution for Social Security retirement benefits applied to only the first $65,400 of pay in 1997. Thus, take-home pay under our assumptions would be understated for the relatively small number of FECA beneficiaries whose 1997 pay was over $65,400 and who were in FERS. In these cases, replacement rates would be lower. Exemptions (dependents). In those cases in which FECA beneficiaries received augmented FECA benefits of 8-1/3 percent, the database did not indicate the exact number of dependents because the benefit is the same whether the beneficiary had one or more dependents. We assumed that such beneficiaries had only one dependent and that the dependent was a spouse. We made this assumption based on the average age of the beneficiaries analyzed and to simplify our tax and take-home pay calculations. In cases where there is more than one dependent, and therefore more exemptions for tax purposes, take-home pay would be higher and the replacement rate would be lower. Appendix II, table II.6 shows the effects of different exemption assumptions on take-home pay replacement rates. Because few beneficiaries were injured and added to the long-term rolls after they were 65 years old, we did not consider whether additional exemptions for age or blindness may have applied in computing take-home pay.
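The retirement deduction assumption can be illustrated with the component rates just given. A small sketch (function names ours) showing why assuming CSRS slightly understates take-home pay only for the higher-paid FERS participants noted above:

```python
SS_WAGE_BASE_1997 = 65400  # 1997 Social Security taxable wage base

def csrs_deduction(pay):
    # 7 percent retirement + 1.45 percent Medicare = 8.45 percent of all pay
    return (0.07 + 0.0145) * pay

def fers_deduction(pay):
    # 6.2 percent Social Security (capped) + 0.8 percent FERS annuity
    # + 1.45 percent Medicare
    oasdi = 0.062 * min(pay, SS_WAGE_BASE_1997)
    return oasdi + (0.008 + 0.0145) * pay

print(csrs_deduction(40000), fers_deduction(40000))  # both ~3,380 (8.45 percent)
print(csrs_deduction(70000), fers_deduction(70000))  # ~5,915 vs. ~5,630
```

Below the wage base, the two plans deduct the same share of pay; above it, FERS deducts less, so actual take-home pay is higher and the replacement rate lower than under the CSRS assumption, which is the direction the report notes.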
Itemized deductions. In our computations of federal and state income taxes, we used federal and state standard deduction amounts for both single and married beneficiaries except in cases where state taxes could have exceeded federal standard deduction amounts. In these cases, our computations of federal income taxes used itemized deductions based on state income tax amounts rather than standard deduction amounts. To support our use of the standard deduction for computing income taxes, we used 1995 Internal Revenue Service (IRS) information on tax filers who itemized deductions to show that lower-income tax filers generally did not itemize deductions. Appendix II, table II.4 shows how the use of different itemized deduction amounts in computing federal income taxes would reduce take-home pay replacement rates. Appendix II, figure II.5 shows the range of take-home pay replacement rates by amount of pay for single and married beneficiaries claiming different deduction amounts. Spousal income. We assumed that if a FECA beneficiary received the dependent benefit, the dependent was a spouse. For our principal analyses, we assumed the spouse had no income. If spouses did have income, the beneficiaries’ effective take-home pay replacement rates would have been higher. Examples of the effect of spousal income on take-home pay replacement rates are shown in appendix II, figure II.4. In estimating federal income taxes for our principal analyses, we generally computed taxable income by deducting amounts for federal standard deductions (i.e., $4,150 for a single individual or $6,900 for a couple filing a joint return) and exemptions (i.e., $2,650 for each exemption) from beneficiaries’ gross pay adjusted to 1997 levels and applied 1997 federal income tax rates. Because over 25 percent of the FECA beneficiaries analyzed were single and because the average age of all beneficiaries analyzed was 61, we did not consider the effects of earned income tax credits in computing federal income taxes. If we had considered this credit for eligible FECA beneficiaries, effective take-home pay replacement rates would have been lower. In computing take-home pay for FECA beneficiaries who resided in states with an income tax, we took into account amounts the states allowed for standard deductions, spousal exemptions, and, where appropriate, other deductions or tax credits that were based on gross income in computing state income taxes. We obtained information on 1997 state income tax rates, exemptions, and standard deductions from the Research Institute of America’s All States Tax Handbook and individual states’ income tax forms and instructions. The residents of some states could be subject to county or city income taxes. However, we did not attempt to identify and take these types of taxes into consideration in computing FECA beneficiaries’ take-home pay because it would have been time consuming and expensive to do so. If applicable, deductions for these taxes from pay would serve to increase take-home pay replacement rates. Our comparison of FECA benefits with the current value of take-home pay did not take into consideration beneficiaries’ projected salary growth that might have resulted from merit pay increases or promotions had they not been injured. Assumptions about beneficiaries’ potential promotions would have been very speculative. Also, other studies we reviewed in developing our replacement rate methodology did not consider future promotion potential to be a factor in calculating replacement rates.
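The deduction and exemption sensitivities described above can be checked with a simplified calculation. The sketch below is ours and applies federal tax only, with all taxable income in the 15 percent bracket, which holds for the $30,000 illustrations used in this report:

```python
def replacement_married(gross, deductions, exemptions=2, benefit_rate=0.75):
    """Replacement rate for a married beneficiary, 1997 federal tax only.

    Assumes all taxable income falls in the 15 percent bracket, which
    is true at $30,000 of pay.
    """
    taxable = max(gross - deductions - 2650 * exemptions, 0)
    take_home = gross * (1 - 0.0845) - 0.15 * taxable
    return benefit_rate * gross / take_home

# Standard, double, and triple the $6,900 standard deduction at $30,000:
for multiple in (1, 2, 3):
    print(f"{replacement_married(30000, 6900 * multiple):.1%}")
# 90.7%, 87.1%, 83.8% -- a spread consistent with the roughly 2-to-7
# percentage point reductions described across pay levels.

# Adding a third exemption (e.g., one child) at the standard deduction:
print(f"{replacement_married(30000, 6900, exemptions=3):.1%}")  # 89.3%
```

The 90.7 and 89.3 percent outputs match the couple and couple-with-one-child figures given earlier for $30,000 of pay, which suggests this stripped-down version captures the mechanics of those examples.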
To obtain information on the career patterns of workers in selected occupations that were the same as the occupations of FECA beneficiaries, we first used occupational code data from OWCP’s automated systems to identify the most frequently coded occupations of FECA beneficiaries. Usable information on beneficiaries’ occupations was available for only 9,900 of the 30,057 workers we analyzed. According to an OWCP analyst, Labor has required agencies to furnish occupational code information for injured workers since October 1986. However, many of the cases that we analyzed were established before then. For the 9,900 FECA beneficiaries for whom occupational code information existed, over 550 different occupations were represented. As agreed with your office, we developed career pattern information for workers in the following occupations: letter carriers, postal distribution workers, nurses, and air traffic controllers. We selected these occupations because they were either the occupations (1) that were coded the most frequently or (2) for which many beneficiaries were likely to be employed by the same agency. We interviewed officials from the Postal Service, FAA, and VA who were familiar with the career patterns of employees in these occupations. We supplemented and compared this information with readily available personnel data on active employees obtained from either these agencies or OPM. In addition, for workers in other frequently cited occupations, we compared aggregate age and pay information from OPM’s Central Personnel Data File with FECA information on beneficiaries with the same occupations. Due to time constraints, it was beyond the scope of our review to analyze the many factors that could be involved in determining the extent to which beneficiaries’ career progression was affected by their injuries. To determine beneficiaries’ FECA benefit amounts, current ages, ages when injured, and other characteristics, we relied on data from OWCP’s automated claims management and compensation payment systems. We developed information on beneficiaries’ characteristics for 30,057 beneficiaries—nearly 23,250 beneficiaries for whom we developed replacement rate information and approximately 6,800 of the remaining 11,460 beneficiaries on the long-term rolls who were receiving FECA wage-loss compensation benefits of either 66-2/3 or 75 percent of gross pay. We did not verify the information on beneficiaries’ characteristics obtained from OWCP’s automated systems. OWCP officials told us that they generally believed the information from these systems to be highly reliable when used in the aggregate. For purposes of our analyses, we used the date of injury for computing FECA benefits and pay at the time of injury. An OWCP analyst told us that information on effective dates of some beneficiaries’ pay rates may not always be available or accurate because beneficiaries may have (1) been on and off the rolls over a period of years or (2) suffered from occupational diseases rather than traumatic injuries. Our work was done in Washington, D.C., between October 1997 and July 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Labor. Labor’s comments are summarized at the end of the letter and are presented in full in appendix IV. The following sections discuss in more detail the factors and assumptions that influenced the estimated replacement rates presented in the letter.
More recently injured beneficiaries generally had lower replacement rates, on average, than those who were injured many years ago. Over the years, FECA benefits were increased by cost-of-living allowances that exceeded general schedule pay comparability increases that beneficiaries would have received had they not been injured. Table II.1 shows average replacement rates based on year of injury and the number of beneficiaries injured during each period. Since 1966, FECA cost-of-living and general schedule pay comparability increases have generally differed in amounts and effective dates. Amounts of cost-of-living or pay comparability increases to which beneficiaries would have been entitled depended on their dates of injury. Figure II.1 compares FECA cost-of-living increases with pay comparability increases for each year from 1966 to 1997. Table II.2 shows the cumulative amount of cost-of-living and pay comparability increases that beneficiaries injured before selected dates would have received through June 1997. (Table II.2 presents, for increases between January 1970 and June 1997, the sum of FECA cost-of-living increases and the sum of average pay comparability increases beneficiaries would have received if not injured, both in percent.) Cost-of-living increases are provided to injured employees who stopped work on account of an injury more than 1 year prior to the effective date of the increase. Replacement rates vary for workers receiving different amounts of pay. Because federal and many state tax rates are progressive, higher pay levels generally mean higher taxes. Higher tax rates reduce take-home pay, thereby increasing replacement rates. Conversely, in states with no state income taxes, replacement rates for beneficiaries with the same income would be lower than they would be in states with an income tax. In 1997, for single individuals, federal income tax rates were 15 percent on taxable income up to $24,650, 28 percent on taxable income up to $59,750, and 31 percent on taxable income up to $124,650. For married individuals filing jointly, federal income tax rates were 15 percent on taxable income up to $41,200 and 28 percent on taxable income up to $99,600. Table II.3 shows that average replacement rates generally increased as beneficiaries' pay increased. Higher pay would generally be subject to higher income tax rates, which cause an increase in replacement rates. In addition to changes in take-home pay replacement rates related to progressive federal income tax rates, many FECA beneficiaries lived in states that also taxed income. Beneficiaries living in states with income taxes would have less take-home pay and thus higher replacement rates. Of the 23,257 beneficiaries for whom we developed replacement rate information, about 17,200 lived in 15 states with a state income tax. Of these 15 states, 3 had flat tax rates ranging from 2.8 to 5.95 percent of income, and 12 had progressive tax rates ranging from 0.5 to 9.3 percent of income. In addition, one state had an income tax but did not tax earnings from salaries or wages. In computing state income taxes, we considered exemption and standard deduction amounts allowed by the states. Our estimate of the average take-home pay replacement rate for all beneficiaries for whom we developed information was about 95 percent; for beneficiaries in states without an income tax, about 94 percent; and for beneficiaries in states with an income tax, about 96 percent.
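To make the bracket effect concrete, the following is a minimal sketch, not the statistical program used for our analyses, of how progressive rates raise replacement rates as pay rises. It uses the 1997 single-filer rates, standard deduction, and exemption amount cited in this appendix; the 7 percent retirement and 1.45 percent Medicare withholding rates are illustrative assumptions, brackets above $124,650 are omitted because most beneficiaries' pay fell well below that level, and state taxes are ignored.

```python
# Simplified replacement-rate sketch using the 1997 figures cited above.
BRACKETS = [(24650, 0.15), (59750, 0.28), (124650, 0.31)]  # (upper bound, rate)

def federal_tax_single(gross, exemptions=1):
    """1997 federal income tax for a single filer taking the standard deduction."""
    taxable = max(0.0, gross - 4150 - 2650 * exemptions)
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        tax += rate * max(0.0, min(taxable, upper) - lower)
        lower = upper
    return tax

def replacement_rate(gross, benefit_share=2 / 3):
    """Tax-free FECA benefit as a share of counterfactual take-home pay.
    Assumes 7 percent retirement and 1.45 percent Medicare withholding."""
    take_home = gross - federal_tax_single(gross) - (0.07 + 0.0145) * gross
    return benefit_share * gross / take_home

for gross in (30000, 60000):
    print(f"${gross:,}: {replacement_rate(gross):.1%}")
# Prints roughly 83.4 percent at $30,000 and 92.5 percent at $60,000: more of
# the higher salary falls into the 28 percent bracket, shrinking take-home pay
# relative to the tax-free benefit and thus raising the replacement rate.
```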
Figure II.2 shows how different state income tax rates would influence replacement rates for beneficiaries earning various amounts of pay. Some beneficiaries lived in areas, such as counties and cities, that also taxed income. For our analysis, however, we did not identify or consider amounts of income taxes paid to local jurisdictions. Had we included these taxes, they would have further reduced beneficiaries' take-home pay and increased replacement rates. Without the additional FECA dependents allowance, married injured workers whose spouses did not work would have lower take-home pay replacement rates than those who were single because their standard deduction and exemption amounts would be higher than single beneficiaries' and thus their taxes would be lower. Figure II.3 shows replacement rates for (1) single beneficiaries and beneficiaries with dependents based on their respective benefit levels and various pay amounts received and (2) beneficiaries with dependents if additional FECA benefits of 8-1/3 percent were not provided. Replacement rates would have decreased if we had assumed that the number of exemptions or amounts of itemized deductions claimed for income tax purposes were greater than the amounts we used in our calculations. Each of these factors and the extent of their effects are discussed in more detail in the following subsections. The presence of a spouse with income could raise the value of nontaxable workers' compensation benefits because, had there not been an injury, the couple's combined taxable income might have been subject to a higher tax rate. Higher tax rates equate to higher wage replacement rates. Pay earned by married workers when they returned to work after they had been injured would not be accompanied by additional exemptions or, in most cases, deductions for the couple. However, additional taxable wages based on both incomes could be subject to the same or higher tax rates than the last dollars earned by the injured worker's spouse. Compared with single-income couples, replacement rates for two-income couples are typically higher at both lower and higher incomes, according to a Workers Compensation Research Institute study. Figure II.4 shows, for beneficiaries receiving different amounts of benefits, that the more taxable income a beneficiary's spouse had, the higher the replacement rate. Standard deductions for 1997 federal income tax purposes for single and joint return filers were $4,150 and $6,900, respectively. Using these deduction amounts and our other assumptions, the percentages of pay at the time of injury adjusted to 1997 pay levels replaced by FECA benefits for single beneficiaries and beneficiaries with dependents were 92 and 97 percent, respectively. If itemized deductions were two or three times the standard deduction amounts we used, replacement rates would decrease by amounts ranging from about 2 to 7 percent depending on beneficiaries' pay. Table II.4 shows examples of changes in replacement rates for single and married beneficiaries with pay of $30,000 or $60,000 if their itemized deductions were double or triple the 1997 standard deduction amounts. As shown, replacement rates were highest for married beneficiaries at the higher income level who claimed standard deductions and lowest for single beneficiaries at the lower income level whose itemized deductions were three times the standard deduction amount. Figure II.5 shows these differences across different income levels.
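As a simplified, hypothetical illustration of this marginal-rate effect (federal tax only, using the 1997 joint-filer rates cited earlier): if a spouse's taxable income alone already reached $41,200, each additional dollar of the injured worker's counterfactual wages would have been taxed at 28 percent rather than 15 percent, reducing counterfactual take-home pay and thereby raising the estimated replacement rate.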
Other single or married beneficiaries whose itemized deductions were two or three times the standard deduction amounts would have replacement rates that would fall between these rates. The number of FECA beneficiaries who would itemize their deductions versus those who would use the standard deduction is unknown. According to IRS data on 1995 income tax filers with adjusted gross incomes between $10,000 and $99,999, of the 50.9 million single taxpayers, 42.7 million (84 percent) did not itemize deductions. Of the 49.0 million taxpayers filing jointly, 25.5 million (52 percent) did not itemize deductions. IRS information shows that as income increases, the percentage of taxpayers itemizing deductions increases. While average amounts of itemized deductions increased with income, these increases were relatively small. Table II.5 shows, for various income groups, the percentage of returns claiming itemized deductions and the average amounts of deductions claimed. (Table II.5 provides data for filers with incomes over $10,000 and under $100,000 because the pay for most FECA beneficiaries would be within this range.) In 1995, standard deduction amounts for single and married filers were $3,900 and $6,550, respectively. In general, if FECA beneficiaries were similar to all individuals filing income tax returns in 1995, FECA beneficiaries with more pay at the time of injury would be more likely to claim itemized deductions in excess of standard deduction amounts than would those with lower pay. In such cases, our replacement rates would be overstated, particularly for beneficiaries at higher income levels. Likewise, if beneficiaries' deductions were equal to or greater than their income (so that they owed no tax), the replacement rate for single and married beneficiaries would be about 73 and 82 percent, respectively, because the relationship between take-home pay (gross pay less retirement and Medicare contributions) and FECA benefits would always be the same. FECA beneficiaries are entitled to augmented benefits if they have one or more dependents. However, information on the specific number of dependents claimed by each beneficiary is not available from FECA automated data. Beneficiaries receiving the dependent benefit allowance who were eligible to claim more than the two exemptions we assumed in our income tax calculations would have lower take-home pay replacement rates than those shown in our analyses. Our analyses also provide examples of how increases in the number of exemptions would decrease replacement rates. In general, each additional exemption decreased the replacement rate by about 1.5 percent. In addition to the above factors, workers' actual take-home pay could be affected by other deductions that we did not consider in our calculations of FECA take-home pay replacement rates because employees have a choice of whether to have their take-home pay reduced by their share of the cost of fringe and other benefits to which they may be entitled. Examples of the deductions not included in our calculations of take-home pay were employees' thrift savings plan contributions, allotments for U.S. savings bonds, and deductions for health, life, or disability insurance. Typically, these deductions are discretionary. In the case of health and life insurance, injured workers are eligible to participate in these federal programs and could have payments for these types of insurance withheld from their workers' compensation benefits.
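The 73 and 82 percent floors follow from simple arithmetic: once no income tax is owed, the FECA benefit and take-home pay are both fixed fractions of gross pay, so their ratio is constant. The sketch below checks this, assuming for illustration a 7 percent retirement contribution and a 1.45 percent Medicare withholding rate; these withholding rates are assumptions for the example, not figures stated in this appendix.

```python
# With no income tax owed, replacement rate = benefit share / (1 - withholding).
withholding = 0.07 + 0.0145  # assumed retirement plus Medicare shares of gross pay
print(f"single (66-2/3 percent benefit): {(2 / 3) / (1 - withholding):.0%}")
print(f"with dependents (75 percent benefit): {0.75 / (1 - withholding):.0%}")
# -> 73% and 82%, matching the floors cited above.
```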
The following are comments on the Department of Labor's letter dated July 20, 1998. 1. Labor said we made an underlying but unstated assumption that the state where beneficiaries currently resided was the state where they lived when injured. Labor added that about 26 percent of the beneficiaries did not live in the state in which they were injured, but the report did not state how many individuals or what income groups moved from states with an income tax to states without an income tax. Regarding Labor's comment that we made an unstated assumption about beneficiaries residing in states where they lived when they were injured, we did not need to make such an assumption. Although our profile information showed that about 26 percent of the beneficiaries currently lived in states that were different from the ones in which they were injured, any differences between beneficiaries' states of residence at the time of injury and their current residences were not relevant to our computation of beneficiaries' current take-home pay replacement rates. 2. Labor suggested that our estimates of take-home pay replaced by FECA benefits be further qualified by adding language stating that we assumed beneficiaries had not received any promotions from the time of injury through the present. Labor said that it was almost certain that some percentage of injured workers would have received promotions, thus lowering the replacement rate. Labor is in effect saying that for at least some workers the take-home pay replacement rates we developed were overstated because our estimated replacement rates were based on pay at the time of injury adjusted to 1997 pay levels and did not take into consideration the possibility that some workers, had they not been injured, would have received promotions. Higher pay rates reflecting assumed promotions, if compared to compensation benefits based on pay at the time of injury, would result in lower replacement rates. While the subject of forgone promotions may be relevant to assessing the effects of work-related injuries on individuals' careers, neither we nor the workers' compensation studies we reviewed in developing our replacement rate methodology considered future promotion potential to be a factor in calculating replacement rates. In addition, although some employees may have been promoted had they not been injured, an assumption by us on which employees would have received one or more promotions would be very speculative. Therefore, we did not consider it necessary to further qualify our estimates of take-home pay replaced by FECA benefits. We have revised our scope and methodology to note the reasons why we did not make an assumption regarding forgone promotions and merit pay increases. 3. In commenting on table II.4, Labor said that our estimated replacement rate of 95.9 percent for a married beneficiary who was paid $60,000 and who took the standard deduction would not be reflective of the norm for that group because higher income individuals tend to itemize deductions. Likewise, Labor noted that our replacement rate of 79.3 percent based on a single person who was paid $20,000 and whose itemized deductions were three times the standard deduction amount would not be reflective of the norm for that group because most single people with pay of $20,000 would not be itemizing deductions. We did not intend the information in table II.4 to be reflective of norms for those groups of individuals.
Rather, we provided these hypothetical examples to show the sensitivity of our replacement rate analyses to different assumptions about individual beneficiaries' standard or itemized deductions. Larry H. Endy, Assistant Director; Edward R. Tasca, Evaluator-in-Charge; Gregory H. Wilmoth, Supervisory Social Science Analyst; and George H. Quinn, Jr., Computer Specialist. In addition to those named above, the following individuals from the General Government Division made important contributions to this report: Wayne Barrett, Senior Evaluator; Cathy Hurley, Senior Computer Specialist; Kim Wheeler, Graphics; and Ernestine Burt, Issue Area Assistant.
Pursuant to a congressional request, GAO provided information on workers' compensation benefits for lost wages provided to workers with job-related injuries under the Federal Employees' Compensation Act (FECA), focusing on: (1) the percentages of take-home pay that FECA benefits replaced for beneficiaries on the long-term rolls who were receiving full benefits; (2) career patterns of workers in selected occupations that were the same as the occupations of FECA beneficiaries; and (3) beneficiaries' characteristics such as current age, age when injured, compensation benefits paid in 1997, and pay at the time of injury adjusted to 1997 pay levels. GAO noted that: (1) for the more than 23,250 beneficiaries on the long-term rolls for whom GAO developed replacement rates, GAO estimated that FECA benefits replaced, on average, over 95 percent of the take-home pay beneficiaries would have received had they not been injured; (2) estimated replacement rates ranged between about 76 and 136 percent; (3) compensation benefits equaled between an estimated 80 and 99 percent of take-home pay for about 70 percent of these beneficiaries and amounted to 100 percent or more in 29 percent of the cases; (4) under assumptions GAO needed to make to compute beneficiaries' income taxes and retirement contributions, replacement rates tended to be higher for beneficiaries who: (a) received higher amounts of pay before their injury; (b) were injured before 1980; (c) received the FECA dependent benefit; and (d) lived in states with an income tax; (5) using different assumptions to show their effect on replacement rates, beneficiaries with more exemptions or deductions for income tax purposes would have had lower replacement rates because these rates generally decrease as taxable income decreases; (6) beneficiaries with a spouse who had taxable income would have higher replacement rates because replacement rates generally increase as spousal income increases; (7) single and married beneficiaries who had no income subject to income taxes while working--generally those with low incomes--would have replacement rates of about 73 and 82 percent, respectively; (8) GAO's analyses showed that about 70 percent of all beneficiaries were over 40 years old when they were injured, and the average adjusted pay of beneficiaries in the selected occupations approximated the average pay of active workers in the same occupations; (9) GAO was unable to determine the extent to which beneficiaries' career prospects were diminished by their on-the-job injuries because GAO's analyses were limited to readily available data; (10) the career patterns of individuals depended on a multitude of personal employment factors as well as the specific jobs in which individuals are employed, according to agency officials familiar with career patterns of workers; (11) about 65 percent of the 30,000 beneficiaries identified by GAO were over 55 years old, and the average age of beneficiaries was 61, as of June 1997; and (12) in June 1997, their annual compensation averaged $26,220, and their average gross pay at the time of injury adjusted to 1997 pay levels was $34,833.
The use of information technology (IT) to electronically collect, store, retrieve, and transfer clinical, administrative, and financial health information has great potential to help improve the quality and efficiency of health care and is important to improving the performance of the U.S. health care system. Historically, patient health information has been scattered across paper records kept by many different caregivers in many different locations, making it difficult for a clinician to access all of a patient's health information at the time of care. Lacking access to these critical data, a clinician may be challenged to make the most informed decisions on treatment options, potentially putting the patient's health at greater risk. The use of electronic health records can help provide this access and improve clinical decisions. As we have previously noted, electronic health records are particularly crucial for optimizing the health care provided to military personnel and veterans. While in military status and later as veterans, many DOD and VA patients tend to be highly mobile and have health records residing at multiple medical facilities within and outside the United States. Making such records electronic can help ensure that complete health care information is available for most military service members and veterans at the time and place of care, no matter where it originates. Key to making health care information electronically available is interoperability—that is, the ability to share data among health care providers. Interoperability enables different information systems or components to exchange information and to use the information that has been exchanged. This capability is important because it allows patients' electronic health information to move with them from provider to provider, regardless of where the information originated. If electronic health records conform to interoperability standards, they can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the necessary information required for optimal care. Paper-based health records—if available—also provide necessary information, but unlike electronic health records, do not provide decision support capabilities, such as automatic alerts about a particular patient's health, or other advantages of automation. Interoperability depends on the use of agreed-upon standards to ensure that information can be shared and used. In the health IT field, standards may govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology. DOD and VA have agreed upon numerous common standards that allow them to share health data. They have also participated in numerous standards-setting organizations tasked to reach consensus on the definition and use of standards. For example, DOD and VA officials serve as members and are actively working on several committees and groups within the Healthcare Information Technology Standards Panel. The panel identifies and harmonizes competing standards and develops interoperability specifications that are needed for implementing the standards. Interoperability can be achieved at different levels. At the highest level, electronic data are computable (that is, in a format that a computer can understand and act on to, for example, provide alerts to clinicians on drug allergies).
At a lower level, electronic data are structured and viewable, but not computable. The value of data at this level is that they are structured so that data of interest to users are easier to find. At still a lower level, electronic data are unstructured and viewable, but not computable. With unstructured electronic data, a user would have to find needed or relevant information by searching uncategorized data. Beyond these, paper records also can be considered interoperable (at the lowest level) because they allow data to be shared, read, and interpreted by human beings. According to DOD and VA officials, not all data require the same level of interoperability, nor is interoperability at the highest level achievable in all cases. For example, unstructured, viewable data may be sufficient for such narrative information as clinical notes. Figure 1 shows the distinction between the various levels of interoperability and examples of the types of data that can be shared at each level. DOD and VA have been working to exchange patient health information electronically since 1998. We have previously noted their efforts on three key projects: The Federal Health Information Exchange (FHIE), begun in 2001 and enhanced through its completion in 2004, enables DOD to electronically transfer service members' electronic health information to VA when the members leave active duty. The Bidirectional Health Information Exchange (BHIE), established in 2004, was aimed at allowing clinicians at both departments viewable access to records on shared patients—that is, those who receive care from both departments. For example, veterans may receive outpatient care from VA clinicians and be hospitalized at a military treatment facility. The interface also allows DOD sites to see previously inaccessible data at other DOD sites. The Clinical Data Repository/Health Data Repository (CHDR) interface, implemented in September 2006, linked the departments' separate repositories of standardized data to enable a two-way exchange of computable health information. These repositories are a part of the modernized health information systems that the departments have been developing—DOD's AHLTA and VA's HealtheVet. In the departments' ongoing initiatives to share information, VA uses its integrated medical information system—the Veterans Health Information Systems and Technology Architecture (VistA)—which was developed in-house by VA clinicians and IT personnel. All VA medical facilities have access to all VistA information. DOD currently relies on AHLTA, which comprises multiple legacy medical information systems that the department developed from commercial software products customized for specific uses. For example, the Composite Health Care System (CHCS), which was formerly DOD's primary health information system, is still in use to capture pharmacy, radiology, and laboratory order management. In addition, the department uses Essentris (also called the Clinical Information System), a commercial health information system customized to support inpatient treatment at military medical facilities. Not all of DOD's medical facilities yet have this inpatient medical system. To facilitate compliance with the act, the Interagency Clinical Informatics Board, made up of senior clinical leaders from both departments who represent the user community, began establishing priorities for interoperable health data between DOD and VA.
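To make the distinction between the levels concrete, the sketch below contrasts computable and unstructured data. It is purely illustrative; the record formats, field names, and drug names are invented for the example and are not DOD or VA data formats.

```python
# Illustrative contrast between interoperability levels (invented data).

# Highest level: computable, structured data that a system can act on.
computable_record = {"allergies": ["penicillin"], "medications": []}

def check_allergy(record, drug):
    """Automated decision support is possible only with computable data."""
    return drug in record["allergies"]

if check_allergy(computable_record, "penicillin"):
    print("ALERT: patient is allergic to penicillin")

# Lower level: viewable but unstructured; a clinician (or a crude text
# search) must locate the relevant fact within free-form narrative.
unstructured_note = "Pt reports rash after penicillin in 1998. No other hx."
print("penicillin" in unstructured_note.lower())  # searching, not computing
```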
In this regard, the board is responsible for determining clinical priorities for electronic data sharing between the departments, as well as what data should be viewable and what data should be computable. Based on its work, the board established six interoperability objectives for meeting the departments’ data sharing needs. According to the former acting director of the interagency program office, DOD and VA consider achievement of these six objectives, in conjunction with capabilities previously achieved (e.g., FHIE, BHIE, CHDR), to be sufficient to satisfy the requirement for full interoperability by September 2009. The six objectives are listed in table 1. Our prior reports on DOD’s and VA’s efforts to develop fully interoperable electronic health records noted their progress and highlighted issues that they needed to address to achieve electronic health record interoperability. Specifically, our July 2008 report noted that the departments were sharing some, but not all, electronic health information at different levels of interoperability. At that time the departments’ efforts to set up the interagency program office were in the early stages. Leadership positions in the office were not permanently filled, staffing was not complete, and facilities to house the office had not been designated. Accordingly, we recommended that the Secretaries of Defense and Veterans Affairs expedite efforts to put in place permanent leadership, staff, and facilities for the program office. The departments agreed with our recommendations and stated that they would take actions to address them. Our January 2009 report noted that the departments had defined plans to further increase their sharing of electronic health information; however, the plans did not contain results-oriented (i.e., objective, quantifiable, and measurable) performance goals and measures that could be used as a basis to track and assess progress. We recommended the departments develop and document such goals and performance measures for the six interoperability objectives, to use as the basis for future assessments and reporting of interoperability progress. DOD and VA agreed with our recommendation and stated that the departments intended to include results-oriented goals in their future plans. DOD and VA continue to take steps toward achieving full interoperability in compliance with applicable standards by September 30, 2009. In this regard, the departments have achieved planned capabilities for three of the interoperability objectives—refine social history data, share physical exam data, and demonstrate initial network gateway operation. The following information further explains DOD’s and VA’s activities with respect to these three objectives. Refine social history data: The departments established this objective to enable DOD to share social history data captured in its electronic health record with VA. These data describe, for example, patients’ involvement in hazardous activities and tobacco and alcohol use. Our review of DOD and VA project documentation confirmed that the departments have achieved sharing of viewable social history data, thus providing VA with additional clinical information on shared patients that clinicians could not previously view. Share physical exam data: The departments established this objective to implement an initial capability for DOD to share with VA the electronic health record information that supports the physical exam process when a service member separates from active military duty. 
To this end, the departments achieved the capability for VA to view DOD's medical exam data through the BHIE interface, allowing VA to view outpatient treatment records, pre- and postdeployment health assessments, and postdeployment health reassessments, which are compiled for the DOD physical exam. Demonstrate initial network gateway operation: DOD and VA want to demonstrate the operation of secure network gateways to support health information sharing between the departments. These gateways are to support health record data exchange, thus facilitating future growth in data sharing. As of early July 2009, the departments reported that five network gateways were operational and that data migration to two of the operational gateways had begun. The departments believed these five gateways satisfy the intent of the objective and will provide sufficient capacity to support health information sharing between DOD and VA as of September 2009. The officials stated, however, that they anticipate needing up to four additional gateways to support future growth in information sharing between the departments at locations and dates that are to be determined. For the remaining three objectives, the departments have partially achieved planned capabilities, with additional work needed to fully meet the objectives. Regarding the objective to expand questionnaires and self-assessment tools, this additional work is intended to be completed by September 2009. With respect to the objectives to expand Essentris and demonstrate initial document scanning, department officials stated that they also intend to meet these objectives; however, additional work will be required beyond September to perform all the activities necessary to meet clinicians' needs for health information. The following information further explains the departments' activities with respect to these objectives. Expand questionnaires and self-assessment tools: The departments intend to provide all periodic health assessment data stored in the DOD electronic health record to VA in a format that associates questions with responses. Health assessment data are collected from two sources: questionnaires administered at military treatment facilities and a DOD health assessment reporting tool that enables patients to answer questions about their health upon entry into the military. Questions relate to a wide range of personal health information, such as dietary habits, physical exercise, and tobacco and alcohol use. Our review of the departments' project documentation determined that they have established the capability for VA to view questions and answers from the questionnaires collected by DOD at military treatment facilities; however, they have not yet established the capability for VA to view information from DOD's health assessment reporting tool. Department officials stated that they intend to establish this additional capability by September 2009. Expand Essentris in DOD: By September 30, 2009, DOD intends to expand Essentris to at least one additional site for each military service and to increase the percentage of inpatient discharge summaries that it shares electronically with VA to 63 percent. According to the acting director of the interagency program office, as of late June 2009, the departments had expanded the system to two Army sites (but not yet to an Air Force or Navy site) and were sharing 58 percent of inpatient discharge summaries.
The acting director stated that the departments expect to meet their goal of sharing 63 percent of inpatient discharge summaries and expand the system to an Air Force and a Navy site by the September deadline. Nonetheless, the official stated that to better meet clinicians' needs, DOD plans to further expand the inpatient medical records system. In this regard, the department has established a revised goal of making the inpatient system operational for 92 percent of DOD's inpatient beds by September 2010. Demonstrate initial document scanning: The departments intend to demonstrate an initial capability to scan service members' medical documents into the DOD electronic health record and share the documents electronically with VA by September 2009. According to the program office acting director, the departments were in the process of setting up an interagency test environment to test the initial capability to query medical documents associated with specific patients as of late June 2009. He stated that the departments expect to begin user testing at up to nine sites by September 2009. According to this official, these activities are expected to demonstrate initial document scanning capability. However, after September, the departments anticipate performing additional work to expand their initial document scanning capability (e.g., completion of user testing and deployment of the scanning capability at all DOD sites). The DOD/VA Interagency Program Office is not yet effectively positioned to serve as a single point of accountability for the implementation of fully interoperable electronic health record systems or capabilities. Since we last reported in January 2009, the departments have made progress in setting up the office by hiring additional staff, although they continue to fill key leadership positions on an interim basis. In addition, the office has begun to demonstrate responsibilities outlined in its charter, but is not yet fulfilling key IT management responsibilities in the areas of performance measurement, scheduling, and project planning. To address the requirements set forth in the act, the departments identified in the September 2008 DOD/VA Information Interoperability Plan a schedule and key activities for setting up the interagency program office. Since we last reported in January 2009, the departments have completed all but one of the activities identified in their schedule. For example, they have completed personnel descriptions for the office's staff and have continued efforts to recruit and hire staff for both government and contractor positions. As of early July 2009, the departments had selected staff members for 10 of 14 government positions, an increase of 8 staff since our last report. The acting director of the office reported that recruitment efforts were underway to fill the remaining 4 positions by late September 2009. Further, all 16 contractor positions had been filled, an increase of 10 contractor staff since we last reported. Table 2 provides the status of selected key activities to establish the interagency program office. However, while the departments have taken action toward hiring a full-time permanent director and a deputy director to lead the office, these positions continue to be filled on an interim basis. As of early July, DOD had selected a candidate for the director position, VA had concurred with the selection, and the candidate's application had been sent to the Office of Personnel Management for approval.
In the meantime, the departments requested and received an extension of the current acting director's appointment until September 30, 2009, or until a permanent official is hired. Further, as of late June 2009, interagency program officials stated that actions were underway to fill the deputy director position and that VA was interviewing candidates for this position. According to the acting director, the departments anticipate making a selection for the deputy director position by the end of July 2009. The January 2009 interagency program office charter describes, among other things, the mission and function of the office associated with attaining interoperable electronic data. The charter further identifies responsibilities of the office in carrying out its mission, in areas such as oversight and management, stakeholder communication, and decision making. The office has taken steps toward fulfilling certain responsibilities described in its charter. For example, the office submitted its first annual report to Congress that summarized the departments' efforts toward achieving full interoperability and the status of key activities completed to set up the office. Further, the office developed 11 standard operating procedures in areas such as program management oversight, strategic communications, and process improvement. However, the office has yet to carry out other key responsibilities identified in its charter that are fundamental to effective IT program management and that would be essential to effectively serving as the single point of accountability. For example, the office has not yet established results-oriented (i.e., objective, quantifiable, and measurable) goals and performance measures for all six interoperability objectives—an action that we previously recommended that DOD and VA undertake. Using results-oriented metrics to measure progress is an important IT program management activity because such metrics can serve as a basis for providing meaningful information on the status of a program. As noted earlier, DOD and VA agreed with our recommendation calling for the establishment of results-oriented performance goals and measures. Further, the program office charter identifies the development of metrics to monitor the departments' performance against interoperability goals as a responsibility of the office. Nonetheless, the office has developed such a goal for only one interoperability objective—expand Essentris in DOD. It has not developed results-oriented goals and measures for the other five objectives, instead stating that such goals and measures will be included in the next version of the DOD/VA Joint Executive Council Joint Strategic Plan (known as the joint strategic plan), which the office expects to complete by December 2009. If the departments complete the development of results-oriented performance goals and measures for their interoperability objectives, they will be better positioned to gauge their progress toward achieving fully interoperable capabilities and improving veterans' health care. Development of an integrated master schedule is also a key IT program management activity, especially given the complexity of the departments' efforts to achieve full interoperability. According to DOD guidance, an integrated master schedule should identify detailed project tasks and the associated start, completion, and interim milestone dates; resource needs; and relationships (e.g., sequence and dependencies) between tasks.
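As an illustration of those schedule attributes (not DOD's actual schedule format or data; the task names, dates, and resources below are invented), a minimal integrated-master-schedule entry might carry fields such as these:

```python
# Minimal sketch of the attributes DOD guidance calls for in an integrated
# master schedule: tasks, start/completion and milestone dates, resources,
# and dependencies. All values here are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class ScheduleTask:
    task_id: str
    name: str
    start: str                                          # planned start date
    finish: str                                         # planned completion date
    milestones: list = field(default_factory=list)      # interim milestone dates
    resources: list = field(default_factory=list)       # staff, funding, facilities
    predecessors: list = field(default_factory=list)    # task_ids this task depends on

tasks = [
    ScheduleTask("1.1", "Stand up interagency test environment",
                 "2009-06-01", "2009-08-15", resources=["test lab"]),
    ScheduleTask("1.2", "User testing at pilot sites",
                 "2009-08-16", "2009-09-30", predecessors=["1.1"]),
]
```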
While the program office has begun to develop an integrated master schedule as required by its charter, the current version does not include the attributes of an effective schedule. For example, the schedule included limited information for three of the six interoperability objectives (i.e., refine social history data, share physical exam data, and expand questionnaires and self-assessment tools). Specifically, the schedule included the name of each objective and a completion date of September 30, 2009. However, the schedule contained no information on tasks to be performed to meet the objectives. Further, the schedule did not reflect start dates, resource needs, or relationships between tasks for any of the six interoperability objectives. Without a complete and detailed integrated master schedule, the departments are missing another key tool that could be useful in determining their progress towards achieving full interoperability. Similarly, development of a project plan is an important activity for IT program management. Industry best practices and IT project management principles stress the importance of sound planning for any project. Inherent in such planning is the development and use of a project management plan that describes, among other factors, the project's scope, resources, and key milestones. The interagency program office charter identifies the need to develop a project plan, but, as of late June 2009, the office had not yet done so. Without a project plan, the departments lack a key tool that could be used to guide their efforts in achieving full interoperability. In discussing these activities, the program office's acting director and former acting director cited three reasons why performance measurement, scheduling, and project planning responsibilities had not been fulfilled. First, they stated that because it has taken longer than anticipated to hire staff, the office has not been able to perform all of its responsibilities. Second, the office's interim leadership and staff have focused their efforts on providing briefings, presentations, and status information to interested parties (e.g., federal agencies and military organizations) on activities the office is undertaking to achieve interoperability, in addition to participating in efforts to develop a strategy for implementation of the Virtual Lifetime Electronic Record, which the President announced in April 2009. Finally, according to the officials, the office waited until June 2009 to begin the process of developing metrics so that they could do so in conjunction with the departments' annual update to the joint strategic plan that is scheduled for completion in late 2009. However, without metrics to monitor progress, a complete integrated master schedule, and a project plan, the interagency program office's ability to effectively provide oversight and management, including meaningful progress reporting on the delivery of interoperable capabilities, is jeopardized. Moreover, in the absence of these critical activities, the office is not effectively positioned to function as the single point of accountability for achieving full interoperability.
However, for two of the six interoperability objectives, the departments subsequently plan to perform significant additional activities that are necessary to meet clinicians’ needs. Further, the departments’ lack of progress in establishing fundamental IT management capabilities that are specific responsibilities of the interagency program office contributes to uncertainty about the extent to which the departments will progress toward achievement of full interoperability by the deadline. While the departments have generally made progress toward making the program office operational, the office has not yet completed a project plan or a detailed integrated master schedule. Without these important tools, the office is limited in its ability to effectively manage and provide meaningful progress reporting on the delivery of interoperable capabilities that are intended to improve the quality of health care provided to our nation’s veterans. To better improve management of DOD’s and VA’s efforts to achieve fully interoperable electronic health record systems, including satisfaction of the departments’ interoperability objectives, we recommend that the Secretaries of Defense and Veterans Affairs direct the Director of the DOD/VA Interagency Program Office to establish a project plan and a complete and detailed integrated master schedule. In written comments on a draft of this report, the DOD official who is performing the duties of the Assistant Secretary of Defense (Health Affairs) and the Acting Director of the DOD/VA Interagency Program Office concurred with our findings and recommendation. The VA Chief of Staff also provided written comments, in which the department concurred with our recommendation. In this regard, DOD and VA stated that they will provide the necessary information for the DOD/VA Interagency Program Office to establish a project plan and to complete a detailed integrated master schedule. If the recommendation is properly implemented, it should better position DOD and VA to effectively measure and report progress in achieving full interoperability. Beyond its concurrence with the recommendation, the VA Chief of Staff stated that the department disagreed with the report’s characterization of the six interoperability objectives and expressed concern about the report projecting that the objective to demonstrate initial document scanning would not be completed by the September 30, 2009 deadline. Specifically, VA stated that our report portrayed the six interoperability objectives as the necessary steps to achieving full interoperability, even though the departments consider the objectives to be just one component of achieving full interoperability, along with existing data exchange capabilities. However, in discussing the objectives, we stated that according to the former acting director of the interagency program office, the departments consider achievement of the six objectives, in conjunction with capabilities previously achieved (e.g., FHIE, BHIE, CHDR), to be sufficient to satisfy the requirement for full interoperability by September 2009. With respect to the objective to demonstrate initial document scanning, the Chief of Staff stated that our report projects that the objective will not be met by the September deadline. 
However, while our report states that, according to the acting program office director, additional work will be required beyond September to perform all the activities necessary to meet clinicians' needs related to document scanning, we did not report that the departments would not meet this objective by the September deadline. In fact, our report noted that, according to this official, the departments expect to begin user testing at up to nine sites by September 2009, and that these activities are expected to demonstrate initial document scanning capability. Nonetheless, we revised our report as appropriate, in an attempt to more clearly reflect the departments' intent with regard to this objective. DOD, VA, and the interagency program office also provided technical comments on the draft report, which we incorporated as appropriate. The departments' and the DOD/VA Interagency Program Office's comments are reproduced in appendixes II, III, and IV, respectively. We are sending copies of this report to the Secretaries of Defense and Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To evaluate the Department of Defense's (DOD) and Veterans Affairs' (VA) progress toward developing electronic health record systems or capabilities that allow for full interoperability of personal health care information, we reviewed our previous work on DOD and VA efforts to develop health information systems, interoperable health records, and interoperability standards to be implemented in federal health care programs. We obtained and analyzed agency documentation and interviewed program officials to determine DOD's and VA's progress towards achieving full interoperability by September 30, 2009, as required by the National Defense Authorization Act for Fiscal Year 2008. We also analyzed information gathered from agency documentation to identify interoperability objectives, milestones, and target dates for ongoing and planned interoperability initiatives whose target dates extend beyond September 30, 2009. In addition, through interviews with cognizant DOD and VA officials, we obtained and assessed information regarding the departments' plans for achieving full interoperability of electronic health information. To determine whether the interagency program office is positioned to serve as a single point of accountability for developing and implementing electronic health records, we obtained and reviewed program office documentation, including its charter and standard operating procedures. We compared the responsibilities identified in the charter with actions taken by the office to exercise the responsibilities. Additionally, we interviewed interagency program office officials to determine the status of filling leadership and staffing positions within the office. We conducted this performance audit at DOD and VA locations in the greater Washington, D.C., metropolitan area from April through July 2009, in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, key contributions to this report were made by Mark Bird, Assistant Director; Rebecca Eyler; Lee McCracken; Michael Redfern; J. Michael Resser; Kelly Shaw; Eric Trout; and Merry Woo.
The National Defense Authorization Act for Fiscal Year 2008 required the Department of Defense (DOD) and the Department of Veterans Affairs (VA) to accelerate their exchange of health information and to develop systems or capabilities that allow for interoperability (generally, the ability of systems to exchange data) by September 30, 2009. It also required compliance with federal standards and the establishment of a joint interagency program office to function as a single point of accountability for the effort. Further, the act directed GAO to semiannually report on the progress made in achieving these requirements. For this third report, GAO evaluated (1) the departments' progress and plans toward sharing fully interoperable electronic health information that comply with federal standards and (2) whether the interagency program office is positioned to function as a single point of accountability. To do so, GAO analyzed agency documentation on project status and conducted interviews with agency officials. DOD and VA have taken steps to meet six objectives that they identified for achieving full interoperability in compliance with applicable standards (see table) by September 30, 2009. Specifically, the departments have achieved planned capabilities for three of the objectives--refine social history data, share physical exam data, and demonstrate initial network gateway operation. For the remaining three objectives, the departments have partially achieved planned capabilities, with additional work needed to fully meet the objectives. Regarding the objective to expand questionnaires and self-assessment tools, this additional work is intended to be completed by the deadline. The departments' officials have stated that they intend to meet the objectives to expand DOD's inpatient medical records system and demonstrate initial document scanning; however, additional work will be required beyond September to perform all the activities necessary to meet clinicians' needs for health information. The DOD/VA Interagency Program Office is not yet effectively positioned to function as a single point of accountability for the implementation of fully interoperable electronic health record systems or capabilities between DOD and VA. While the departments have made progress in setting up the office by hiring additional staff, they continue to fill key leadership positions on an interim basis. Further, while the office has begun to demonstrate responsibilities outlined in its charter, it is not yet fulfilling key information technology management responsibilities in the areas of performance measurement (as GAO previously recommended), project planning, and scheduling, which are essential to establishing the office as a single point of accountability for the departments' interoperability efforts.
During calendar year 2010, HHS had 6,697 employees who were appointed under sections 209(f) or (g). Most of these employees served at the National Institutes of Health (NIH), the Food and Drug Administration (FDA), or the Centers for Disease Control and Prevention (CDC), while the remaining employees served in the Office of the Secretary or within other operating divisions, as shown in figure 1. Congress provided EPA with the authority to use Title 42 to employ up to 30 persons at any one time through fiscal year 2015. At the time of our study, EPA had appointed 17 fellows in its Office of Research and Development from 2006 to 2011 under section 209(g), and all 17 fellows remained with EPA. Appointments for the three fellows hired in 2006 have been renewed for another 5-year term. Figure 2 shows the cumulative onboard Title 42 staff, by new hire or conversion. According to EPA officials, the agency has identified mission-critical personnel needs and is actively recruiting to fill the 13 remaining authorized Title 42 positions. The agency has no plans to use authority under section 209(f) at this time, but may consider it in the future. Officials told us EPA would need to develop guidance for implementing section 209(f) before using the authority. Title 42 fellows at EPA lead scientific research initiatives and are considered experts in their scientific disciplines; some manage or direct a division or office. According to EPA officials, Title 42 provides two important tools EPA needs to achieve its mission: (1) the flexibility to be competitive in recruiting top experts who are also sought after by other federal agencies, private industry, and academia; and (2) the appointment flexibility needed to align experts with specific skills to changing scientific priorities. EPA officials stated it is not the agency's intention to hire a fellow long-term under Title 42, but rather to employ the individual as long as a priority remains high. Annual salaries for Title 42 fellows at EPA range from approximately $153,000 to $216,000, with an average salary of about $176,000 and a median salary of about $171,000. As shown in table 4, 15 of the 17 EPA fellows had salaries exceeding Executive Level IV. In December 2010, EPA began a pilot of using market salary data to estimate what Title 42 candidates could earn in positions outside of government given their education, experience, professional standing, and other factors. EPA used the market salary data to inform salary negotiations for the five fellows appointed since the implementation of the pilot. According to EPA officials, the market salary pilot concludes in December 2012, and its effect will be analyzed at that time. In appointing Title 42 fellows, EPA generally followed appointment guidance described in its Title 42 Operations Manual. EPA could, however, improve its procedures for resolving potential conflicts of interest. We reviewed case files for 10 EPA Title 42 employees; in two of the cases, potential conflict of interest situations arose after appointment, resulting in part from the agency's failure to ensure that Title 42 employees followed agreed-upon ethics requirements. EPA acknowledged it could improve its postappointment ethics oversight and reported it has plans to ensure that Title 42 employees follow requirements, such as submitting confirmation of stock divestitures to its General Counsel, and other ethics requirements.
However, at the time of our review, EPA had not provided us with implementation plans or timeframes for its improved oversight. To address this issue, we recommended that EPA, as part of its efforts to improve postappointment ethics oversight, develop and document a systematic approach for ensuring Title 42 employees are compliant with ethics requirements after appointment. EPA disagreed with our recommendation, citing certain actions already taken, such as a plan to require proof of compliance with ethics agreements. We acknowledged EPA's plans to address these issues, but maintained the recommendation was needed to ensure implementation because the two ethics issues we reported occurred over 2 years ago. Our legal opinion, issued on July 11, 2012, responded to a Congressional request for our views on whether there are statutory caps on pay for consultants and scientists appointed pursuant to 42 U.S.C. §§ 209(f) or (g). We concluded that an appropriations law provision enacted as part of the Fiscal Year 1993 Labor-HHS-Education Appropriations Act established a permanent appropriation cap on the pay of individuals appointed on a limited-time basis under 42 U.S.C. §§ 209(f) or (g) at agencies funded through that Act. With regard to individuals not subject to this cap, we concluded further that two other pay limitations set forth in Title 5 of the U.S. Code that we considered do not apply to appointments made pursuant to 42 U.S.C. §§ 209(f) or (g). Federal pay systems are extremely complex, and we encountered challenges in attempting to resolve ambiguities arising from pay laws enacted at different times over nearly 70 years. Sections 209(f) and (g) of title 42 were enacted in 1944 and have not been amended since that time. There have, however, been many significant changes in related laws and regulations that were relevant to our consideration of the issues raised. Consequently, we conducted extensive research of legislative history to aid in our understanding of congressional actions and the interplay of the laws addressed below, and examined regulations issued pursuant to these provisions over the last 65 years. We also solicited the views of HHS, the Office of Personnel Management (OPM), and the EPA. The appropriations for each fiscal year from 1957 through 1993 included a cap on pay for "consultants or individual scientists appointed for limited periods of time" (underscoring added) pursuant to 42 U.S.C. §§ 209(f) or (g). The appropriations for fiscal year 1993 established a permanent cap on such compensation, providing that pay may be set at rates not to exceed "the per diem rate equivalent to the maximum rate payable for senior-level positions under 5 U.S.C. § 5376." This cap currently limits base pay to $155,500. Our review of the legislative history of the first appropriation to contain the limit indicated that it was enacted, in light of other restrictions in law on compensation, as an increase over then-existing pay authority. We considered the meaning of the phrase "for limited periods of time," which has appeared in all of the relevant appropriations provisions from 1956 to 1993. In 1956, when this language was first included in the appropriations law, the Public Health Service's regulations included time limitations on employment. Thus, the time limit generally applied to all consultant appointments made under section 209(f) beginning in 1947, when the regulation containing the limit was first promulgated, unless "special circumstances" led the administrator to approve an extension.
Further, the limit was in effect in 1956, when the first appropriations law provision referring to consultants appointed for "limited periods of time" was enacted. However, this time limitation was removed from the regulations in 1966. 31 Fed. Reg. 12,939 (Oct. 5, 1966). Therefore, the appropriations pay cap applied to all section 209(f) consultants from 1956 until HHS changed the regulations in 1966, allowing for the hiring of consultants for indefinite periods. Although the regulations implementing section 209(f) no longer included a time limitation on the employment of special consultants after 1966, the appropriations provisions for 1967 and subsequent years, using virtually identical language each year, imposed a cap only on pay of "consultants or individual scientists appointed for limited periods of time pursuant to [42 U.S.C. §§ 209(f) or (g)]." The appropriations restriction did not impose any cap on pay for those consultants whose appointments were not limited in time. As a result, after the 1966 regulations were promulgated and continuing to the present, HHS has employed two categories of consultants: those appointed for limited periods of time, to whom the pay cap applies, and consultants appointed for indefinite periods, to whom the pay cap does not apply. Importantly, the appropriations pay restriction is applicable only to payments made from Labor-HHS-Education Appropriations Acts. Three components of the Public Health Service (the Agency for Toxic Substances and Disease Registry, the Food and Drug Administration, and the Indian Health Service) are funded by appropriations acts other than the Labor-HHS-Education Appropriations Act, and are not covered by a restriction on funds appropriated under that Act. Thus, we concluded that there is a cap of Executive Level IV on the pay of consultants and scientists employed for limited periods of time pursuant to 42 U.S.C. §§ 209(f) or (g) in all but three of the Public Health Service agencies. With respect to individuals not covered by the appropriations cap, we examined the applicability of two pay limitations found in title 5: section 3109, which limits pay for consultants "procured" on a temporary or intermittent basis, and section 5373, which limits pay fixed by administrative action. Section 3109, enacted in 1946, establishes specific legal parameters, including a pay cap and a limit on appointment duration, governing the employment of experts or consultants whose appointment must be authorized by an "appropriation or other statute." That pay cap applies unless a different cap is authorized by the appropriation or another statute. Beginning in 1956, congressional actions signaled that section 3109 did not apply to section 209(f) appointments. From 1956 and continuing until 1993, Congress enacted provisions yearly in appropriations acts that set a cap (which may or may not have been higher than that found in section 3109 in any given year) for all those appointed pursuant to sections 209(f) or (g) for a limited period of time and funded out of the Labor-HHS-Education Appropriations Act. From fiscal year 1970 until the provisions became permanent in fiscal year 1993, the appropriations acts for HHS contained separate provisions placing identical compensation limits on experts and consultants subject to 5 U.S.C. § 3109, and on consultants and scientists appointed for limited periods of time pursuant to 42 U.S.C. §§ 209(f) or (g). Identical provisions would have been unnecessary if Congress believed that the limitations in 5 U.S.C.
§ 3109 would apply to 42 U.S.C. §§ 209(f) and (g) consultants or scientists. Further, in 1992, Congress added subsection (d) to section 3109. It directs OPM to prescribe regulations necessary to administer section 3109. OPM subsequently issued regulations which provide that section 3109 does not apply to the appointment of experts or consultants under other authorities. 5 C.F.R. § 304.101. OPM also informed us that it "does not consider the cap under 5 U.S.C. § 3109 to apply to consultants under 42 U.S.C. § 209(f)." This interpretation is entitled to considerable weight since OPM is the agency charged with administering section 3109. Based on our review, we found that Congress had not spoken directly on the applicability of section 3109 to the authorities in 42 U.S.C. §§ 209(f) and (g) and that OPM's interpretation was reasonable. Therefore, we concluded that the provisions of section 3109 do not apply to consultants employed pursuant to 42 U.S.C. § 209(f). The other pay cap that we considered is found in section 5373 of title 5 of the United States Code, which places limits on pay fixed by administrative action. Pay fixed by administrative action refers to the various pay-setting authorities in which pay is determined by the agency instead of pursuant to pay rates under otherwise applicable statutory pay systems, such as the General Schedule. Congress first enacted section 5373 in 1964, 20 years after it passed sections 209(f) and (g). Section 5373 limits pay set by administrative action to no more than the rate for level IV of the Executive Schedule, and lists specific pay authorities which are excepted from coverage. The rate for level IV of the Executive Schedule is currently $155,500 per year. 42 U.S.C. §§ 209(f) and (g) are not among the authorities explicitly excepted from section 5373. We looked at multiple issues in determining that the section 5373 cap does not apply to 42 U.S.C. §§ 209(f) or (g) appointees. We found no evidence that Congress had considered the section 209 authorities when the administrative pay cap was enacted. Sections 209(f) and (g) allow for compensation "without regard to the Classification Act of 1923." We parsed laws enacted in 1923 and later to see if this language should be interpreted to create an exemption from section 5373, which was enacted over 40 years after the Classification Act of 1923 and after several additional pay laws had also been enacted. Finally, we looked at congressional action in appropriations passed from 1964 through 1993, and in extending section 209 authority to EPA in 2005 and in 2009. These congressional actions led us to believe that Congress did not intend for the 5 U.S.C. § 5373 pay cap to apply to consultants and scientists hired pursuant to 42 U.S.C. §§ 209(f) and (g). Given the evidence of how Congress viewed the authority, we did not object to HHS's interpretation that the 1993 appropriations cap is the only restriction on its authority to compensate individuals appointed under 42 U.S.C. §§ 209(f) or (g). In conclusion, with respect to the first issue, the 1993 appropriations act unequivocally limits the pay of consultants and scientists appointed for limited periods of time pursuant to 42 U.S.C. §§ 209(f) or (g) at agencies that are funded by Labor-HHS-Education Appropriations Acts. With regard to the two title 5 limitations, we concluded that these pay limitations do not apply to appointments made pursuant to 42 U.S.C. §§ 209(f) or (g).
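Taken together, these conclusions amount to a simple decision rule for which statutory base-pay cap, if any, applies to a section 209(f) or (g) appointee. The short Python sketch below is ours, not GAO's; the function name, the boolean inputs, and the 2012 dollar figure are illustrative assumptions drawn from the facts stated above, not an official formula.

from typing import Optional

# Base-pay cap from the FY 1993 Labor-HHS-Education Appropriations Act:
# the per diem equivalent of the maximum senior-level rate under
# 5 U.S.C. § 5376, equal to $155,500 at the time of the 2012 opinion.
APPROPRIATIONS_CAP = 155_500

def title42_pay_cap(limited_period: bool, funded_by_labor_hhs_act: bool) -> Optional[int]:
    """Return the statutory base-pay cap GAO identified for a 42 U.S.C.
    § 209(f)/(g) appointee, or None if no cap applies.

    funded_by_labor_hhs_act is False for the three Public Health Service
    components funded outside the Labor-HHS-Education Appropriations Act
    (ATSDR, FDA, and the Indian Health Service) and for EPA.
    """
    if limited_period and funded_by_labor_hhs_act:
        # Permanent appropriations cap on consultants and scientists
        # appointed for limited periods of time.
        return APPROPRIATIONS_CAP
    # GAO concluded that neither 5 U.S.C. § 3109 nor § 5373 applies to
    # section 209(f)/(g) appointments, so no other statutory cap applies.
    return None

# Examples: a limited-period NIH consultant is capped; an indefinite-period
# consultant, or a limited-period FDA or EPA appointee, is not.
print(title42_pay_cap(limited_period=True, funded_by_labor_hhs_act=True))   # 155500
print(title42_pay_cap(limited_period=True, funded_by_labor_hhs_act=False))  # None
print(title42_pay_cap(limited_period=False, funded_by_labor_hhs_act=True))  # None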
The statutory pay provisions we analyzed, as mentioned earlier, were enacted over the course of nearly 70 years, and are in different federal pay systems. As one court has observed, although some pay systems are "linked" to one another, they have not been "fastidiously integrated" to achieve uniform federal compensation policies. In this case, the issues raised – in particular, the applicability of the two title 5 limitations to the title 42 authority to hire special consultants and fellows – reflect the difficulty of applying distinct statutory schemes to determine whether specific pay limits apply. Thus, if Congress desires upper pay limits for appointments under sections 209(f) and (g), it may wish to consider amending these provisions to specifically establish such limits. Both HHS and EPA have used Title 42 to recruit and retain highly skilled, in-demand personnel to government service in order to execute their missions. At the same time, HHS's lack of complete data and guidance on its use of Title 42 may limit the agency's ability to strategically manage its use and provide oversight of the authority. Effective monitoring of the use of Title 42 is particularly important in light of HHS's increasing use of the authority and the number of employees earning salaries higher than most federal employees. EPA generally followed its Title 42 policies and has incorporated some modifications to improve its appointment and compensation practices; however, EPA's current ethics guidance does not sufficiently ensure Title 42 employees meet ethics requirements after appointment. EPA acknowledged it could improve its post-appointment ethics oversight and reported it has plans to ensure that Title 42 employees send its General Counsel confirmation of stock divestitures and comply with other ethics requirements. However, at the time of our review, EPA had not provided us with implementation plans or timeframes. Although its plans appear to be prudent steps for addressing the specific issues that arose in the cases we reported, it will be important for EPA to implement them as soon as possible to mitigate the risk of future potential conflict of interest issues. Going forward, our recommendations to HHS and EPA to strengthen certain practices under Title 42, if implemented, should help improve the management and oversight of this special hiring authority. Chairman Pitts, Ranking Member Pallone, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions you or others may have at this time. For further information regarding this statement, please contact Robert Cramer, Managing Associate General Counsel, at (202) 512-7227 or cramerr@gao.gov, or Robert Goldenkoff, Director, Strategic Issues, at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Trina Lewis, Assistant Director; Shea Bader, Analyst-in-Charge; Dewi Djunaidy; Karin Fangman; and Sabrina Streagle. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
HHS and EPA have been using special hiring authority provided under 42 U.S.C. §§ 209(f) and (g), referred to in this testimony as Title 42, to appoint individuals to fill mission critical positions in science and medicine and, in many cases, pay them above salary limits usually applicable to federal government employees. GAO was asked to review the extent to which HHS and EPA have (1) used authority under Title 42 to appoint and compensate employees since 2006, and (2) followed applicable agency policy, guidance, and internal controls for appointments and compensation. GAO was also asked to determine if there are statutory caps on pay for consultants and scientists appointed pursuant to Title 42. This testimony is based on GAO's July 2012 report (GAO-12-692) and a legal opinion on whether there are statutory caps on pay for consultants and scientists appointed pursuant to 42 U.S.C. §§ 209(f) or (g) (B-323357). The Department of Health and Human Services' (HHS) use of special hiring authorities under 42 U.S.C. §§ 209(f) and (g) has increased in recent years, from 5,361 positions in 2006 to 6,697 positions in 2010, an increase of around 25 percent. Nearly all HHS Title 42 employees work in one of three HHS operating divisions: the National Institutes of Health (NIH), the Food and Drug Administration (FDA), and the Centers for Disease Control and Prevention (CDC). Title 42 employees at HHS serve in a variety of areas, including scientific and medical research support and in senior, director-level leadership positions. At NIH, one-quarter of all employees, and 44 percent of its researchers and clinical practitioners, were Title 42 appointees. HHS reported that Title 42 enables the agency to quickly fill knowledge gaps so medical research can progress and to respond to medical emergencies. HHS further reported Title 42 provides the compensation flexibility needed to compete with the private sector. In 2010, 1,461 of HHS's Title 42 employees earned salaries over $155,500. The highest base pay amount under the General Schedule – the system under which most federal employees are paid – was $155,500 in 2010. Under certain types of Title 42 appointments, statutory pay caps may apply. 2010 was the last year of HHS data available at the time of GAO's review. HHS does not have reliable data to manage and provide oversight of its use of Title 42. Moreover, HHS did not consistently adhere to certain sections of its Title 42 section 209(f) policy. For example, the policy states that 209(f) appointments may only be made after non-Title 42 authorities have failed to yield a qualified candidate, but GAO found few instances where such efforts were documented. HHS has recently issued updated 209(f) policy that addresses most of these issues. HHS is developing agencywide policy for appointing and compensating employees under Title 42 section 209(g), but it is not clear whether the policy will address important issues such as documenting the basis for compensation. Since 2006, the Environmental Protection Agency (EPA) has used section 209(g) to appoint 17 employees. Fifteen of EPA's 17 Title 42 employees earned salaries over $155,500 in 2010. EPA appointment and compensation practices were generally consistent with its guidance; however, EPA does not have post-appointment procedures in place to ensure Title 42 employees meet ethics requirements to which they have previously agreed. In its legal opinion, GAO concluded that an appropriations pay cap applies to certain, but not all, employees appointed under 42 U.S.C.
§§ 209(f) and (g). If Congress desires upper pay limits for appointments not currently subject to the pay cap, it may wish to consider legislation to specifically establish such limits. In the report on which this testimony is based, GAO made recommendations to HHS to improve oversight and management of its Title 42 authority and a recommendation to EPA to improve enforcement of its ethics requirements. HHS agreed with GAO’s recommendations, while EPA disagreed, citing actions already taken. GAO acknowledged EPA’s plans to address these issues, but maintained the recommendation was needed to ensure implementation.
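As a quick arithmetic check (ours, not the report's) on the roughly 25 percent growth figure cited above, the change from 5,361 to 6,697 Title 42 positions works out to:

\[
\frac{6{,}697 - 5{,}361}{5{,}361} \;=\; \frac{1{,}336}{5{,}361} \;\approx\; 0.249 \;\approx\; 25\%.
\]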
In addition to 90-day petition findings, 12-month status reviews, listings, and delistings, other key categories of ESA decisions include critical habitat designations, recovery plans, section 7 consultations, and habitat conservation plans (see table 1). Service staff at headquarters, eight regional offices, and 81 field offices are largely responsible for implementing the ESA. Field office staff generally draft ESA decisions; listing, delisting, and critical habitat decisions are forwarded to regional and headquarters offices for review. Service headquarters forwards listing decisions to Interior's Office of Assistant Secretary for Fish and Wildlife and Parks for review, although it is the Service Director who generally approves the final decisions. The Assistant Secretary of the Interior for Fish and Wildlife and Parks makes final critical habitat decisions, after considering the Service's recommendation and economic, national security, and other factors. Although the Service is responsible for making science-based decisions, Interior takes responsibility for applying policy and other considerations to scientific recommendations. In most cases, ESA decisions must be based at least in part on the best available scientific information (see table 1). To ensure that the agency is applying the best available scientific information, the Service consults with experts and considers information from federal and state agencies, academia, other stakeholders, and the general public; some ESA decisions are both "peer reviewed" and reviewed internally to help ensure that they are based on the best available science. Nevertheless, because of differing interpretations of "best available scientific information" and other key concepts from the ESA such as "substantial" and "may be warranted," conservation advocacy groups have expressed concerns that ESA decisions are particularly vulnerable to political interference from officials within Interior. While Ms. MacDonald was at Interior in two positions from July 7, 2002, through May 1, 2007, she reviewed more than 200 ESA decisions. After a May 9, 2007, congressional hearing, Interior's Deputy Secretary directed the Service Director to examine all work products produced by the Service and reviewed by Ms. MacDonald that could require additional review because of her involvement. Service Director Hall said the selection process should include any type of ESA decision made during Ms. MacDonald's time in office. He delegated the selection process to the regional directors and granted them considerable discretion in making their selections for potential revision. The regions generally applied three criteria to identify decisions for potential revision: (1) Ms. MacDonald influenced the decision directly, (2) the scientific basis of the decision was compromised, and (3) the decision was significantly changed and resulted in a potentially negative impact on the species. Using these criteria, the Service ultimately selected eight decisions for further review to determine whether each decision warranted revision. After further review, the Service concluded that seven of the eight decisions warranted revision (see table 2). Several types of decisions were excluded from the Service's review of decisions that may have been inappropriately influenced. First, while the Service focused solely on Ms. MacDonald, we found that other Interior officials also influenced some ESA decisions. Ms.
MacDonald was the primary reviewer of most ESA decisions during her tenure, but other Interior officials were also involved. For example, in the Southeast, after reviewing a petition to list the Miami blue butterfly on an emergency basis, Service officials at all levels supported a recommendation for listing the species. Citing a Florida state management plan and the existence of a captive-bred population, however, an Interior official other than Ms. MacDonald determined that emergency listing was not warranted, and the blue butterfly was instead designated as a candidate, not a listed species. Second, the Service excluded policy decisions that limited the application of science, focusing instead only on those decisions where the scientific basis of the decision may have been compromised. Under Ms. MacDonald, several informal policies were established that influenced how science was to be used when making ESA decisions. For example, a practice was developed that Service staff should generally not use or cite recovery plans when developing critical habitat designations. Recovery plans can contain important scientific information that may aid in making a critical habitat designation. One Service headquarters official explained, however, that Ms. MacDonald believed that recovery plans were overly aspirational and included more land than was absolutely essential to the species' recovery. Under another informal policy, the ESA wording "occupied by the species at the time it is listed" was narrowly applied when designating critical habitat. Service biologists were restricted to interpreting occupied habitat as only that habitat for which they had records showing the species to be present within specified dates, such as within 10 years of when the species was listed. In the case of the proposed critical habitat for the bull trout, Ms. MacDonald questioned Service biologists' conclusions about the species' occupied habitat. As a result, some proposed critical habitat areas were removed, in part because occupancy by the species could not be ascertained. Third, the Service excluded decisions that were changed but not significantly or to the point of negative impact on the species. For example, under Ms. MacDonald's influence, subterranean waters were removed from the critical habitat designation for Comal Springs invertebrates. Service staff said they believed that the exclusion of subterranean waters would not significantly affect the species because aboveground waters were more important habitat. They also acknowledged that not much is known about these species' use of subterranean waters. Finally, we identified several other categories of decisions that, in some or all cases, were excluded from the Service's selection process. For example, in some cases that we identified, decisions that had already been addressed by the courts were excluded from the Service's selection process; decisions that could not be reversed were also excluded. In the case of the Palos Verdes blue butterfly, Navy-owned land that was critical habitat was exchanged after involvement by Ms. MacDonald in a section 7 consultation. As a result, the habitat of the species' last known wild population was destroyed by development, and therefore reversing the decision would not have been possible. Additionally, decisions were excluded from the Service's selection process if it was determined that review would not be an efficient use of resources or if it could not be conclusively determined that Ms. MacDonald altered the decision.
Several Service staff cited instances where they believed that Ms. MacDonald had altered decisions, but because the documentation was not clear, they could not ascertain that she was responsible for the changes. Additionally, decisions that were implicitly attributed to Ms. MacDonald were excluded from the selection process. Service staff described a climate of "Julie-proofing" where, in response to continual questioning by Ms. MacDonald about their scientific reasoning, they eventually learned to anticipate what might be approved and wrote their decisions accordingly. While the Service's May 2005 informal guidance had no substantive effect on the processing of 90-day petition findings, the Service still faces several other challenges in processing these petitions. Stakeholders have expressed concern that the wording of the May 2005 guidance was slanted more toward refuting petitioners' listing claims, rather than encouraging Service biologists to use information to both support and refute listing petitions; consequently, they feared that a greater number of negative 90-day petition findings would result. According to a senior Service official, it was never the Service's position that information collected to evaluate a petition could be used to support only one side, specifically, only to refute the petition. Rather, its position is and has been that additional collected information can be used to either support or refute information presented in the petition; any additional information is not, however, to be used to augment or supplement a "weak" petition by raising new issues not already presented. According to the ESA, the petition itself must present "substantial scientific or commercial information indicating that the petitioned action may be warranted." Our survey of Service biologists responsible for drafting the 90-day petition findings issued from 2005 through 2007 found that the biologists generally used additional information, as applicable, to support as well as refute information in the petitions. The Service is facing several challenges with regard to the processing of 90-day petition findings. In particular, the Service finds it difficult to issue decisions within the desired 90-day time frame and to adjust to various court decisions issued in the last 4 years. In our survey of 44 Service biologists who prepared 54 90-day petition findings from 2005 through 2007, we found that additional information collected to evaluate the petitions was generally used, as applicable, to both support and refute information in the petitions, including during the 18-month period when the May 2005 informal guidance was being used. The processing of 90-day petition findings is governed by the ESA, federal regulations, and various guidance documents distributed by the Service. To direct the implementation of the law and regulations, and to respond to court decisions, the Service issues guidance, which is implemented by Service staff in developing 90-day petition findings. This guidance can come in the form of formal policies and memorandums signed by the Service Director, or informal guidance not signed by the Director but distributed by headquarters to clarify what information should be used and how it should be used in processing petitions. In July 1996, the Service issued a formal policy, called Petition Management Guidance, governing 90-day petition findings and 12-month status reviews.
A component of this document was invalidated by the U.S. District Court for the District of Columbia in June 2004. According to senior Service officials, since 2004 the Service has distributed a series of instructions through e-mails, conference calls, and draft guidance documents to clarify the development of 90-day petition findings. For example, in May 2005, the Service distributed via e-mail an informal guidance document that directed its biologists to create an outline listing additional information—that is, information not cited or referred to in a petition—that refuted statements made in the petition; biologists were not to list in the outline any additional information that may have clarified or supported petition statements. We identified a universe of 67 90-day petition findings issued by the Service from 2005 through 2007. To focus on how the Service used information to list or delist U.S. species, we surveyed Service biologists responsible for drafting 54 of the 67 90-day petition findings. Of the 54 90-day petitions included in our survey, 40 were listing petitions, and 14 were delisting petitions; 25 resulted in positive 90-day petition findings, and 29 resulted in negative 90-day petition findings (see table 3). In November 2006, the Service distributed new draft guidance on the processing of 90-day petitions, which specified that additional information in Service files could be used to refute or support issues raised in the petition but not to "augment a weak petition" by introducing new issues. For example, if a 90-day petition to list a species claimed that the species was threatened by predation and habitat loss, the Service could not supplement the petition by adding information describing threats posed by disease. The May 2005 informal guidance was thus in use until this November 2006 guidance was distributed, or approximately 18 months. Our survey results showed that in most cases, the additional information collected by Service biologists when evaluating 90-day petitions was used to support as well as refute information in petitions (see table 4). According to the Service biologists we surveyed, additional information was used exclusively to refute information in 90-day petitions in only 8 of 54 cases. In these 8 cases, the biologists said, this approach was taken because of the facts, circumstances, and the additional information specific to each petition, not because they believed that it was against Service policy to use additional information to support a petition. In particular, with regard to the 4 petitions processed during May 2005 through November 2006 for which additional information was used exclusively to refute petition information, the biologists stated that the reason they did not use information to support claims made in the petitions was that either the claims themselves did not have merit or the information reviewed did not support the petitioner's claims. Three of the four biologists responsible for these petitions also stated that they did not think it was against Service policy to use additional information to support issues raised in a petition. The fourth biologist was uncertain whether it was against Service policy to support issues raised in a petition. While the May 2005 informal guidance did not have a substantive effect on the Service's processing of 90-day petitions, the Service still faces challenges in processing 90-day petitions in a timely manner and in responding to court decisions issued since 2004.
None of the 90-day petition findings issued from 2005 through 2007 were issued within the desired 90-day time frame. During this period, the median processing time was 900 days, or about 2.5 years, with a range of 100 days to 5,545 days (more than 15 years). According to Service officials, almost all of their ESA workload is driven by litigation. Petitioners have brought a number of individual cases against the Service for its failure to respond to their petitions in a timely manner. This issue presents continuing challenges because the Service's workload increased sharply in the summer of 2007, when it received two petitions to list 475 and 206 species, respectively. The Service is also facing several management challenges stemming from a number of court decisions since 2004. According to senior Service officials, the Service currently has no official guidance on how to develop 90-day petition findings, partially because of a 2004 court decision invalidating part of the Service's 1996 Petition Management Guidance. The Service's official 1996 Petition Management Guidance contained a controversial provision that treated 90-day petitions as "redundant" if a species had already been placed on the candidate list via the Service's internal process. In 2004, a federal district court issued a nationwide injunction striking down this portion of the guidance. Senior Service officials stated that the Service rescinded use of the document in response to this court ruling and began an iterative process in 2004 to develop revised guidance on the 90-day petition process. According to these officials, guidance was distributed in piecemeal fashion, dealing with individual aspects of the process in the form of e-mails, conference-call discussions, and various informal guidance documents. Our survey respondents indicated that the lack of official guidance created confusion and inefficiencies in processing 90-day petitions. Specifically, survey respondents were confused about what types of additional information they could use to evaluate 90-day petitions—whether they were limited to information in Service files, or whether they could use information solicited from their professional contacts to clarify or expand on issues raised in the petition. Several survey respondents also stated that unclear and frequently changing guidance resulted in longer processing times for 90-day petition findings, which was frustrating because potentially endangered species decline further as the Service determines whether they are worthy of protection. Further complicating matters, 31 of the 44 biologists we surveyed, or 70 percent, had never drafted a 90-day petition finding before. According to a senior Service official, the Service is planning to issue official guidance on how 90-day petition findings should be developed to eliminate confusion and inconsistencies. With regard to the use of outside information in evaluating petitions, the Service must continue to adapt to a number of court decisions dating back to 2004 holding that the Service should not solicit information from outside sources in developing 90-day petition findings. A December 2004 decision by the U.S. District Court for the District of Colorado stated that the Service's "consideration of outside information and opinions provided by state and federal agencies during the 90-day review was overinclusive of the type of information the ESA contemplates to be reviewed at this stage . . .
, those petitions that are meritorious on their face should not be subject to refutation by information and views provided by selected third parties solicited by [the Service]." Since then, several other courts have reached similar conclusions. Despite the consistency of various courts' holdings, 25 out of the 54 90-day petition findings in our survey, or 46 percent, were based in part on information from outside sources, according to Service biologists. The Service's May 2005 informal guidance directed biologists to use information in Service files or "other information," which the guidance did not elaborate on. The Service's November 2006 draft guidance stated that biologists should identify and review "readily available information within Service files" as part of evaluating information contained in petitions. The biologists we surveyed expressed confusion and lack of consensus on the meaning of the terms "readily available" and "within Service files." Some Service officials were concerned that if information solicited from outside sources could not be considered in developing 90-day petition findings, many more 90-day petitions would be approved and moved forward for in-depth 12-month reviews, further straining the Service's limited resources. In addition, the Service must continue to adapt to a number of court decisions since 2004 on whether it is imposing too high a standard in evaluating 90-day petitions. This issue—essentially, what level of evidence is required at the 90-day petition stage and how this evidence should be evaluated—goes hand in hand with the issue of using additional information outside of petitions in reaching ESA decisions. In overturning three negative 90-day petition findings, three recent court decisions in 2006 and 2007 have held, in part, that the Service imposed too high a standard in evaluating the information presented in the petitions. These court decisions have focused on the meaning of key phrases in the ESA and federal regulations, such as "substantial" information, "a reasonable person," and "may be warranted." In 2006, the U.S. District Court for the District of Montana concluded that the threshold necessary to pass the 90-day petition stage and move forward to a 12-month review was "not high." Again, some Service officials are concerned that these recent court decisions may lead to approval of more 90-day petitions, thus moving them forward for in-depth 12-month reviews and straining the Service's limited resources. Beyond these general challenges, the Service's 90-day petition finding in a recent case involving the Sonoran Desert population of the bald eagle has come under severe criticism by the U.S. District Court for the District of Arizona. The court noted that Service scientists were told in a conference call that headquarters and regional Service officials had reached a "policy call" to deny the 90-day petition and that "we need to support [that decision]." A headquarters official made this statement even though the Service had been unable to find information in its files refuting the petition and even though at least some Service scientists had concluded that listing may be warranted. The court stated that the Service participants in a July 18, 2006, conference call appeared to have received "marching orders" and were directed to find an analysis that fit a 90-day finding that the Sonoran Desert population of the bald eagle did not constitute a distinct population segment.
The court stated that "these facts cause the Court to have no confidence in the objectivity of the agency's decision-making process in its August 30, 2006, 90-day finding." In contrast, in a September 2007 decision, the U.S. District Court for the District of Idaho upheld the Service's "not substantial" 90-day petition findings on the interior mountain quail distinct population segment. Of the eight U.S. species delisted from 2000 through 2007 because of recovery, the Service reported that recovery criteria were completely met for five species and partially met for the remaining three species. When the delistings were first proposed, however, the respective recovery criteria for only two of the eight species had been completely met. Although the ESA does not specifically require the Service to meet recovery criteria before delisting a species, courts have held that the Service must address the ESA's five threat factors for listing/delisting, to the maximum extent practicable, in developing recovery criteria. For each of the delisted species that we reviewed, the Service determined that the five threat factors listed in the ESA no longer posed a significant enough threat to the continued existence of the species to warrant continued listing as threatened or endangered. Table 5 summarizes whether the recovery criteria for the eight species delisted from 2000 through 2007 were partially or completely met at the proposed rule stage and the final rule stage. At the proposed rule stage, only two of the eight species had completely met their respective recovery criteria; that fraction increased to five of eight at the final rule stage. The period between the proposed rules and the final rules ranged from less than 1 year for the gray wolf's western Great Lakes distinct population segment to just over 8 years for the bald eagle. For the species where the criteria were not completely met before final delisting, the Service indicated that the recovery criteria were outdated or otherwise not feasible to achieve. For example, the recovery plan for the Douglas County population of Columbian white-tailed deer was originally developed in 1976 and later updated in 1983. The recovery plan recommended maintaining a minimum population of 500 animals distributed in suitable, secure habitat within Oregon's Umpqua Basin. The Service reported it was not feasible to demonstrate, without considerable expense, that 500 specific deer live entirely within secure lands managed for their benefit, because most deer move between public and private lands. Even though this specific recovery criterion was not met, the Service indicated that the species warranted delisting because of the overall increase in its population and amount of secure habitat. The West Virginia northern flying squirrel, whose final delisting decision was pending at the time of our review, offers an example of a species proposed for delisting even though the recovery criteria had not been met. The species was proposed for delisting on December 19, 2006. The squirrel's recovery plan was developed in 1990 and amended in 2001 to incorporate guidelines for habitat identification and management in the Monongahela National Forest, which supports almost all of the squirrel's populations. The Service asserted that, other than the 2001 amendment, the West Virginia northern flying squirrel recovery plan is outdated and no longer actively used to guide recovery.
This was in part because the squirrel's known range at the time of proposed delisting was much wider than the geographic recovery areas designated in the recovery plan and because the recovery areas have no formal or regulatory distinction. In support of its delisting decision, the Service indicated that the squirrel population had increased and that suitable habitat had been expanding. The Service drew these conclusions largely on the basis of a 5-year review—an ESA-mandated process to ensure the continued accuracy of a listing classification—completed in 2006, and not on the basis of the squirrel's 1990 recovery plan. The Service also reported that the recovery plan's criteria did not specifically address the five threat factors. According to the Service, most recovery plan criteria have focused on demographic parameters, such as population numbers, trends, and distribution. While the Service acknowledges that these types of criteria are valid and useful, it also cautions that, by themselves, they are not adequate for determining a species' status. The Service reports that recovery can be accomplished via many paths and may be achieved even if not all recovery criteria are fully met. A senior Service official noted that the quality of recovery plans varies considerably, and some criteria may be outdated. Furthermore, Service officials also noted, recovery plans are fluid documents, and their respective criteria can be updated as new threat information about a particular species becomes available. While the ESA does not specifically require the Service to meet recovery criteria before delisting a species, courts have held that it must address each of the five threat factors to the maximum extent practicable when developing recovery criteria. In a 2006 report, we provided information on 107 randomly sampled recovery plans covering about 200 species. Specifically, we found that only 5 of the 107 reviewed recovery plans included recovery criteria that addressed all five threat factors. We recommended that in recovery planning guidance, the Service include direction that all new and revised recovery plans contain either recovery criteria to demonstrate consideration of all five threat factors or a statement about why it is not practicable to include such criteria. In January 2008, in response to our recommendation, the Director of the Service issued a memorandum requiring all new and revised recovery plans to include criteria addressing each of the five threat factors. In conclusion, Mr. Chairman, questions remain about the extent to which Interior officials other than Ms. MacDonald may have inappropriately influenced ESA decisions and whether broader ESA policies should be revisited. Under the original direction from Interior's Deputy Secretary and the three selection criteria followed by the Service, a variety of ESA decisions were excluded from the selection process. Broadening the scope of the review might have resulted in the selection of more decisions, but it is unclear to what extent. The Service recognizes the need for official guidance on how 90-day petition findings should be developed to eliminate confusion and inconsistencies. The guidance will need to reflect the Service's implementation of recent court decisions on how far the Service can go in collecting additional information to evaluate 90-day petitions and what standards should be applied to determine if a petition presents "substantial" information.
The need for clear guidance is more urgent than ever with the Service's receipt in the summer of 2007 of two petitions to list 681 species. Assuming successful implementation of the Service's January 2008 directive that recovery criteria be aligned with the five threat factors in the ESA, we believe that future delistings will more likely meet recovery criteria while also satisfying the ESA's delisting requirements based on the five threat factors. We provided Interior with a draft of this testimony for review and comment. However, no comments were provided in time for them to be included as part of this testimony. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the Committee may have at this time. For further information, please contact Robin M. Nazzaro at (202) 512-3841 or nazzaror@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Jeffery D. Malcolm, Assistant Director; Eric A. Bachhuber; Mark A. Braza; Ellen W. Chu; Alyssa M. Hundrup; Richard P. Johnson; Patricia M. McClure; and Laina M. Poon. We are reporting on (1) what types of decisions, if any, were excluded from the U.S. Fish and Wildlife Service's (Service) selection process of Endangered Species Act (ESA) decisions that were potentially inappropriately influenced; (2) the extent to which the Service's May 2005 informal guidance affected the Service's decisions on petitions to list or delist species; and (3) the extent to which the Service determined, before delisting, whether species met recovery criteria outlined in recovery plans. To address our first objective, we interviewed the Director of the Service, all eight regional directors, and key regional staff. Also, we conducted site visits, phone interviews, or both with ESA staff from 10 field offices in five regions that were actively engaged in ESA decision making. Further, we reviewed documentation developed by Service headquarters, regions, and field offices about the selection process and the status of the Service's review. In addition, we reviewed Service policies and procedures for making ESA decisions and reviewed other species-specific information. To address our second objective, we identified 67 90-day petition findings issued by the Service from 2005 through 2007 and conducted structured telephone interviews of current and former Service biologists responsible for drafting 90-day petition findings issued in that time frame. Of the 67, we excluded 13 petition findings from our survey: 5 had been overturned by the courts or were being redone as a result of a settlement agreement; 3 involved up-listing already protected species from threatened to endangered; 2 involved ongoing litigation; 2 involved species located outside the United States; and 1 involved a petition to revise a critical habitat designation for a species that was already protected. In total, we surveyed 44 biologists responsible for drafting 54 90-day petition findings. To identify the lead author responsible for drafting the 90-day petition findings in our survey, we contacted the field office supervisor at the office where the petition finding was drafted. The field office supervisor directed us to the biologist who was the lead author of the finding or, if that person was not available, a supporting or supervising biologist.
Of the 44 biologists we surveyed, 39 were lead biologists in drafting the finding, 3 were supervising biologists, and 2 were supporting biologists. Between February 1, 2008, and February 6, 2008, we pretested the survey with 5 biologists from three regions, and we used their feedback to refine the survey. The five 90-day petition findings we selected for the pretest were all published in 2004 to most closely approximate, but not overlap with, our sample. They represented a balance between listing and delisting petitions, substantial and not substantial findings, and types of information used in evaluating the petition as stated in the Federal Register notice. We conducted the pretests through structured telephone interviews to ensure that (1) the questions were clear and unambiguous, (2) terms were precise, and (3) the questions were not sensitive and could be answered candidly as phrased. A GAO survey specialist also independently reviewed the questionnaire. Our structured interview questions were designed to obtain information about the process the Service uses in making 90-day petition findings under the ESA and the types of information used to draft each 90-day petition finding. Specifically, the structured questions focused on information that was not cited or referred to in a listing or delisting petition but was either internal to Service files or obtained from sources outside the Service. In each of these categories, we asked whether the information was used to support, refute, or raise new issues not cited in the petition. Table 6 summarizes the key questions we are reporting on that we asked during the structured interviews. We also asked other questions in the survey that we do not specifically report on; these questions do not appear in the table below. Our survey results demonstrated in several ways that the May 2005 guidance did not have a substantive effect on the outcomes of 90-day petition findings. First, Service biologists who chose not to use information outside of petitions to support claims made in the petitions said that Service policy had no influence on this choice. Second, when asked what guidance they followed in drafting their 90-day petition finding, very few respondents cited the May 2005 guidance, indicating that although this guidance may have been followed to create an internal agency outline, it did not have a substantive effect on the finding itself. Third, in response to our concluding, open-ended question, none of the biologists mentioned specific reservations about the May 2005 guidance. To address our third objective, we generated a list of all of the Service's final delisting decisions published as rules in the Federal Register (and corresponding proposed delisting rules) from calendar years 2000 through 2007, to determine the number of species removed from the list of threatened and endangered species by the Service. As of December 31, 2007, the Service had issued final rules resulting in the delisting of 17 species. Of those 17 delisted species, 2 species were delisted because they had been declared extinct, 6 species were delisted because the original data used to list the species were in error, and 9 species were delisted as a result of recovery. Of the 9 recovered species, we excluded the Tinian monarch, a species located in a U.S. territory, which reduced the number of species we looked at to 8 U.S. species delisted because of recovery.
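As a quick consistency check on the tallies above (our arithmetic, not GAO's), the survey universe, the delisting total, and the recovered-species count all reconcile:

\[
67 - (5 + 3 + 2 + 2 + 1) = 54, \qquad 2 + 6 + 9 = 17, \qquad 9 - 1 = 8.
\]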
To examine whether the Service met recovery criteria outlined in recovery plans before delisting species, we obtained and reviewed the Service's recovery plans for each of those 8 delisted species and also examined the Federal Register proposed and final delisting rules. This information indicated whether the Service believed that it had met the criteria laid out in the recovery plans for the 8 delisted U.S. species. Finally, we also reviewed the proposed rule to delist the West Virginia northern flying squirrel; as of March 31, 2008, the Service had not finalized this proposed rule. We conducted this performance audit from August 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

[Appendix table, partially recoverable: the 90-day petition findings issued from 2005 through 2007, identified by species and Federal Register citation, from 70 Fed. Reg. 3504 (Jan. 25, 2005) through 72 Fed. Reg. 59979 (Oct. 23, 2007). The species column survives for only some entries, including the Arizona brome and nodding needlegrass; the roundtail chub (lower Colorado River basin distinct population segment) and headwater chub; the gray wolf (northern Rocky Mountain distinct population segment); the American dipper (Black Hills, South Dakota, population); sixteen insect species from the Algodones Sand Dunes, Imperial County, California; the northern water snake (upper tidal Potomac River population); the longnose sucker (Monongahela River population); and Kenk's amphipod, the northern Virginia well amphipod, and a copepod. A final group of findings, including the mountain whitefish in the Big Lost River, Idaho, is labeled "Overturned or settled as a result of litigation," citing Western Watersheds Project v. Norton, Civ. No. 06-127, 2007 WL 2827375 (D. Idaho Sept. 6, 2007); Forest Guardians v. Kempthorne, Civ. No. 06-02115 (D.D.C.), settlement filed June 29, 2007; Center for Biological Diversity v. Kempthorne, Civ. No. 07-0038, 2008 WL 659822 (D. Ariz. Mar. 6, 2008); Center for Biological Diversity v. United States Fish and Wildlife Service, Civ. No. 07-4347 (N.D. Cal.), settlement filed Feb. 21, 2008; Center for Biological Diversity v. Kempthorne, Civ. No. 06-04186, 2007 WL 163244 (N.D. Cal. Jan. 19, 2007); and Western Watersheds Project v. Kempthorne, Civ. No. 07-00409 (D. Idaho), complaint filed Jan. 25, 2008.]

In April 2006, an anonymous complaint prompted the Department of the Interior's (Interior) Office of Inspector General to begin investigating Deputy Assistant Secretary Julie MacDonald's activities and her involvement with Endangered Species Act (ESA) decisions. On March 23, 2007, Interior's Inspector General reported on its investigation of allegations that Ms. MacDonald was involved in unethical and illegal activities related to ESA decision making. The investigation did not reveal illegal activity but concluded that Ms. MacDonald violated federal rules by sending internal agency documents to industry lobbyists. On May 1, 2007, Ms. MacDonald resigned from her position as Deputy Assistant Secretary. On May 9, 2007, a congressional hearing titled Endangered Species Act Implementation: Science or Politics? (House Hearing No. 110-24) was held. On May 22, 2007, Interior's Deputy Secretary, Lynn Scarlett, directed Interior's U.S. Fish and Wildlife Service (Service) Director Dale Hall to examine all work products that were produced by the Service, reviewed by Ms. MacDonald, and could require additional review because of her involvement. In response to the directive, the Service identified eight decisions for further review. Our review examined (1) the Service's selection process for determining which ESA decisions were potentially inappropriately influenced by former Deputy Assistant Secretary MacDonald and the status of the Service's review of these decisions, and (2) the types of decisions, if any, excluded from the Service's selection process. To conduct this work, we interviewed the Director of the Service, all eight regional directors, and key regional ESA staff; conducted site visits, phone interviews, or both with ESA staff from 10 field offices in five regions that were actively engaged in ESA decision making; reviewed documentation developed by Service headquarters, regions, and field offices about the selection process and the current status of the Service's review; and reviewed Service policies and procedures for making ESA decisions and other species-specific documentation. The purpose of the ESA is to conserve threatened and endangered species and the ecosystems upon which they depend. The ESA requires listing a species as endangered if it faces extinction throughout all or a significant portion of its range and as threatened if it is likely to become endangered in the foreseeable future. The ESA has provisions to protect and recover species after they are listed, and it prohibits the "taking" of listed animal species.
Many ESA decisions must be based, at least in part, on the best available scientific information. Interior is responsible for implementing the ESA for freshwater and terrestrial species. Interior has delegated many of its ESA responsibilities to the Service. Service staff at headquarters, regional, and field offices are largely responsible for implementing the various ESA provisions. Field office staff are generally responsible for initiating ESA decision-making actions; listing and critical habitat decisions are forwarded to regional and headquarters offices for review. Service headquarters forwards listing decisions to Interior's Office of Assistant Secretary for Fish and Wildlife and Parks for review; the Service Director generally approves final decisions. For critical habitat, the Service forwards its recommendations to Interior's Office of Assistant Secretary for Fish and Wildlife and Parks, which applies economic, national security, and other factors before it approves a final determination. While in office from July 2002 until May 2007, Interior's former Deputy Assistant Secretary MacDonald reviewed more than 200 ESA decisions. Dale Hall was sworn in on October 12, 2005, as Service Director. In February 2006, he met with Ms. MacDonald and other Interior officials about their review and involvement in the Service's ESA decisions. Interior's Office of Inspector General also conducted an investigation of allegations that Ms. MacDonald's involvement resulted in the withdrawal of the Service's decision to list the Sacramento splittail as threatened. The investigation concluded that Ms. MacDonald stood to gain financially by the decision and therefore should have recused herself. On November 30, 2007, Senator Wyden sent a letter to the Inspector General requesting an investigation of potential inappropriate involvement by Ms. MacDonald on 18 ESA decisions. Two more species were subsequently added to this investigation. Director Hall met with the regional directors to communicate Deputy Secretary Scarlett's directive to examine decisions reviewed by Ms. MacDonald that could require revision because of her involvement. Director Hall delegated the selection process to the regional directors and asked that they consult their field offices. Director Hall said the selection process should include any type of ESA decision made during Ms. MacDonald's time in office. The regions were given the month of June to select decisions for potential revision. Director Hall granted the regions considerable discretion in making their selections, deferring to them to submit decisions for potential revision. The regions' approaches varied: in one region, staff met to discuss decisions; in another, a systematic process was undertaken, including developing memos of instruction, reviewing decision files, and holding conference calls with field offices. Regional offices incorporated input from their field offices to varying degrees; a few interacted little or not at all with field staff in making their selections. Four of the eight regions reviewed documents from their decision files; many regional staff stated that they already knew which decisions might warrant revision without reviewing their records. The universe of decisions reviewed varied slightly by region: some regions reviewed decisions made through 2006; others reviewed decisions made during 2007. The regions generally applied three criteria to identify decisions for potential revision: 1. Ms. MacDonald influenced the decision directly; 2. the scientific basis of the decision was compromised; and 3. the decision was significantly changed and resulted in a potentially negative impact on the species.
At the end of the selection process, the regional offices discussed the results with Director Hall and submitted memos to the Director, listing 11 decisions for potential revision. One of the decisions, the Mexican garter snake, was subsequently withdrawn from the list after further discussion determined that the decision was made internally by Service headquarters. On July 12, 2007, Director Hall sent a memo to Deputy Secretary Scarlett reporting that 10 decisions submitted by the regions would be reviewed. Region 1 subsequently withdrew two of its decisions—including the marbled murrelet—after determining that neither decision involved the inappropriate use of science, but rather involved policy interpretations. On July 20, 2007, Director Hall sent a memo to Deputy Secretary Scarlett revising the original list of decisions based on the region 1 withdrawals, changing the total from 10 to 8. Of the 8 decisions, 6 were critical habitat designations. Among the changes attributed to Ms. MacDonald were reducing acreage to about 1 percent of the scientific recommendation, reducing a species' range area by about half, and reversing a finding to "not substantial." Director Hall has stated that revising the decisions is a high priority. The Service has proposed amended rules for three decisions. The Service is planning to initiate one status review on or before May 1, 2008, and propose one revised critical habitat rule on or before August 29, 2008. The Service is determining time frames for addressing two other decisions. The Service is not planning to revise one decision because it concluded that the critical habitat designation represents a scientifically supportable and reasonable range for the species. Service actions to address the decisions include the following: publishing an amended proposed critical habitat rule on November 28, 2007 (72 Fed. Reg. 67428); negotiating a settlement agreement with the plaintiffs regarding a date for issuing proposed and final revisions of the critical habitat designation for one species; proposing a revised critical habitat rule on or before August 29, 2008, and issuing the final revised critical habitat rule on or before August 31, 2009; initiating a status review on or before May 1, 2008, and issuing a 12-month finding on or before June 1, 2010; withdrawing a proposed delisting and publishing an amended proposed listing rule on November 7, 2007 (72 Fed. Reg. 62992), with critical habitat to be revisited when the listing is final and funds are available; and publishing a proposed rule describing revised critical habitat on February 28, 2008 (73 Fed. Reg. 10860). Following criterion 1, the Service excluded decisions reviewed by Interior officials other than Ms. MacDonald. While Ms. MacDonald was the primary reviewer of most ESA decisions, other Interior officials were also involved. Example: Miami blue butterfly. The Service received a petition to list the Miami blue butterfly on an emergency basis and reviewed the species' status to determine if such listing was warranted. After review, Service officials at all levels supported a recommendation for listing. Citing a Florida state management plan and existence of a captive-bred population, however, an Interior official besides Ms. MacDonald determined that emergency listing was not warranted, and the blue butterfly was designated as a candidate instead of a listed species. Following criterion 2, the Service excluded policy decisions that limited the application of science. Under Ms. MacDonald, several informal policies were established that influenced how science was to be used when making ESA decisions.
Petition guidance: Service staff cited a practice whereby they were limited to using only the information contained in a petition when making a decision. They could, however, use information external to the petition if such information would support a decision that listing was not warranted. Recovery plans: A practice developed under which Service staff generally could not use or cite recovery plans when developing critical habitat designations. Defining occupancy: Under Ms. MacDonald, the ESA wording "occupied by the species at the time it is listed" was narrowly applied when designating critical habitat. After the Service proposed critical habitat for the bull trout, Ms. MacDonald questioned Service biologists' conclusions about the species' occupied habitat. As a result, some proposed critical habitat areas were removed, in part because occupancy by the species could not be ascertained. Following criterion 3, the Service excluded decisions that were changed but not significantly or to the point of negative impact on the species. 1. In some cases, decisions that already had been addressed by the courts were excluded from the Service's selection process. 2. Decisions that could not be reversed were excluded from the Service's selection process. Example: Palos Verdes blue butterfly. Navy-owned land that was critical habitat for the Palos Verdes blue butterfly was exchanged after involvement by Ms. MacDonald in a section 7 consultation, and the habitat of the species' last known wild population was destroyed by development. Had the habitat not already disappeared, Service field staff believe the decision would warrant revisiting. 3. In some cases, decisions were excluded from the Service's selection process where revising the decision was determined to be an inefficient use of resources because it would not significantly alter the species' recovery. Example: Spikedace and loach minnow. Ms. MacDonald limited the fishes' critical habitat to those areas that had been occupied within the previous 10 years, reducing the total area of critical habitat designated. Service staff did not believe the change would significantly alter the fishes' recovery and therefore felt that revisiting the decision would not be an efficient use of resources. 4. Decisions were excluded from the Service's selection process where it could not be conclusively determined that Ms. MacDonald changed the decision. Service staff cited instances where they believed that Ms. MacDonald had changed decisions, but because the documentation was not clear, it could not be determined for certain if the changes could be attributed to her. 5. Decisions that were implicitly attributed to Ms. MacDonald were excluded from the Service's selection process. Service staff described a climate under Ms. MacDonald where they were continually questioned about their scientific reasoning; staff said they learned to anticipate what would be approved—primarily with regard to critical habitat designations—and wrote their decisions accordingly. 6. Decisions were excluded from the Service's selection process where Ms. MacDonald did not change the final outcome but may have inappropriately affected supporting scientific information in the decision. For example, after a federal court required the Service to re-evaluate one species' threatened status, Ms. MacDonald raised concerns about a statistical approach the Service had applied in analyzing the species' population. In the final decision, she edited information regarding the statistical analysis.
Service staff said that these edits could make it harder to use the scientific analysis in the future.
The Department of the Interior's (Interior) U.S. Fish and Wildlife Service (Service) is generally required to use the best available scientific information when making key decisions under the Endangered Species Act (ESA). Controversy has surrounded whether former Deputy Assistant Secretary Julie MacDonald may have inappropriately influenced ESA decisions by basing decisions on political factors rather than scientific data. Interior directed the Service to review ESA decisions to determine which decisions may have been unduly influenced. ESA actions include, among others, 90-day petition findings, 12-month listing or delisting findings, and recovery planning. The Service distributed informal guidance in May 2005 on the processing of 90-day petitions. Recovery plans generally must include recovery criteria that, when met, would result in the species being delisted. GAO examined three separate issues: (1) what types of decisions, if any, were excluded from the Service's review of decisions that may have been inappropriately influenced; (2) to what extent the Service's May 2005 informal guidance affected 90-day petition findings; and (3) to what extent the Service has, before delisting species, met recovery criteria. GAO interviewed Service staff, surveyed Service biologists, and reviewed delisting rules and recovery plans. Interior did not provide comments in time for them to be included in this testimony. Several types of decisions were excluded from the Service's review of decisions that may have been inappropriately influenced. Using the following selection criteria, the Service identified eight ESA decisions for potential revision: (1) whether Ms. MacDonald influenced the decision directly, (2) whether the scientific basis of the decision was compromised, and (3) whether the decision was significantly changed and resulted in a potentially negative impact on the species. The Service excluded (1) decisions made by Interior officials other than Ms. MacDonald, (2) policy decisions that limited the application of science, and (3) decisions that were changed but not significantly or to the point of negative impact on the species. The Service's May 2005 informal guidance had no substantive effect on 90-day petition findings. In May 2005, Service headquarters distributed a guidance document via e-mail to endangered-species biologists that could have been interpreted as instructing them to use additional information collected to evaluate a 90-day petition only to refute statements made therein. GAO's survey of 90-day petition findings issued by the Service from 2005 through 2007 found that biologists used additional information collected to evaluate petitions to both support and refute claims made in the petitions, as applicable, including during the 18-month period when the May 2005 informal guidance was being used. However, GAO found that the Service faces various other challenges in processing petitions, such as making decisions within 90 days and adjusting to recent court decisions. None of the 90-day petition findings issued from 2005 through 2007 were issued within the desired 90-day time frame. During these years, the median processing time was 900 days, or about 2.5 years, with a range of 100 days to 5,545 days (over 15 years). Additionally, the Service faces several challenges in responding to court decisions issued since 2004. For example, the Service has not yet developed new official guidance on how to process 90-day petitions after the courts invalidated a portion of the prior guidance.
Finally, of the eight species delisted because of recovery from 2000 through 2007, the Service determined that recovery criteria were completely met for five species and partially met for the remaining three species because some recovery criteria were outdated or otherwise not feasible to achieve. When the delistings were first proposed, however, only two of the eight species had completely met all their respective recovery criteria. Although the ESA does not explicitly require the Service to follow recovery plans when delisting species, courts have held that the Service must address the ESA's listing/delisting threat factors to the maximum extent practicable when developing recovery criteria. In 2006, GAO reported that the Service's recovery plans generally did not contain criteria specifying when a species could be recovered and removed from the endangered species list. Earlier this year, in response to GAO's recommendation, the Service issued a directive requiring all new and revised recovery plans to include criteria addressing each of the ESA's listing/delisting threat factors.
Medicare’s physician fee schedule includes payments for over 7,000 services, such as office visits, surgical procedures, and tests. Most services are defined as discrete and stand-alone in that they may be furnished independently of other services, but a small number of services are defined as supplemental because they are commonly furnished along with other primary services. Services under the Medicare fee schedule are described and defined by the AMA’s Current Procedural Terminology (CPT) Editorial Panel, and each service is assigned a five-digit identifier, or code. The CPT Editorial Panel revises and modifies CPT codes based largely on suggestions from specialty societies and the CPT Editorial Panel’s Advisory Committee. Code revisions require research from both CPT staff and specialty society members who assist the CPT Editorial Panel in its work. According to AMA officials, the CPT process generally takes about 14 months from the time potential codes are first identified by specialty societies to the final revision or development of a new code. CMS relies on the AMA/Specialty Society Relative Value Scale Update Committee (RUC)—an expert panel that includes members from national physician specialty societies—to develop and update on an ongoing basis the resource estimates upon which fees are based. Specialty societies identify services for review, gather data on resource use, and make proposals to the RUC on resource estimates for services. Physician work estimates are developed using vignettes of each service furnished to a typical patient, where the specific physician activities are described for three phases—before, during, and after the service. Practice expense estimates considered direct—clinical labor (that is, the nurse’s or technician’s time), equipment, and supplies—are developed similarly for each of these phases. (App. II provides an example of a vignette and practice expense estimates for one service.) The RUC evaluates proposals submitted by the specialty societies and makes recommendations for final consideration by CMS. The RUC meets three times a year, and, on average, reviews approximately 300 codes annually. The RUC also assists CMS in the Five-Year Review process—a review of fees for all services that the agency is required by law to conduct at least every 5 years to account for changes in medical practice. While CMS may reject or modify the RUC’s recommendations, from 1993 through 2009, the agency accepted over 90 percent of the recommendations pertaining to 3,600 new and revised CPT codes. CMS may at times also make changes to fees for services independent of RUC recommendations. Efficiencies in multiple services that are furnished together may be factored into fees primarily in two ways. First, the RUC and specialty societies generally attempt to consider whether other services are typically furnished along with the service they are reviewing to avoid duplication of the resources associated with physician work and practice expenses that may be incurred only once. For example, certain activities included in the practice expense component, such as preparing the patient before a procedure and cleaning the room after the procedure, are performed only once when two services are furnished together. However, the RUC has not reviewed every service; therefore, estimates are outdated for a large portion of services and may no longer reflect current technology and medical practice. 
For example, resource estimates for certain image-guided surgeries were developed when a surgeon performed the surgery and a radiologist performed the related imaging, whereas in current medical practice, a single physician tends to do both tasks. Further, for supplemental services, the RUC ensures that the physician work and practice expense resources required before and after the service are not duplicated. Second, CMS has, independent of the RUC and specialty societies, implemented its own policies to recognize efficiencies occurring in certain services. CMS has a long-standing policy called a multiple procedure payment reduction (MPPR) to avoid duplicate payments for portions of practice expenses that are incurred only once when two or more surgical services are furnished together by the same physician during the same operating session. CMS expanded the MPPR to include certain diagnostic imaging services in 2006. Under the MPPR policy, the full fee is paid for the more expensive service, but a reduction is applied to the fees for each subsequent service. Generally, a 50 percent reduction is applied to fees for surgical services performed during the same operating session and a 25 percent reduction is applied to fees for certain imaging services that are furnished together. By law, updates to fees are required to be budget neutral—that is, they cannot cause Medicare's aggregate payments to physicians to increase or decrease by more than $20 million. As a result, any "savings" realized from reducing the fees for particular services do not accrue to the Medicare program but are redistributed across all services, resulting in a slight increase to the fees for all other services. In some instances, Congress has overridden budget neutrality to ensure that payment changes result in savings to Medicare. For example, through the Deficit Reduction Act of 2005 (DRA), Congress mandated that savings resulting from the MPPR for certain imaging services that were furnished together be exempted from budget neutrality. As a result, annual savings of approximately $96 million were not redistributed across all services, but accrued as savings to the Medicare program in 2006. CMS has taken steps to recognize efficiencies for services commonly furnished together through the use of the RUC process and the MPPR, but has not targeted services with the greatest potential for savings, and the RUC process depends on specialty societies. The MPPR is limited in scope because it does not apply to a broad range of services, nor does it capture efficiencies occurring in the physician work component. CMS stated that it is reviewing the efforts of a workgroup recently created by the RUC to identify efficiencies in services that are commonly furnished together. In March 2006, MedPAC criticized the RUC for recommending more increases than decreases in resource estimates, largely because the RUC had focused on services that specialty societies believed were undervalued. In response, the RUC established the Five-Year Review Identification Workgroup in October 2006 to identify potentially misvalued services. The workgroup used several criteria to identify these services, one of which was to examine services commonly furnished together to determine if such services should be bundled to reduce duplication in the physician work component. The workgroup requested data from CMS on services commonly furnished together in 2007.
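The MPPR arithmetic described above can be made concrete with a short sketch. This is a minimal illustration, not CMS's actual pricing logic: the fee amounts and function name are hypothetical, and only the discount structure—the full fee for the highest-priced service and a fixed percentage reduction for each subsequent service—comes from the policy as described in this report.

```python
# Minimal sketch of the MPPR fee arithmetic. The fee amounts are
# hypothetical; only the discount structure -- full fee for the
# highest-priced service, a fixed reduction for each subsequent
# service -- follows the policy described in this report.

def apply_mppr(fees, reduction):
    """Total payment for services furnished in the same session.

    fees      -- fee-schedule amounts for each service
    reduction -- fraction cut from every service after the
                 highest-priced one (e.g., 0.50 for surgical
                 services, 0.25 for certain imaging services)
    """
    ordered = sorted(fees, reverse=True)  # highest-priced service is paid in full
    return ordered[0] + sum(fee * (1 - reduction) for fee in ordered[1:])

# Two hypothetical imaging services furnished together:
print(apply_mppr([400.00, 180.00], reduction=0.25))  # 400 + 180 * 0.75 = 535.0
```

In this stylized example, the MPPR trims the combined payment from $580 to $535; under the budget neutrality requirement described above, that $45 difference would ordinarily be redistributed across all other services rather than retained as Medicare savings, unless Congress exempted it.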
In response, CMS forwarded a list of over 2,200 service pairs that were furnished together more than 50 percent of the time, but did not tell the workgroup how to prioritize its review of the services. Instead, the workgroup developed its own methodology, targeting service pairs that were almost exclusively furnished together. While the methodology represents a reasonable first step to identify potentially misvalued services, and the workgroup has expended considerable effort and resources in implementing it, the methodology will likely result in limited savings to Medicare. This is because the group did not systematically focus on services that accounted for a large share of Medicare spending, nor did it exclude supplemental services with limited potential for savings. The workgroup focused on service pairs in which the two services were performed together at least 90 percent of the time. The workgroup classified service pairs into two types: type A, in which both services in the pair were performed together at least 90 percent of the time, and type B, in which one service was performed with another service at least 90 percent of the time in a unidirectional relationship (that is, when the first service was performed, the second service was also performed at least 90 percent of the time, but when the second service was performed, the first service was not performed at least 90 percent of the time). The workgroup identified 22 type A and 31 type B service pairs where possible duplication was occurring in physician work. However, these service pairs would likely result in limited savings. First, 19 of the 22 type A pairs and 20 of the 31 type B pairs included supplemental services for which further reductions in fees would likely be small. For example, in performing a three-dimensional heart wall imaging study (also known as a myocardial perfusion imaging study), physicians may take additional measurements of blood flow or heart wall function. These additional services are supplemental to the primary service and are therefore already priced to exclude overlap in practice expenses incurred before and after the service. Second, spending for the lower-priced service in the remaining pairs was minimal: $27 million for the remaining 3 type A services and $117 million for the remaining 11 type B services. Thus, potential savings from combining the remaining service pairs would likely be no more than half these respective amounts, assuming a 50 percent discount was applied to the lower-priced service—a generous assumption, since that is the maximum discount that CMS has applied to services under the MPPR. Another limitation of the workgroup's review of services commonly furnished together is that its process is resource intensive. This element is inherent in a process based on input and consensus from specialty societies. The workgroup follows the RUC's process in that it solicits proposals from specialty societies for potential revisions to the service pairs. The proposals must then be approved by the CPT Editorial Panel, the RUC, and CMS (see fig. 1). To date, the workgroup has identified only a limited number of misvalued services commonly furnished together. Since it began reviewing service pairs in 2007, the workgroup has identified three misvalued services; at the workgroup's recommendation, these echocardiography services were combined into a single code in 2009.
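The workgroup's 90 percent co-occurrence screen lends itself to a simple illustration. The following sketch is an assumption-laden rendering of the type A/type B distinction described above, not the workgroup's actual methodology code: the claim counts are invented, and only the 90 percent threshold and the bidirectional-versus-unidirectional definitions come from the report.

```python
# Rough sketch of the workgroup's co-occurrence screen. The counts are
# invented; the 90 percent threshold and the type A (bidirectional)
# versus type B (unidirectional) definitions follow the text above.

def classify_pair(n_together, n_first, n_second, threshold=0.90):
    """Classify a service pair by how often its services co-occur.

    n_together -- claims on which both services were billed together
    n_first    -- total claims for the first service
    n_second   -- total claims for the second service
    """
    rate_first = n_together / n_first    # how often service 1 is billed with service 2
    rate_second = n_together / n_second  # how often service 2 is billed with service 1
    if rate_first >= threshold and rate_second >= threshold:
        return "type A"  # performed together in both directions
    if rate_first >= threshold or rate_second >= threshold:
        return "type B"  # unidirectional relationship
    return "neither"

print(classify_pair(950, 1000, 980))   # type A: 95 percent and about 97 percent
print(classify_pair(900, 1000, 5000))  # type B: 90 percent one way, 18 percent the other
```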
The earliest any additional changes might be implemented for the type A and B service pairs first identified in 2007 would be 2010. Finally, the workgroup is required to undertake other tasks, including reviewing services because of technological changes or because of high growth, utilization, or intensity. These reviews also require involvement from the specialty societies, in addition to their efforts to revise estimates of physician work and practice expenses on an ongoing basis as well as for the Five-Year Reviews. Despite the demands of these tasks, the RUC has stated that CMS should continue to rely on the workgroup to identify opportunities for efficiencies, rather than implement an MPPR, which it perceives to be an imprecise tool for reducing duplicate payments for portions of services furnished only once. CMS's MPPR policy reflects efficiencies for certain imaging and surgical procedures commonly furnished together, but it is limited in scope. CMS estimated that its use of the MPPR for certain imaging procedures produced savings of about $96 million in 2006. In this instance, Congress exempted these savings from the budget neutrality provision; as a result, the $96 million was not redirected to other services but accrued as savings to the Medicare program. In principle, an MPPR can be implemented quickly to reflect efficiencies for services performed together. In developing the list of services to be selected for an MPPR, CMS does not formally solicit opinion from specialty societies or others until the MPPR is published as a proposed rule. For example, in developing the imaging MPPR, CMS—acting independently of the RUC and specialty societies, on MedPAC's recommendation—identified imaging services that were commonly furnished together and determined an appropriate discount to account for efficiencies occurring in the practice expense component. CMS then published these decisions in its August 2005 proposed rule for specialty society and public comment and finalized its decisions in November 2005 after evaluating and responding to stakeholder comments. These changes went into effect on January 1, 2006. The MPPR as currently used by CMS does have limitations. First, the MPPR does not apply to nonsurgical and nonimaging services that are commonly furnished together. When CMS developed the MPPR for surgical services in 1996, it acknowledged that efficiencies likely also occur for nonsurgical services. However, other than the imaging MPPR, CMS has not implemented an MPPR policy for nonsurgical services. Contractors we interviewed identified many opportunities to expand the MPPR policy to areas where services are commonly furnished together. For example, they stated that similar efficiencies occur when certain types of tests—such as nerve conduction studies or pulmonary function, vision, and hearing tests—are performed together. However, as of July 2009, CMS had not published proposals to systematically review services commonly furnished together, focusing on the most expensive services with the greatest potential for savings to Medicare. Second, the MPPR only reflects efficiencies occurring in practice expenses, not in the physician work component, where certain physician activities may occur only once. For example, a physician's review of a patient's medical history and prior imaging or other test results before the service, and dictation of the final report for the medical record, occur only once.
Under the current payment methodology, the time spent on these activities is included in each service because the services are assumed to be furnished separately. Several organizations we interviewed stated that an MPPR for the physician work component was warranted to avoid duplicate payments to physicians for activities that they perform only once. In its 2006 report, MedPAC similarly recommended that CMS examine efficiencies that might be occurring in the physician work component but are not reflected in the fee schedule. However, CMS has not conducted such a review. Our review of Medicare claims data indicated the potential for reducing excessive physician payments by implementing an MPPR to reflect efficiencies generally occurring in the practice expense component of certain nonsurgical and nonimaging service pairs commonly furnished together. In addition, our analysis of certain imaging services indicated potential for further reducing excessive payments by implementing an MPPR to reflect efficiencies in the physician work component when these services are performed together. Our systematic review of a sample of the most costly service pairs showed potential for annual savings of over one-half billion dollars with implementation of an MPPR to reflect efficiencies in the practice expense component. Contractor Medical Directors we met with determined that an MPPR was appropriate for 149 (over 40 percent) of the 350 most costly service pairs we reviewed with them. The contractor Medical Directors recommended these MPPRs to reflect efficiencies in practice expense activities that are performed only once when the services are furnished together. The 149 service pairs included interventional radiology procedures, physical therapy services, and various tests, such as additional imaging, pulmonary function, vision, hearing, and pathology. For example, a cardiovascular stress test is commonly furnished with a three-dimensional heart imaging test. However, the Medical Directors cautioned that CMS would need to carefully monitor utilization of these services to ensure that physicians did not change their behavior by scheduling services on different days to avoid reduced fees for those subject to an MPPR. Our analysis of 118 imaging service pairs suggests that efficiencies in physician work occur when services are furnished together, and an MPPR policy that reflected these efficiencies could save Medicare over $175 million annually. We sought the advice of contractor Medical Directors and other experts, who agreed that efficiencies occur in physician work when two or more services are furnished together and that an MPPR would be appropriate to account for these efficiencies. Our savings estimate is based on reducing fees for the lower-priced service in each service pair to reflect efficiencies in physician time spent on activities performed before and after the service that are already included in the higher-priced service. For example, the service pair that accounted for the largest share of spending across all imaging service pairs was the physician's interpretation of two computed tomography (CT) scans: CT of the abdomen with dye and CT of the pelvis with dye. Of a total of 18 minutes allotted for interpretation of the second (lower-priced) service, 8 minutes were allotted for activities such as reviewing the patient's prior medical history before the service and reviewing the final report and following up with the referring physician after the service.
Since time spent on these activities was already included in the first (higher-priced) service, we discounted the fee for the lower-priced service by 44 percent (that is, 8 minutes ÷ 18 minutes). While the results of our analysis cannot be generalized to all service pairs, the concept of applying an MPPR for the physician work component could be applied to other services. Our analysis focused on efficiencies in activities performed before and after each service, but there are also likely efficiencies occurring during, or within, the intraservice phase. For example, a practicing radiologist we interviewed stated that when two CT scans of contiguous body areas (e.g., the abdomen and pelvis) are taken at the same time, the total number of actual CT images reviewed is lower than if each scan were performed separately. This is because an abdominal CT generally includes margins of the pelvis and vice versa, and the images of these overlapping margins are examined only once by the radiologist. Other efficiencies relating to technology advances, such as digital storage and retrieval of imaging, may also be realized during the intraservice phase. The RUC and specialty societies may be limited in their ability to help CMS quickly identify opportunities for further savings from efficiencies occurring when services are commonly furnished together. The RUC’s methodology for identifying additional services is not focused on finding savings for the Medicare program. Moreover, the RUC workgroup’s dependence on specialty societies limits its ability to make progress. CMS, on the other hand, has the tools in place to readily expand its MPPR policy to reflect efficiencies occurring in the practice expense and physician work components of services that are commonly furnished together. However, as of July 2009, the agency did not appear to have conducted a systematic review of claims data to identify opportunities with the greatest potential for further savings. Further, unless specifically exempted by Congress (as was done in the DRA for fee changes for certain imaging services), savings would be redistributed to other services in accordance with the budget neutrality provision, and the Medicare program would not realize savings. The Acting Administrator of CMS should take further steps to ensure that fees for services paid under Medicare’s physician fee schedule reflect efficiencies that occur when services are performed by the same physician to the same beneficiary on the same day. These efforts could include systematically reviewing services commonly furnished together and implementing an MPPR to capture efficiencies in both physician work and practice expenses, where appropriate, for these services; focusing on service pairs that have the most impact on Medicare spending; and monitoring the provision of services affected by any new policies it implements to ensure that physicians do not change their behavior in response to these policies. To ensure that savings are realized from the implementation of an MPPR or other policies that reflect efficiencies occurring when services are furnished together, Congress should consider exempting these savings from budget neutrality. We obtained written comments on a draft of this report from the Department of Health and Human Services (HHS), which are reprinted in appendix III. We obtained oral comments from representatives of the AMA. 
HHS concurred with our recommendation and stated that CMS plans to perform an analysis of nonsurgical codes that are furnished together between 60 and 70 percent of the time to determine whether efficiencies occur in the physician work and practice expense components of these services. HHS stated that it would implement policies to reflect these efficiencies, as appropriate, and agreed that CMS should focus on service pairs that have the most impact on Medicare spending. HHS also agreed on the need to monitor physician utilization of services if the MPPR is expanded. HHS suggested that we include in an appendix to the report the specific service pairs that we identified. We did not include such an appendix because our report focuses on illustrating the value of CMS's taking a more systematic approach, rather than focusing on specific service pairs, to ensure that the fee schedule reflects efficiencies when services are provided together. However, we will work with CMS officials and share information to aid in the agency's efforts. AMA representatives expressed three broad concerns about the draft report. First, they disagreed with our assessment of the RUC workgroup's efforts to ensure that services are appropriately coded and valued. Second, they stated that a broad application of the MPPR to account for efficiencies in practice expenses and physician work was not appropriate. Third, they opposed our matter for congressional consideration that suggests that any savings from implementing the report's recommendations be exempted from budget neutrality requirements. AMA representatives disagreed with the report draft's characterization of the efficacy of the RUC workgroup, noting that the RUC workgroup's efforts have been aggressive, timely, and efficient. They also stated that the specialty societies had developed proposals to combine the type A and B service pairs that would result in significant savings should CMS implement them in 2010 or 2011. As an example, they projected that proposals to combine 14 myocardial perfusion services—drawn from the workgroup's 53 type A and type B service pairs—would result in annual savings of about $40 million from efficiencies occurring in the physician work component. In addition, they said that while they did not have an estimate, they believed that savings for the practice expense component would also likely be significant. Finally, representatives stated that in its review of potentially misvalued services, the workgroup may have already identified and made recommendations on some of the unique codes or pairs included in our list of 149 code pairs. We acknowledge in the draft the time and effort the workgroup has expended in identifying potentially misvalued services. However, based on our review of the workgroup's processes and progress to date, we continue to believe that these processes are resource intensive and will likely limit CMS's ability to quickly identify opportunities for savings from those service pairs that account for a high share of Medicare spending. In addition, as stated in the draft, the workgroup has not prioritized its review to systematically focus on services with the greatest potential savings for Medicare. While it is possible that some of the type A and type B service pairs the workgroup identified may be relatively costly, its methodology does not systematically focus on such services.
We believe our assessment of the workgroup's progress remains accurate—as of 2009 the workgroup had identified only three misvalued services that were combined. Finally, from our list of 149 code pairs (which included 116 unique codes), the workgroup had identified only one code pair and 21 unique codes in its review of potentially misvalued codes. AMA representatives stated that a "blanket reduction" of 25 percent for the 149 code pairs based on duplication in time spent on certain preservice and postservice tasks was not appropriate. They contended that for an average service, the intensity of time spent on tasks in the preservice and postservice phases is less than the intensity of time spent on intraservice tasks. AMA representatives added that in some instances a 25 percent reduction may be too high, whereas in other instances it might be more appropriate. They said that for some of the newer codes, the RUC had already taken any potential efficiencies into account, but for some of the older codes, which have not been revalued by the RUC, the 25 percent discount may be more reasonable. The AMA representatives also stated that the RUC workgroup's efforts result in a more accurate and credible system of coding and valuation of services and thus are more effective than the application of "arbitrary policies" such as an MPPR. In the draft report, we acknowledge the limitations of our approach and state that the results of our analysis cannot be generalized to all service pairs. Our draft also states that the discount of 25 percent we applied to the 149 code pairs is consistent with the imaging MPPR that reflects efficiencies in the practice expense component. We do not recommend that CMS adopt our specific methodology; rather we present it as an illustration of potential efficiencies occurring in the physician work component that can be uncovered through a systematic review of service pairs. However, we continue to believe that CMS should undertake a systematic review of services and, where appropriate, expand the MPPR to ensure that physician fee schedule payments reflect efficiencies when services are performed by the same physician to the same beneficiary on the same day. AMA representatives disagreed with the draft's statement that spending on physician services has recently grown at an average annual rate of 6 percent, and opposed our suggestion that Congress consider exempting any savings from implementation of the report's recommendations from federal budget neutrality requirements. AMA representatives told us that the growth rate of per beneficiary spending on Part B physician services has slowed to an annual rate of 3 percent in 2006 and 2007. Regarding our suggestion that Congress consider exempting any savings from budget neutrality, AMA representatives expressed concern that the exemption would have an adverse effect on primary care services that could benefit from the redistribution of savings and stated that savings would be spent on other programs. We agree that the annual rate of growth in per beneficiary spending on physician services slowed somewhat in 2006 and 2007, but even taking this into account, annual spending from 1997 to 2008 grew at an average annual rate of 6 percent. We recommend that Congress consider exempting potential savings from budget neutrality to help ensure the fiscal health of the Medicare program. As we noted in the draft, there is recent precedent for exempting savings from budget neutrality.
We agree that primary care services are important, but Congress has other mechanisms for altering payment for these services. AMA representatives also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Acting Administrator, CMS, and relevant congressional committees. This report also will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In this appendix, we describe the processes we used to determine opportunities for the Centers for Medicare & Medicaid Services (CMS) to avoid excessive payments for services commonly furnished together. To determine additional opportunities for CMS to avoid excessive payments for services that are commonly furnished together, we conducted a systematic review of Medicare claims data using the 2006 Medicare Physician/Supplier Part B 5 Percent Standard Analytic File. To conduct this review, we selected physician services that were paid under the resource-based payment methodology. We generated a list of all service pairs that were furnished by the same physician to the same beneficiary on the same day and made the following exclusions: service pairs with low utilization—those that were billed together fewer than 5,000 times; service pairs containing only the professional portion of a service; service pairs that were already subject to payment policies that reduced payments for one of the services in the pair; service pairs containing supplemental services, which are priced to exclude duplication of physician work and practice expenses that are already included in the primary service; and service pairs containing duplicate services. The remaining list of service pairs was our universe of pairs that represented opportunities for savings from efficiencies that resulted when the two services were furnished together. To target our review to the service pairs that accounted for a large share of Medicare spending, we ranked the service pairs based on spending for the lesser-priced service (since the multiple procedure payment reduction (MPPR) and other policies usually apply to that service) and selected the 350 costliest service pairs based on total spending. We met with contractor Medical Directors and their staffs in five different states to determine if there were efficiencies taking place in the practice expense component when these service pairs were furnished together. To ensure consistency of review across the five contractors, we developed a standard set of questions that each contractor followed in evaluating the service pairs. We asked contractors to examine service descriptions and definitions, as well as coding instructions from the Current Procedural Terminology (CPT) manual and from CMS, and use their clinical judgment and knowledge to assess whether there were efficiencies occurring because certain practice expenses were incurred only once before and after each service in the service pairs. We also asked contractors to determine the payment policy that best captured these efficiencies.
For example, contractors determined whether the services in each pair should be combined into a single code, there should be no payment for one service in the service pair because it was inherently included in the other, or an MPPR should be applied. If an MPPR should be applied, contractors determined the approximate discount that was most appropriate. Since all five contractors determined that an MPPR was the most appropriate payment policy to reflect efficiencies in all 149 of the 350 service pairs they identified as having potential, we estimated total savings to the Medicare program by applying the appropriate discount to spending for the lower-priced service in each pair. Our estimate of savings is conservative for several reasons. First, we excluded services that were billed multiple times on the same day by the same physician, since our focus was on potential savings when two unique services were furnished together. To the extent that there is overlap of physician work and practice expenses in the preservice and postservice phases of these duplicate services, an MPPR should be applied to account for this overlap. Second, we generally applied a discount of 25 percent or less to the service pairs to mirror CMS's discount on imaging service pairs, although, in certain instances, a higher discount was warranted based on the extent of duplication in practice expenses. To estimate potential savings from applying an MPPR to account for duplication of physician work activities occurring before and after each service in the service pairs, we first examined the American Medical Association (AMA) database—the Resource-Based Relative Value System (RBRVS) Data Manager—to determine if data on these activities were available for all service pairs. The RBRVS Data Manager contains vignettes describing the physician's work for a specific procedure for a typical patient in three phases: preservice, intraservice, and postservice. The AMA/Specialty Society Relative Value Scale Update Committee (RUC) bases its estimates of physician work and practice expenses on these vignettes. Because we found that vignettes were missing for a large proportion of services, we used physician time—the amount of time it takes a physician to perform a service—as a proxy for physician work, and discounted the fee for the lesser-priced service in each service pair for the extent of overlap in physician time spent on the preservice and postservice phases across the two services. Using the physician time file on the CMS Web site, we calculated this discount as the sum of time spent on the preservice and postservice phases of the lesser-priced service divided by total time for that service. We limited our analysis to the imaging service pairs that we had identified from our review of Medicare claims data because we wanted to examine a homogeneous group of services where the activities included in the pre- and postservice phases were generally the same across different imaging services, and therefore the time spent on pre- and postservice phases was also likely to be relatively uniform across this group of services. We applied the discount to the professional fee of imaging services, since the professional fee captures the physician's work in interpreting the imaging service. We discussed our approach with several experts in the Medicare physician payment system.
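The discount computation just described can be sketched in a few lines. This is a minimal illustration, assuming a placeholder professional fee; the 8- and 18-minute figures are the CT abdomen/pelvis example cited in this report, and the function names are ours, not CMS's or the AMA's.

```python
# Sketch of the physician-time discount described in this appendix.
# The 8 and 18 minute values match the report's CT abdomen/pelvis
# example; the professional fee below is a hypothetical placeholder.

def work_discount(pre_post_minutes, total_minutes):
    """Share of the lower-priced service's physician time that duplicates
    pre- and postservice activities already included in the
    higher-priced service."""
    return pre_post_minutes / total_minutes

discount = work_discount(pre_post_minutes=8, total_minutes=18)
print(f"{discount:.0%}")  # 44% -- the discount applied in the report's example

hypothetical_professional_fee = 100.00
reduced_fee = hypothetical_professional_fee * (1 - discount)
print(round(reduced_fee, 2))  # 55.56 paid for the lower-priced interpretation
```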
The experts we consulted included an experienced contractor Medical Director; a Medicare Payment Advisory Commission (MedPAC) official who is an expert in Medicare physician payment policy; and a practicing radiologist and leading expert in the field who has written extensively on Medicare payment policy and reimbursement issues. They concurred that our methodology was a reasonable approach to estimating potential savings from an MPPR for physician work. This appendix contains examples of a vignette and a practice expense estimate. The vignette (fig. 2) is used by specialty societies to develop estimates of physician work resources for a service. The practice expense estimate (fig. 3) describes the nonphysician clinical labor, supplies, and equipment resources required for each service. In addition to the contact named above, Phyllis Thorburn, Assistant Director; William A. Crafton; Iola D'Souza; Richard Lipinski; and Elizabeth T. Morrison made key contributions to this report.
Medicare's physician fees may not always reflect efficiencies that occur when a physician performs multiple services for the same patient on the same day, and some resources required for these services do not need to be duplicated. In response to a request from Congress, GAO examined (1) the Centers for Medicare & Medicaid Services' (CMS) efforts to set appropriate fees for services furnished together and (2) additional opportunities for CMS to avoid excessive payments when services are furnished together. GAO examined relevant policies, laws, and regulations; interviewed CMS officials and others; and analyzed claims data to identify opportunities for further savings. CMS has taken steps to ensure that physician fees recognize efficiencies that occur when certain services are commonly furnished together, that is, by the same physician to the same beneficiary on the same day, but has not targeted services with the greatest potential for savings. CMS is reviewing the efforts of a workgroup created by the American Medical Association/Specialty Society Relative Value Scale Update Committee (RUC) in 2007 to examine potential duplication in resource estimates for services furnished together. However, the RUC workgroup has not focused on services that account for the largest share of Medicare spending. For this and other reasons, its methodology to identify and review services furnished together likely will result in limited savings. The workgroup's process is also resource intensive because it depends on input and consensus from specialty societies. Independent of the RUC, CMS has implemented a multiple procedure payment reduction (MPPR) policy for certain imaging and surgical services when two or more related services are furnished together. Under an MPPR, the full fee is paid for the highest-priced service and a reduced fee is paid for each subsequent service to reflect efficiencies in overlapping portions of the practice expense component--clinical labor, supplies, and equipment. For example, a nurse's time preparing a patient for a medical procedure or technician's time setting up the required equipment is incurred only once. The MPPR produced savings of about $96 million in 2006 for imaging services. However, the scope of the policy is limited because the policy does not apply to nonsurgical and nonimaging services commonly furnished together, nor does it specifically reflect efficiencies occurring in the physician work component--the financial value of a physician's time, skill, and effort. For example, when two services are furnished together, a physician reviews a patient's medical records once, but the time for that activity is generally reflected in fees paid for both services. CMS has additional opportunities to reduce excess physician payments that can occur when services are furnished together and Medicare's fees do not reflect the efficiencies realized. GAO's review found that expanding the MPPR to reflect practice expense efficiencies that occur when nonsurgical, nonimaging services are provided together could reduce payments for these services by an estimated one-half billion dollars annually. GAO's review also indicated that expanding the existing MPPR policy to reflect efficiencies in the physician work component of certain imaging services could reduce these payments by an estimated additional $175 million annually. Under the budget neutrality requirement, by law, savings from reductions in fees are redistributed by increasing fees for all other services. 
Thus, these potential savings would accrue as savings to Medicare only if Congress exempted them from the budget neutrality requirement, as was done in the Deficit Reduction Act of 2005 for savings from the changes to certain imaging services fees.
The Broadcasting Board of Governors oversees all U.S. government nonmilitary international broadcasting, which reaches an estimated audience of more than 100 million people each week in more than 125 markets worldwide. The Board manages the operations of the International Broadcasting Bureau (IBB), VOA, the Middle East Television Network (Alhurra and Radio Sawa), RFE/RL, and Radio Free Asia (RFA). In addition to serving as a reliable source of news and information, VOA is responsible for presenting U.S. policies through a variety of means, including officially labeled government editorials. Radio/TV Marti, RFE/RL, and RFA were created by Congress to function as "surrogate" broadcasters, designed to temporarily replace the local media of countries where a free and open press does not exist. Created by the Bush administration and the Board, the Middle East Television Network draws its mission from the core purpose of U.S. international broadcasting, which is to promote and sustain freedom by broadcasting accurate and objective news and information about the United States and the world to audiences overseas. In addition to the stand-alone entities that make up U.S. international broadcasting, Congress and the Board have created other broadcast organizations to meet specific program objectives. Congress created Radio Free Iraq, Radio Free Iran, and Radio Free Afghanistan and incorporated these services into RFE/RL's operations. Under its new strategic approach to broadcasting, the Board and the Bush administration created Radio Sawa, the Afghanistan Radio Network (ARN), Radio Farda, and Alhurra to replace poorly performing services, more effectively combine existing services, and create new broadcast entities where needed. Figure 1 illustrates the Board's current organizational structure. VOA, RFE/RL, and RFA are organized around a collection of language services that produces program content. In some countries, more than one entity broadcasts in the same language. These overlapping services are designed to meet the distinct missions of each broadcast entity. Currently, 42 of the Board's 74 language services (or 57 percent) target the same audiences in the same languages. While some degree of overlap is to be expected given the varying missions of the broadcast entities, the Board has concluded that this level of overlap requires ongoing analysis and scrutiny. The Board's budget for fiscal year 2003 was approximately $552 million, with nearly half of its resources used to cover transmission, technical support, Board and IBB management staff salaries, and other support costs. Among the broadcast entities, funds are roughly equally divided between VOA and the four other U.S. broadcasting entities. Figure 2 provides a breakout of the Board's fiscal year 2003 budget. Our reviews of U.S. international broadcasting reveal that the Board faces the challenges of operating a mix of broadcast entities with varying missions and structures in an environment that presents significant marketing obstacles. As we reported in July 2003, the Board has adopted a new approach to broadcasting that is designed to overcome several of these challenges. The Board's key organizational challenge is the disparate mix of broadcast entities it is tasked with managing. To address this problem, the Board has adopted a "single system" approach to broadcasting whereby broadcast entities are viewed as content providers and the Board assumes a central role in tailoring this content to meet the demands of individual markets.
The Board also faces marketing challenges that include the lack of a unique reason for listeners to tune in, the general failure of language services to identify target audiences within broadcast markets, and poor-to-fair signal quality for many of the broadcast services. Recent initiatives such as Radio Sawa and Alhurra have addressed these deficiencies, and the Board has required that all broadcast services, to the extent feasible, address these issues as well. The Board's major organizational challenge is the need to further consolidate and streamline its operations to better leverage existing resources and generate greater program impact in priority markets. According to the Board's strategic plan, "the diversity of the Broadcasting Board of Governors—diverse organizations with different missions, different frameworks, and different constituencies—makes it a challenge to bring all the separate parts together in a more effective whole." As noted in our 2003 report, senior program managers and outside experts with whom we spoke supported considering the option of consolidating U.S. international broadcasting efforts into a single entity. The Board intends to create a unified broadcasting system by treating the component parts of U.S. international broadcasting as a single system. Under this approach, VOA and other U.S. broadcast entities are viewed as content providers, and the Board's role is to bring this content together to form new services or entities as needed. The single-system approach to managing the Board's diversity requires that the Board actively manage resources across broadcast entities to achieve common broadcast goals. A good example of this strategy in action is Radio Farda, which combined VOA and RFE/RL broadcast content to produce a new broadcast product for the Iranian market. In the case of Radio Sawa, the Board replaced VOA's poorly performing Arabic service with a new broadcast entity. The Board's experience with implementing Radio Sawa suggests that it can be difficult to make disparate broadcast entities work toward a common purpose. For example, Board members and senior planners told us they encountered some difficulties attempting to work with officials to launch Radio Sawa within VOA's structure and were later forced to constitute Radio Sawa as a separate grantee organization. While this move was needed to achieve the Board's strategic objectives, it contributed to the further fragmentation of U.S. international broadcasting. The Board's strategic plan comments openly on the marketing challenges facing U.S. international broadcasters, specifically that many language services lack a unique reason for listeners or viewers to tune in; few language services have identified their target audiences—a key first step in developing a broadcast strategy; many language services have outmoded formats and programs with an antiquated, even Cold War, sound and style; and three-quarters of transmitted hours have poor or fair signal quality. Consistent with its "Marrying the Mission to the Market" philosophy, the Board has sought to address these deficiencies in key markets with new initiatives in Afghanistan, Iran, and the Middle East that support the war on terrorism.
The first project under the new approach, Radio Sawa (recently added to the new Middle East Television Network), was launched in March 2002 using many of the modern, market-tested broadcasting techniques and practices prescribed in its strategic plan, including identifying a target audience, researching the best way to attract the target audience, and delivering programming to the Middle East in a contemporary and appealing format. The Board's other recent initiatives also have adhered to this new approach by being tailored to the specific circumstances of each target market. These initiatives include the Afghanistan Radio Network, the Radio Farda service to Iran, and the Alhurra satellite service to the Middle East. Table 1 describes the Board's recent projects that support the war on terrorism. Although we have not validated available research data, the Board claims that implementation of these marketing improvements has led to dramatic increases in audience listening rates. For example, based on surveys conducted by ACNielsen, the Board maintains that Radio Sawa is now the number one international broadcaster in six countries in the Middle East, reaching an average weekly audience of about 38 percent of the general population and about 49 percent of its 15- to 29-year-old target audience across all six countries. These levels far exceed the 1 to 2 percent audience reach of the VOA Arabic service, which Radio Sawa replaced. In addition, the Board's main research contractor—InterMedia—has indicated that as of March 2004, Radio Farda is the leading international broadcaster in Iran—achieving an average weekly listenership of 15 percent, which is 10 percentage points more than the combined weekly audiences for VOA's and RFE/RL's prior services to Iran. Board officials have told us that preliminary audience reach data for the Board's satellite channel Alhurra will be available by June of this year. While the audience numbers for Radio Sawa and Radio Farda appear to be very positive, as we reported in July 2003, U.S. broadcasters suffer from a credibility problem. To address this issue, we recommended that the Board adopt measures of broadcaster credibility, which the Board has recently implemented. In addition to these new initiatives, the Board has tasked all language services with adopting the tenets of its new approach, such as identifying a target audience and improving signal quality, to the maximum extent possible within existing budget constraints. The Board hopes that these improvements will lead to significant audience gains for a number of higher- and lower-priority services that suffer from very low listening rates. For example, data from the Board's 2003 language review show that more than one-quarter of all language services had listening rates of less than 2 percent at that time. The Board manages its limited resources through its annual language service review process, which is used to address such issues as how resources should be allocated among services based on their priority and impact, how many broadcast services should be carried, what degree of overlap and content duplication should exist among services, and whether services should be eliminated because they have fulfilled their broadcast mission. This process responds to the congressional mandate that the Board periodically review the need to add and delete language services. The Board has interpreted this mandate to include the expansion and reduction of language services. 
Since 1999, the Board has identified more than $50 million in actual or potential savings through the language service review process by moving resources from lower- to higher-priority services, by eliminating language services, and by reducing language service overlap and transmission costs. As noted in our July 2003 report, the Board's strategic plan concludes that if U.S. international broadcasting is to become a vital component of U.S. foreign policy, it must focus on a clear set of broadcast priorities. The plan notes that trying to do too much at the same time fractures this focus, extends the span of control beyond management capabilities, and siphons off precious resources. As discussed in our report, the Board determined that current efforts to support its broadcast languages are "unsustainable" with current resources, given its desire to increase impact in high-priority markets. Our survey of senior program managers revealed that a majority supported significantly reducing the total number of language services and the overlap in services between VOA and the surrogate broadcasters. We found that 18 of 24 respondents said that too many language services are offered. When asked how many countries should have more than one U.S. international broadcaster providing service in the same language, 23 of 28 respondents said this should occur in only a few countries or no countries at all. The Board's annual language service review process serves as its principal tool for managing these complex resource questions. This process has evolved into an intensive program and budget review that culminates with ranked priority and impact listings for each of the Board's 74 language services. These ranked lists become the basis for proposed language service reductions or eliminations and provide the Board with an analytical basis for making such determinations using measures of U.S. strategic interests, audience size, press freedom, and a host of other factors. From the first language service review in 1999 through 2002, the Board reduced the scope of operations of over 25 language services based on their priority and impact rankings and reallocated about $19.7 million to help fund higher-priority broadcast needs such as Radio Sawa and Radio Farda. As discussed in our February 2004 report, a clear example of the language service review process in action was the Board's recent proposal to eliminate 17 Central and Eastern European language services, which served to reduce the overall number of language services and eliminate several overlapping services where the Board believed each broadcast entity's mission had been completed. This decision resulted in nonrecurring budget savings of about $8.8 million for fiscal year 2004 and recurring annual savings of about $12.1 million. Our only criticism of this decision was that the Board's language service review process did not include a measure of press freedom that gauges whether the press acts responsibly and professionally. This is a significant omission in the Board's current measure, given the congressional concern that RFE/RL's broadcast operations not be terminated until a country's domestic media meet this condition. Board officials acknowledged that their existing press freedom measure could be updated to include information on media responsibility and professional quality, and work is under way to develop a more comprehensive measure for the Board's 2004 language service review. 
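To make the mechanics of such a ranking concrete, the sketch below shows one way a priority score could combine measures of the kind the review uses. The services, scales, and weights are hypothetical illustrations, not the Board's actual data or methodology.

```python
# Hypothetical priority scoring for a language service review.
# Weights, scales, and services are invented for illustration only.

SERVICES = [
    # (name, strategic_interest, audience_size, press_freedom), each 0-10;
    # a higher press_freedom score means a freer local press.
    ("Service A", 9, 7, 2),
    ("Service B", 3, 2, 8),
    ("Service C", 6, 5, 5),
]

WEIGHTS = {"strategic_interest": 0.5, "audience_size": 0.3, "press_freedom": 0.2}

def priority_score(strategic: int, audience: int, press_freedom: int) -> float:
    """Weighted score; a freer local press weakens the case for broadcasting."""
    return (WEIGHTS["strategic_interest"] * strategic
            + WEIGHTS["audience_size"] * audience
            + WEIGHTS["press_freedom"] * (10 - press_freedom))

for name, *measures in sorted(SERVICES, key=lambda s: priority_score(*s[1:]), reverse=True):
    print(f"{name}: {priority_score(*measures):.1f}")
```

Under these assumed weights, a service in a strategically important market with a tightly controlled press ranks highest, mirroring the kinds of trade-offs the review process is designed to surface.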
In our September 2000 report, we cited the Board's concerns about overlapping language services and its plans to address this issue in subsequent iterations of the language service review process. In our July 2003 report, we again raised the issue of language service overlap and content duplication between VOA and the surrogates. We also noted that while the Board's strategic plan identified overlap as a challenge, it failed to answer questions about when it is appropriate to broadcast VOA and surrogate programming in the same language. The Board has responded to our observations and recommendations by incorporating a review of overlapping services in its language service review process for 2003. The Board developed several approaches to dealing with overlap. For example, services can be "merged" by having one service subsume another (as was the case with Radio Farda). A second approach is to run alternating services, as is the case with the Afghanistan Radio Network, which runs VOA and RFE/RL programming on a single broadcast stream. Another approach is to simply terminate one or both overlapping services. All of the Board's overlapping services were assessed with these different approaches in mind. As a result of this analysis, the Board identified an estimated $4.9 million in fiscal year 2004 and 2005 savings from overlapping services that could be redirected to higher-priority broadcasting needs, such as expanded Persian-language television for Iran and expanded Urdu-language radio for Pakistan. Mr. Chairman, the Board has revised its strategic planning and performance management system to respond to the recommendations in our July 2003 report aimed at improving the measurement of its results. In that report, we recommended that the Board's new strategic plan include a goal designed to gauge progress toward reaching significant audiences in markets of strategic interest to the United States. Our report also recommended that the Board establish key performance indicators relating to the perceived credibility of U.S. broadcasters, whether audiences are aware of U.S. broadcast offerings in their area, and whether VOA is achieving its mission of effectively explaining U.S. policies and practices to overseas audiences. In response to our recommendation for a goal that would measure progress in reaching large audiences in markets of strategic interest to the United States, the Board replaced the seven strategic goals in its plan with a single goal focused on this core objective. The goal is supported by a number of performance indicators (at the entity and language service level) that are designed to measure the reach of U.S. international broadcasting efforts and whether programming is delivered in the most effective manner possible. Weekly listening rates at the entity level and target audience numbers by language service provide key measures of the Board's reach. Other program effectiveness measures include program quality, the number of broadcast affiliates, signal strength, Internet usage, and cost per listener. In response to our recommendation for a measure of broadcaster credibility to identify whether target audiences believe what they hear, the Board added such a measure to its performance management system. Reaching a large listening or viewing audience is of little use if audiences largely discount the news and information portions of broadcasts. Our survey of senior program managers and discussions with Board staff and outside groups all suggest the possibility that U.S. 
broadcasters (VOA in particular) suffer from a credibility problem with foreign audiences, who may view VOA and other broadcasters as biased sources of information. InterMedia, the Board's audience research contractor, told us that it was working on a credibility index for another customer that could be adapted to meet the Board's needs and, when segmented by language service, would reveal whether there are significant perception problems among key target audiences. However, to develop a similar measure, InterMedia told us that the Board would need to add several questions to its national survey instruments. In response to our finding that the Board lacked a measure of audience awareness, the Board has added such a measure to its performance management system. We determined that this measure would help the Board answer a key question of effectiveness: whether target audiences are even aware of U.S. international broadcasting programming available in their area. Board officials have stated that this measure would help the Board understand a key factor in audience share rates and what could be done to address audience share deficiencies. We found that the Board could develop this measure because it already collects information on language service awareness levels in its audience research and in national surveys for internal use. Finally, in response to our finding that the Board lacked a measure of whether target audiences hear, understand, and retain information broadcast by VOA on American thought, institutions, and policies, Board officials we spoke with told us that they are currently developing this measure for inclusion in the Board's performance management system. The unique value-added component of VOA's broadcasting mission is its focus on issues and information concerning the United States, our system of government, and the rationale behind U.S. policy decisions. Tracking and reporting these data are important in determining whether VOA is accomplishing its mission. Officials from the Board's research firm noted that a measure of this sort is feasible and would require adding appropriate quantitative and qualitative questions to the Board's ongoing survey activities. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For future contacts regarding this testimony, please call Jess Ford or Diana Glod at (202) 512-4128. Individuals making key contributions to this testimony included Janey Cohen, Melissa Pickworth, Addison Ricks, and Michael ten Kate. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The terrorist attacks of September 11, 2001, were a dramatic reminder of the importance of cultivating a better understanding of the United States and its policies with overseas audiences. U.S. public diplomacy activities include the efforts of the Broadcasting Board of Governors, which oversees all nonmilitary U.S. international broadcasting by the Voice of America (VOA) and several other broadcast entities. Such broadcasting helps promote a better understanding of the United States and serves U.S. interests by providing overseas audiences with accurate and objective news about the United States and the world. GAO has issued three reports over the past 4 years examining the organizational, marketing, resource, and performance reporting challenges faced by the Board. Our recommendations to the Board have included the need to address the long-standing issue of overlapping language services (i.e., where two services broadcast in the same language to the same audience) and to strengthen the Board's strategic planning and performance by placing a greater emphasis on results. The Board has taken significant steps to respond to these and other recommendations. The Broadcasting Board of Governors has responded to a disparate organizational structure and marketing challenges by developing a new strategic approach to broadcasting which, among other things, emphasizes reaching large audiences through modern broadcasting techniques. Organizationally, the existence of five separate broadcast entities has led to overlapping language services, duplication of program content, redundant newsgathering and support services, and difficulties coordinating broadcast efforts. Marketing challenges include outmoded program formats, poor signal delivery, and low audience awareness in many markets. Alhurra television broadcasts to the Middle East and Radio Farda broadcasts to Iran illustrate the Board's efforts to better manage program content and meet the needs of its target audiences. Although we have not validated available research data, the Board claims that the application of its new approach has led to dramatic increases in listening rates in key Middle East markets. To streamline its operations, the Board has used its annual language service review process to address such issues as how resources should be allocated among language services on the basis of their priority and impact, what degree of overlap should exist among services, and whether services should be eliminated because they have fulfilled their broadcast mission. Since 1999, the Board has identified more than $50 million in actual or potential savings through this process. In response to our recommendations on the Board's strategic planning and performance management efforts, the Board revised its strategic plan to make reaching large audiences in strategic markets the centerpiece of its performance reporting system. The Board also added broadcaster credibility and audience awareness to its array of performance measures and plans to add a measure of whether VOA is meeting its mandated mission.
As technology has advanced, the federal government has become increasingly dependent on computerized information systems to carry out operations and to process, maintain, and report essential information. Federal agencies rely on such systems to handle large volumes of sensitive data, such as personal information. Ineffective protection of these systems and information can impair delivery of vital services and result in loss or theft of computer resources, assets, and funds; inappropriate access to and disclosure, modification, or destruction of sensitive information, such as PII; undermining of agency missions due to embarrassing incidents that erode the public's confidence in government; damage to networks and equipment; and high costs for remediation. Recognizing the importance of these issues, federal law includes requirements intended to improve the protection of government information and systems. These laws include the Federal Information Security Modernization Act (FISMA) of 2014, which, among other things, requires the head of each agency to provide information security protections commensurate with the risk and magnitude of harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of the agency's information or information systems. More specifically, federal agencies are to develop, document, and implement an agency-wide information security program to provide security for the information and information systems that support the operations of the agency, including those provided or managed by another agency, a contractor, or other organization on behalf of the agency. In addition, the head of each agency is responsible for, among other things, ensuring that senior agency officials carry out their information security responsibilities and that all personnel are held accountable for complying with the agency-wide information security program. The act also assigned OMB and the Department of Homeland Security (DHS) oversight responsibilities to assist agencies in effectively implementing information security protections. In addition, NIST is responsible for developing standards and guidelines that include minimum information security requirements. IRS's mission is to provide America's taxpayers top-quality service by helping them to understand and meet their tax responsibilities and to enforce the law with integrity and fairness to all. In carrying out its mission, IRS relies extensively on computerized information systems, which it must effectively secure to protect sensitive financial and taxpayer data for the collection of taxes, processing of tax returns, and enforcement of federal tax laws. During fiscal year 2015, IRS collected more than $3.3 trillion; processed more than 243 million tax returns and other forms; and issued more than $403 billion in tax refunds. IRS employs about 90,000 people in its Washington, D.C., headquarters and at more than 550 offices in all 50 states, U.S. territories, and some U.S. embassies and consulates. To manage its data and information, the agency operates two enterprise computing centers. It also collects and maintains a significant amount of personal and financial information on each U.S. taxpayer. Protecting this sensitive information is essential to protecting taxpayers' privacy and preventing financial loss and damages that could result from identity theft and other financial crimes. Further, IRS's size and complexity add unique operational challenges. 
The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, integrity, and availability of the information and systems that support the agency and its operations. Within IRS, the senior agency official responsible for information security is the Associate CIO, who heads the IRS Information Technology Cybersecurity organization. Risks to cyber-based assets can originate from unintentional or intentional threats. Unintentional threats can be caused by natural disasters, defective computer or network equipment, software coding errors, and the actions of careless or poorly trained employees. Intentional threats include targeted and untargeted attacks from criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. These adversaries vary in terms of their capabilities, willingness to act, and motives. These threat sources make use of various techniques—or exploits—that may adversely affect federal information, computers, software, networks, and operations. These exploits are carried out through various conduits, including websites, e-mails, wireless and cellular communications, Internet protocols, portable media, and social media. Further, adversaries can leverage common computer software programs as a means by which to deliver a threat by embedding exploits within software files that can be activated when a user opens a file within its corresponding program. The number of information security incidents affecting systems supporting the federal government is increasing. Specifically, the number of incidents reported by federal agencies to the U.S. Computer Emergency Readiness Team (US-CERT) increased from 5,503 in fiscal year 2006 to 67,168 in fiscal year 2014, an increase of 1,121 percent. This upward trend continues. According to OMB, agencies reported 77,183 incidents in fiscal year 2015. Similarly, the number of incidents involving PII reported by federal agencies has more than doubled in recent years, from 10,481 in 2009 to 27,624 in 2014. Moreover, for fiscal year 2015, OMB reported that federal agencies spent about $13.1 billion on cybersecurity, and agencies budgeted about $14 billion for cybersecurity for fiscal year 2016. This amount may increase significantly, as the president's fiscal year 2017 budget proposes investing over $19 billion in resources for cybersecurity. Cyber incidents can adversely affect national security, damage public health and safety, and compromise sensitive information. Regarding IRS specifically, two recent incidents illustrate the impact on taxpayer and other sensitive information: In June 2015, the Commissioner of the IRS testified that unauthorized third parties had gained access to taxpayer information from its Get Transcript application. According to officials, criminals used taxpayer-specific data acquired from non-agency sources to gain unauthorized access to information on approximately 100,000 tax accounts. These data included Social Security information, dates of birth, and street addresses. In an August 2015 update, IRS reported this number to be about 114,000, and reported that an additional 220,000 accounts had been inappropriately accessed. In a February 2016 update, the agency reported that an additional 390,000 accounts had been accessed. Thus, about 724,000 accounts were reportedly affected. The online Get Transcript service has been unavailable since May 2015. 
In March 2016, IRS stated that as part of its ongoing security review, it had temporarily suspended the Identity Protection Personal Identification Number (IP PIN) service on IRS.gov. The IP PIN is a single-use identification number provided to taxpayers who are victims of identity theft (IDT) to help prevent future IDT refund fraud. The service on IRS's website allowed taxpayers to retrieve their IP PINs online by passing IRS's authentication checks. These checks confirm taxpayer identity by asking for personal, financial, and tax-related information. IRS stated that it was conducting a further review of the IP PIN service and looking at ways to further strengthen its security features. As of April 7, 2016, the online service was still suspended. As we reported in March 2016, IRS had implemented numerous protections over key financial and tax processing systems; however, it had not always effectively implemented access and other controls, including elements of its information security program. Access controls are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. They include identification and authentication, authorization, cryptography, audit and monitoring, and physical security, among others. In our most recent review, we determined that IRS had improved access controls, but some weaknesses remained. Identifying and authenticating users—such as through user account-password combinations—provides the basis for establishing accountability and controlling access to a system. IRS established policies for identification and authentication, including requiring multifactor authentication for local and network access accounts and establishing password complexity and expiration requirements. It also improved identification and authentication controls by, for example, expanding the use of an automated mechanism to centrally manage, apply, and verify password requirements. However, weaknesses in identification and authentication controls remained. For example, the agency used easily guessable passwords on servers supporting key systems. In addition, while IRS continued to expand the use of two-factor access to its network, the Treasury Inspector General for Tax Administration reported that IRS had not fully implemented unique user identification and authentication or remote electronic authentication that complies with federal requirements. Authorization controls limit what actions users are able to perform after being allowed into a system and should be based on the concept of "least privilege," granting users the least amount of rights and privileges necessary to perform their duties. While IRS established policies for authorizing access to its systems, it continued to permit excessive access in some cases. For example, users were granted rights and permissions in excess of what they needed to perform their duties, including for an application used to process electronic tax payment information and a database on a human resources system. Cryptography controls protect sensitive data and computer programs by rendering data unintelligible to unauthorized users and protecting the integrity of transmitted or stored data. IRS policies require the use of encryption, and the agency continued to expand its use of encryption to protect sensitive data. However, key systems we reviewed had not been configured to encrypt sensitive user authentication data. 
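As an illustration of the password-policy controls discussed above, the following is a minimal Python sketch that checks a password against complexity and expiration rules. The specific thresholds (a 12-character minimum, a 90-day expiration, a small deny list) are illustrative assumptions, not IRS's actual standards.

```python
# Illustrative password-policy check; thresholds are assumptions, not IRS policy.
import re
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)
DENY_LIST = {"password", "welcome1", "admin123"}  # stand-in for a real deny list

def policy_violations(password: str, last_changed: date) -> list[str]:
    """Return a list of violations; an empty list means the password complies."""
    problems = []
    if len(password) < 12:
        problems.append("shorter than 12 characters")
    if password.lower() in DENY_LIST:
        problems.append("on the deny list of easily guessed passwords")
    if not (re.search(r"[a-z]", password) and re.search(r"[A-Z]", password)
            and re.search(r"\d", password) and re.search(r"[^\w\s]", password)):
        problems.append("does not mix upper/lower case, digits, and symbols")
    if date.today() - last_changed > MAX_PASSWORD_AGE:
        problems.append("older than the 90-day expiration window")
    return problems

print(policy_violations("welcome1", date(2015, 1, 1)))  # several violations
```

An "easily guessable" password of the kind cited in the finding fails several of these checks at once, which is why centrally managed, automated enforcement matters.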
Audit and monitoring is the regular collection, review, and analysis of events on systems and networks in order to detect, respond to, and investigate unusual activity. IRS established policies and procedures for auditing and monitoring its systems and continued to enhance its capability by, for example, implementing an automated mechanism to log user activity on its access request and approval system. But it had not established logging for two key applications used to transfer financial data and to access and manage taxpayer accounts, nor was the agency consistently maintaining key system and application audit plans. Physical security controls, such as physical access cards, limit access to an organization's overall facility and areas housing sensitive IT components. IRS established policies for physically protecting its computer resources and implemented physical security controls at its enterprise computer centers, such as a dedicated guard force at each center. However, the agency had yet to address weaknesses in its review of access lists for both employees and visitors to sensitive areas. IRS also had weaknesses in configuration management controls, which are intended to prevent unauthorized changes to information system resources (e.g., software and hardware) and provide assurance that systems are configured and operating securely. Specifically, while IRS developed policies for managing the configuration of its IT systems and improved some configuration management controls, it did not, for example, ensure security patch updates were applied in a timely manner to databases supporting two key systems we reviewed, including a patch that had been available since August 2012. To its credit, IRS had established contingency plans for the systems we reviewed, which help ensure that, when unexpected events occur, critical operations can continue without interruption or can be promptly resumed, and that information resources are protected. Specifically, IRS had established policies for developing contingency plans for its information systems and for testing those plans, as well as for implementing and enforcing backup procedures. Moreover, the agency had documented and tested contingency plans for its systems and improved continuity of operations controls for several systems. More broadly, the control weaknesses can be attributed in part to IRS's inconsistent implementation of elements of its agency-wide information security program. The agency established a comprehensive framework for its program, including assessing risk for its systems, developing system security plans, and providing employees with security awareness and specialized training. However, IRS had not updated key mainframe policies and procedures to address issues such as comprehensively auditing and monitoring access. In addition, the agency had not fully mitigated previously identified deficiencies or ensured that its corrective actions were effective. During our most recent review, IRS told us it had completed corrective actions for 28 of our prior recommendations; however, we determined that 9 of these had not been effectively implemented. 
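To illustrate the application-level audit logging whose absence is noted above, here is a minimal sketch that records who did what to which account, and when, so that unusual activity can later be detected and investigated. The event fields and file destination are illustrative assumptions.

```python
# Illustrative application audit logging: one structured event per
# security-relevant action, written to an append-only log for later review.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("audit.log"))

def record_access(user_id: str, action: str, account: str, succeeded: bool) -> None:
    """Record who did what to which account, when, and whether it succeeded."""
    audit_log.info(json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "action": action,       # e.g., "view", "modify", "transfer"
        "account": account,
        "succeeded": succeeded,
    }))

record_access("analyst42", "view", "account-0001", True)
```

Without events like these, there is no trail from which monitoring tools or investigators can reconstruct what happened on a system.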
The collective effect of the deficiencies in information security from prior years that continued to exist in fiscal year 2015, along with the new deficiencies we identified, is serious enough to merit the attention of those charged with governance of IRS and therefore represented a significant deficiency in IRS's internal control over financial reporting systems as of September 30, 2015. To assist IRS in fully implementing its agency-wide information security program, we made two new recommendations to more effectively implement security-related policies and plans. In addition, to assist IRS in strengthening security controls over the financial and tax processing systems we reviewed, we made 43 technical recommendations in a separate report with limited distribution to address 26 new weaknesses in access controls and configuration management. Implementing these recommendations—in addition to the 49 outstanding recommendations from previous audits—will help IRS improve its controls for identifying and authenticating users, limiting users' access to the minimum necessary to perform their job-related functions, protecting sensitive data when they are stored or in transit, auditing and monitoring system activities, and physically securing its IT facilities and resources. Table 1 below shows the number of our prior recommendations to IRS that were not implemented at the beginning of our fiscal year 2015 audit, how many were resolved by the end of the audit, the number of new recommendations, and the total number of outstanding recommendations at the conclusion of the audit. In commenting on drafts of the reports presenting the results of our fiscal year 2015 audit, the IRS Commissioner stated that while the agency agreed with our new recommendations, it will review them to ensure that its actions include sustainable fixes that implement appropriate security controls balanced against IT and human capital resource limitations. We have also previously reported that IRS can take steps to improve its response to data breaches involving the inappropriate disclosure—or potential disclosure—of personally identifiable information. Specifically, in December 2013 we reported on the extent to which data breach policies at eight agencies, including IRS, adhered to requirements and guidance set forth by OMB and NIST. While the agencies in our review generally had policies and procedures in place that reflected the major elements of an effective data breach response program, implementation of these policies and procedures was not consistent. With respect to IRS, we determined that its policies and procedures generally reflected key practices, although the agency did not require considering the number of affected individuals as a factor when determining whether affected individuals should be notified of a suspected breach. In addition, IRS did not document lessons learned from periodic analyses of its breach response efforts. We recommended that IRS correct these weaknesses, but the agency has yet to fully address them. The importance of protecting taxpayer information is further highlighted by the billions of dollars that have been lost to IDT refund fraud, which continues to be an evolving threat. IDT refund fraud occurs when a fraudster obtains an individual's Social Security number, date of birth, or other PII and uses it to file a fraudulent tax return seeking a refund. 
This crime burdens legitimate taxpayers because authenticating their identities is likely to delay the processing of their tax returns and refunds. Moreover, the victim's PII can potentially be used to commit other crimes. Given current and emerging risks, in 2015 we expanded our high-risk area on the enforcement of tax laws to include IRS's efforts to address IDT refund fraud. IRS develops estimates of the extent of IDT refund fraud to help direct its efforts to identify and prevent the crime. While its estimates have inherent uncertainty, IRS estimated that it prevented or recovered $22.5 billion in fraudulent IDT refunds in filing season 2014. However, it also estimated that it paid $3.1 billion in fraudulent IDT refunds. IRS has taken steps to address IDT refund fraud; however, it remains a persistent and evolving threat. For example, in its fiscal year 2014-2017 strategic plan, IRS increased resources dedicated to combating IDT and other types of refund fraud. In 2015, IRS reported allocating more than 4,000 full-time equivalent staff and spending $470 million on refund fraud and IDT activities. In addition, IRS received an additional $290 million for fiscal year 2016 to improve customer service, IDT identification and prevention, and cybersecurity efforts. The agency has also taken actions to improve customer service related to IDT fraud by, for example, providing an increased level of service to taxpayers calling its identity theft toll-free phone line. In addition, IRS has worked with tax preparation professionals, states, and financial institutions to better detect and prevent IDT fraud. These efforts notwithstanding, fraudsters continue to adapt their schemes to identify weaknesses in IDT defenses, such as by gaining access to taxpayers' tax return transcripts through IRS's online Get Transcript service. According to IRS officials, this allows fraudsters to create historically consistent returns that are hard to distinguish from those filed by legitimate taxpayers. These continuing challenges highlight the need for additional actions by IRS. As we have reported, there are steps IRS can take to, among other things, better authenticate the identity of taxpayers before issuing refunds. In January 2015, we reported that IRS's authentication tools have limitations. For example, individuals could obtain an e-file PIN by providing their name, Social Security number, date of birth, address, and filing status for IRS's e-file PIN application. Identity thieves can easily find this information, allowing them to bypass some, if not all, of IRS's automatic checks. After obtaining an e-file PIN, a fraudster could then file a fraudulent return through IRS's normal return processing. Accordingly, we recommended that IRS assess the costs, benefits, and risks of its authentication options. In November 2015, IRS officials told us that the agency had developed guidance for its Identity Assurance Office to assess the costs, benefits, and risks of authentication tools. In February 2016, officials told us that this office plans to complete a strategic plan for taxpayer authentication across the agency in September 2016. Until it completes these steps, IRS will lack key information to make decisions about whether and how much to invest in authentication options. 
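The e-file PIN example above is, at bottom, a weakness of purely knowledge-based authentication: anyone holding the victim's PII passes every "something you know" check. The sketch below contrasts that with an added possession factor; the record fields and the one-time-code step are illustrative assumptions, not a description of IRS's actual system.

```python
# Knowledge-based authentication (KBA) versus an added possession factor.
# All data values below are invented for illustration.

def kba_check(claimed: dict, on_file: dict) -> bool:
    """'Something you know': compare asserted PII to the record on file."""
    fields = ("name", "ssn", "date_of_birth", "address", "filing_status")
    return all(claimed.get(f) == on_file.get(f) for f in fields)

def possession_check(entered_code: str, code_sent_to_registered_device: str) -> bool:
    """'Something you have': a code sent to a device the thief does not control."""
    return entered_code == code_sent_to_registered_device

victim = {"name": "A. Taxpayer", "ssn": "000-00-0000",
          "date_of_birth": "1970-01-01", "address": "1 Main St",
          "filing_status": "single"}

# A thief who has obtained the victim's PII passes every knowledge check...
print(kba_check(dict(victim), victim))       # True
# ...but fails a possession check tied to the victim's registered device.
print(possession_check("guess", "483920"))   # False
```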
Under FISMA, the Director of OMB is responsible for developing and overseeing the implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security and certain other systems. The director is also responsible for coordinating the development of standards and guidelines by NIST. For its part, NIST is responsible under FISMA for developing security standards and guidelines for agencies that include standards for categorizing information and information systems according to ranges of impact levels, minimum security requirements for information and information systems in risk categories, guidelines for detection and handling of information security incidents, and guidelines for identifying an information system as a national security system. Accordingly, OMB and NIST have prescribed policies, standards, and guidelines that are intended to assist federal agencies with identifying and providing information security protections commensurate with the risk and magnitude of harm resulting from the unauthorized access, use, disclosure, alteration, and destruction of information and information systems, including those systems operated by a contractor or others on behalf of the agency. These include the following:

OMB M-14-03, Enhancing the Security of Federal Information and Information Systems, which provides agencies with direction for managing information security risk on a continuous basis, including requirements for establishing information security continuous monitoring programs.

NIST Federal Information Processing Standard 199, Standards for Security Categorization of Federal Information and Information Systems, which requires agencies to categorize their information systems as low-impact, moderate-impact, or high-impact for the security objectives of confidentiality, integrity, and availability.

NIST Federal Information Processing Standard 200, Minimum Security Requirements for Federal Information and Information Systems, which specifies minimum security requirements for federal agency information and information systems and a risk-based process for selecting the security controls necessary to satisfy these requirements.

NIST Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations, which provides a catalog of security and privacy controls for federal information systems and organizations and a process for selecting controls.

OMB and NIST also have provided guidance to agencies on procedures for authenticating users to federal systems and websites, including the following:

OMB M-15-13, Policy to Require Secure Connections across Federal Websites and Web Services, which requires all publicly accessible federal websites and web services to provide service through a secure connection.

OMB M-04-04, E-Authentication Guidance for Federal Agencies, which addresses federal government services accomplished using the Internet, instead of on paper, and calls for identity verification or authentication to make sure that online government services are secure and protect privacy. This guidance established four levels of identity assurance for electronic transactions requiring authentication. Each level describes the agency's degree of certainty that a user has presented an identifier that refers to his or her identity:

Level 1: little or no confidence in the asserted identity's validity.

Level 2: some confidence in the asserted identity's validity. 
Level 3: high confidence in the asserted identity's validity.

Level 4: very high confidence in the asserted identity's validity.

NIST Special Publication 800-63-2, Electronic Authentication Guideline, provides technical guidelines for federal agencies implementing electronic authentication and covers remote authentication of users (such as employees, contractors, or private individuals) interacting with government IT systems over open networks. Specifically, it provides technical requirements for agencies to use in selecting technology to achieve specified levels of e-authentication assurance, as defined by OMB and illustrated by the following examples:

Level 1: Identity proofing is not required. Successful authentication occurs when an individual proves through the means of authentication that he or she possesses and controls the token. The cryptographic methods used at this level may still allow someone with malicious intent to intercept the transmission of a password through eavesdropping and crack it using a dictionary attack (i.e., guessing a password through trial-and-error using a dictionary).

Level 2: Requires single-factor remote authentication, using one of three factors—something you know (e.g., a password), something you have (e.g., an identification badge), or something you are (e.g., a fingerprint). Identity proofing requirements are introduced, requiring presentation of identifying materials or information. Approved cryptographic methods would not allow the type of eavesdropping attack that is possible at Level 1.

Level 3: Requires multi-factor remote authentication, requiring at least two authentication factors. An individual proves possession of a physical or software token in combination with some memorized knowledge. Approved cryptographic methods should be strong enough to protect against impersonation of the verifying entity.

Level 4: Is intended to provide the highest practical remote network authentication assurance, requiring proof of possession of a key through a cryptographic protocol. At this level, in-person identity proofing is required. It is otherwise similar to Level 3, except with stronger cryptographic methods in place.

Federal law also gives OMB and DHS responsibility and authority for oversight of operational aspects of federal information security. In particular, the OMB Director is charged with overseeing and enforcing agency compliance with information security requirements by taking certain actions authorized by relevant federal law (discussed in more detail below), and OMB has developed various mechanisms to carry out its oversight function.

Budgetary authority: Federal law gives OMB the power of enforcement and accountability related to evaluating agencies' management of their information resources, which includes ensuring that information security policies, procedures, and practices are adequate. In particular, in enforcing accountability, OMB is empowered to recommend reductions or increases in an agency's budget and restrict the availability of funds for information resources, among other things.

OMB Cyber Unit: In fiscal year 2015, OMB established the OMB Cyber and National Security Unit (OMB Cyber) within the Office of the Federal Chief Information Officer. This unit is responsible for strengthening federal cybersecurity through oversight of agency and government-wide programs, issuing and implementing policies to address emerging IT security risks, and oversight of government-wide response to major incidents and vulnerabilities. 
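As a compact restatement of the four OMB/NIST e-authentication assurance levels summarized above, the following sketch encodes them as a lookup table that an application could consult when deciding how strongly to authenticate a transaction. The data structure and the risk-to-level mapping are illustrative assumptions; the level descriptions follow OMB M-04-04 and NIST SP 800-63-2 as characterized in this statement.

```python
# Illustrative lookup table for the four e-authentication assurance levels.
from dataclasses import dataclass

@dataclass(frozen=True)
class AssuranceLevel:
    level: int
    confidence: str
    identity_proofing: str
    authentication: str

E_AUTH_LEVELS = {
    1: AssuranceLevel(1, "little or no confidence", "not required",
                      "single token, e.g., a password"),
    2: AssuranceLevel(2, "some confidence", "presentation of identifying materials",
                      "single-factor remote authentication"),
    3: AssuranceLevel(3, "high confidence", "required",
                      "multi-factor: token possession plus memorized knowledge"),
    4: AssuranceLevel(4, "very high confidence", "in-person proofing required",
                      "multi-factor with cryptographic proof of key possession"),
}

def required_level(transaction_risk: str) -> AssuranceLevel:
    """Hypothetical mapping from a transaction's risk to a minimum level."""
    return E_AUTH_LEVELS[
        {"low": 1, "moderate": 2, "high": 3, "very high": 4}[transaction_risk]
    ]

print(required_level("high").authentication)
```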
CyberStat Reviews: OMB has also established the "CyberStat Review" process, which involves evidence-based meetings led by OMB to ensure agencies are accountable for their cybersecurity posture, while assisting them in developing targeted, tactical actions to deliver results.

FISMA reporting: As required by FISMA, OMB reports annually to Congress on the effectiveness of information security policies and practices at executive branch agencies during the preceding year, along with a summary of evaluations conducted by agency inspectors general.

Regarding DHS, the Federal Information Security Modernization Act of 2014 codified its responsibility for certain operational aspects of federal agency cybersecurity. In particular, DHS is responsible for administering, in consultation with OMB, the implementation of agency information security policies and practices for information systems (other than national security systems and the Department of Defense and intelligence community "debilitating impact" systems); developing, issuing, and overseeing the implementation of binding operational directives to agencies on matters such as incident reporting, the contents of agencies' annual reports, and other operational requirements; and operating the federal information security incident center (the U.S. Computer Emergency Readiness Team, or US-CERT), deploying technology to continuously diagnose and mitigate threats, compiling and analyzing data, and developing and conducting targeted operational evaluations, including threat and vulnerability assessments of systems.

In May 2015, DHS issued its first directive, which required all departments and agencies to review and mitigate all critical vulnerabilities on their Internet-facing systems. DHS identifies these vulnerabilities using scanning tools and reports the results to agencies on a weekly basis. Agencies are then required to mitigate the DHS-identified vulnerabilities within 30 days of the report, or provide a justification to DHS outlining barriers, planned steps for resolution, and a time frame for mitigation. DHS has also supplied agencies with tools and technologies to assist in protecting against cyber threats and vulnerabilities. For example:

Continuous Diagnostics and Mitigation Program: Since fiscal year 2013, DHS has provided agencies the opportunity to use a suite of tools and capabilities to identify cybersecurity risks on an ongoing basis, prioritize these risks based on potential impacts, and enable cybersecurity personnel to mitigate the most significant problems first.

National Cybersecurity Protection System: NCPS is an integrated system-of-systems intended to deliver a range of capabilities for intrusion detection, intrusion prevention, analytics, and information sharing. When deployed on an agency's connection to the Internet, the system monitors inbound and outbound traffic for malicious activity.

In summary, while IRS has made progress in implementing information security controls, it needs to continue to address weaknesses in access controls and configuration management and consistently implement all elements of its information security program. The risks IRS is exposed to have been illustrated by recent incidents involving public-facing applications, highlighting the importance of securing systems that contain sensitive taxpayer and financial data. In addition, fully implementing key elements of a breach response program will help ensure that when breaches of sensitive data do occur, their impact on affected individuals will be minimized. 
IRS also needs to assess the costs, benefits, and risks of alternatives for better authenticating taxpayers who access its systems. Finally, strengthening the security posture of IRS—and other agencies—also depends on the key roles played by OMB, NIST, and DHS in providing oversight and guidance from a government-wide perspective, such as that related to improving authentication. Chairwoman Comstock, Ranking Member Lipinski, and Members of the Subcommittee, this concludes my statement. I would be happy to answer any questions you have. If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov, Nancy Kingsbury at (202) 512-2928 or kingsburyn@gao.gov, or James R. McTigue, Jr. at (202) 512-9110 or mctiguej@gao.gov. Other key contributors to this statement include Jeffrey Knott, Larry Crosland, John de Ferrari, and Neil A. Pinney (assistant directors); Dawn E. Bidne; Mark Canter; James Cook; Shannon J. Finnegan; Lee McCracken; Justin Palk; J. Daniel Paulk; Monica Perez-Nelson; David Plocher; Erin Saunders Rath; and Daniel Swartz. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In collecting taxes, processing returns, and providing taxpayer service, IRS relies extensively on computerized information systems. Accordingly, it is critical that sensitive taxpayer and other data are protected. Recent data breaches at IRS highlight the vulnerability of taxpayer information. In addition, identity theft refund fraud is an evolving threat that occurs when a thief files a fraudulent tax return using a legitimate taxpayer's identity and claims a refund. Since 1997, GAO has designated federal information security as a government-wide high-risk area, and in 2015 it expanded this area to include the protection of personally identifiable information. GAO also added identity theft refund fraud to its high-risk area on the enforcement of tax laws. This statement discusses (1) IRS's information security controls over tax processing and financial systems and (2) the roles that federal agencies with government-wide information security responsibilities play in providing guidance and oversight to agencies. This statement is based on previously published GAO work and a review of federal guidance. In March 2016, GAO reported that the Internal Revenue Service (IRS) had instituted numerous controls over key financial and tax processing systems; however, it had not always effectively implemented safeguards intended to properly restrict access to systems and information. In particular, while IRS had improved some of its access controls, weaknesses remained with identifying and authenticating users, authorizing users' level of rights and privileges, encrypting sensitive data, auditing and monitoring network activity, and physically securing its computing resources. These weaknesses were due in part to IRS's inconsistent implementation of its agency-wide security program, including not fully implementing GAO recommendations. The table below shows the status of prior and new GAO recommendations as of the end of its fiscal year (FY) 2015 audit of IRS's information security. GAO concluded that these weaknesses collectively constituted a significant deficiency for the purposes of financial reporting for fiscal year 2015. Until they are effectively mitigated, taxpayer and financial data will continue to be exposed to unnecessary risk. The importance of protecting taxpayer information is further highlighted by the billions of dollars that have been lost to identity theft refund fraud, which continues to be an evolving threat. While IRS has taken steps to address this issue, as GAO reported in January 2015, it has yet to assess the costs, benefits, and risks of methods for improving the authentication of taxpayers' identities. The Office of Management and Budget (OMB), the National Institute of Standards and Technology (NIST), and the Department of Homeland Security (DHS) provide government-wide guidance and oversight for federal information security. These agencies have taken a number of actions to carry out these responsibilities. For example:

OMB has prescribed security policies, including direction on ensuring that online services provided by agencies are secure and protect privacy.

NIST has developed standards and guidelines for implementing security controls, including those for authenticating users during online transactions.

DHS has issued a directive requiring departments and agencies to mitigate critical vulnerabilities on their Internet-facing systems. It also assists agencies in monitoring their networks for malicious traffic. 
In addition to 49 prior recommendations that had not been implemented, GAO made 45 new recommendations to IRS in March 2016 to further improve its information security controls and program. GAO also recommended that IRS assess costs, benefits, and risks of taxpayer authentication options.
Today, federal employees are issued a wide variety of identification (ID) cards, which are used to access federal buildings and facilities, sometimes solely on the basis of visual inspection by security personnel. These cards often cannot be used for other important identification purposes—such as gaining access to an agency’s computer systems—and many can be easily forged or stolen and altered to permit access by unauthorized individuals. In general, the ease with which traditional ID cards—including credit cards—can be forged has contributed to increases in identity theft and related security and financial problems for both individuals and organizations. Smart cards are plastic devices about the size of a credit card that contain an embedded integrated circuit chip capable of both storing and processing data. Figure 1 shows a typical example of a smart card. The unique advantage of smart cards—as opposed to cards with simpler technology, such as magnetic stripes or bar codes—is that smart cards can exchange data with other systems and process information rather than simply serving as static data repositories. By securely exchanging information, a smart card can help authenticate the identity of the individual possessing the card in a far more rigorous way than is possible with simpler, traditional ID cards. A smart card’s processing power also allows it to exchange and update many other kinds of information with a variety of external systems, which can facilitate applications such as financial transactions or other services that involve electronic record keeping. Smart cards can also be used to significantly enhance the security of an organization’s computer systems by tightening controls over user access. A user wishing to log on to a computer system or network with controlled access must “prove” his or her identity to the system—a process called authentication. Many systems authenticate users by merely requiring them to enter secret passwords, which provide only modest security because they can be easily compromised. Substantially better user authentication can be achieved by supplementing passwords with smart cards. To gain access under this scenario, a user is prompted to insert a smart card into a reader attached to the computer as well as type in a password. This authentication process is significantly harder to circumvent because an intruder would need not only to guess a user’s password but also to possess the same user’s smart card. Even stronger authentication can be achieved by using smart cards in conjunction with biometrics. Smart cards can be configured to store biometric information (such as fingerprint templates or iris scans) in electronic records that can be retrieved and compared with an individual’s live biometric scan as a means of verifying that person’s identity in a way that is difficult to circumvent. A system requiring users to present a smart card, enter a password, and verify a biometric scan provides what security experts call “three-factor” authentication, the three factors being “something you possess” (the smart card), “something you know” (the password), and “something you are” (the biometric). Systems employing three-factor authentication are considered to provide a relatively high level of security. The combination of smart cards and biometrics can provide equally strong authentication for controlling access to physical facilities. Smart cards can also be used in conjunction with PKI technology to better secure electronic messages and transactions. 
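Before turning to PKI, the following minimal sketch illustrates the three-factor flow described above: something you possess (the card), something you know (a password), and something you are (a biometric). The matching functions are simplified stand-ins for real card, password-hash, and biometric checks.

```python
# Illustrative three-factor authentication: possession, knowledge, inherence.
import hmac
import hashlib

def card_is_present_and_valid(card_id: str, issued_cards: set[str]) -> bool:
    return card_id in issued_cards                      # factor 1: possession

def password_matches(password: str, salt: bytes, stored_hash: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)  # factor 2: knowledge

def biometric_matches(live_match_score: float, threshold: float = 0.95) -> bool:
    return live_match_score >= threshold                # factor 3: inherence

def three_factor_login(card_id, issued, password, salt, stored, live_score) -> bool:
    """All three factors must pass; compromising any one is not enough."""
    return (card_is_present_and_valid(card_id, issued)
            and password_matches(password, salt, stored)
            and biometric_matches(live_score))

salt = b"per-user-salt"
stored = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 100_000)
print(three_factor_login("card-001", {"card-001"}, "correct horse",
                         salt, stored, live_score=0.97))  # True
```

The design point is that the factors fail independently: a stolen card, a guessed password, or a spoofed biometric is not sufficient on its own.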
A properly implemented and maintained PKI can offer several important security services, including assurance that (1) the parties to an electronic transaction are really whom they claim to be, (2) the information has not been altered or shared with any unauthorized entity, and (3) neither party will be able to wrongfully deny taking part in the transaction. An essential component is the use of electronic encryption keys, called "private keys," that are unique to each user and must be kept secret and secure. For example, storing and using private keys on a user's computer leaves them susceptible to attack because a hacker who gains control of that computer may then be able to use the private key stored in it to fraudulently sign messages and conduct electronic transactions. However, if the private key is stored on a user's smart card, it may be significantly less vulnerable to attack and compromise. Security experts generally agree that PKI technology is most effective when deployed in conjunction with smart cards. In addition to enhancing security, smart cards have the flexibility to support a wide variety of uses not related to security. A typical smart card in use today can store and process 16 to 32 kilobytes of data, while newer cards can accommodate 64 kilobytes. The larger the card's electronic memory, the more functions can be supported, such as tracking itineraries for travelers, linking to immunization or other medical records, or storing cash value for electronic purchases. Other media—such as magnetic stripes, bar codes, and optical memory (laser-readable) stripes—can be added to smart cards to support interactions with existing systems and services or provide additional storage capacity. For example, an agency that has been using magnetic stripe cards for access to certain facilities could migrate to smart cards that would work with both its existing magnetic stripe readers and new smart card readers. Of course, the functions provided by the card's magnetic stripe, which cannot process transactions, would be much more limited than those supported by the card's integrated circuit chip. Optical memory stripes (which are similar to the technology used in commercial compact discs) can be used to equip a card with a large memory capacity for storing more extensive data—such as color photos, multiple fingerprint images, or other digitized images—and making that card and its stored data very difficult to counterfeit. Smart cards are grouped into two major classes: contact cards and "contactless" cards. Contact cards have gold-plated contacts that connect directly with the read/write heads of a smart card reader when the card is inserted into the device. Contactless cards contain an embedded antenna and work when the card is waved within the magnetic field of a card reader or terminal. Contactless cards are better suited for environments where quick interaction between the card and reader is required, such as high-volume physical access. For example, the Washington Metropolitan Area Transit Authority has deployed an automated fare collection system using contactless smart cards as a way of speeding patrons' access to the Washington, D.C., subway system. Smart cards can be configured to include both contact and contactless capabilities, but two separate interfaces are needed, because standards for the technologies are very different. Figure 2 shows some of the capabilities and features that can be included in smart cards. 
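To illustrate the signing role that a card-held private key plays in a PKI, as described at the start of this passage, here is a minimal sketch using the third-party Python cryptography package. On a real smart card the private key is generated and used inside the card's chip and never leaves it; the in-memory key below stands in for the card-held key purely for illustration.

```python
# Illustrative PKI signing and verification with an RSA key pair.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Stand-in for the key a smart card would generate and hold internally.
card_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = card_private_key.public_key()  # distributed via a certificate in a PKI

message = b"Approve electronic transaction #1234"
signature = card_private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The relying party verifies with the public key, gaining integrity and
# non-repudiation: only the holder of the private key could have signed.
try:
    public_key.verify(
        signature, message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```

Keeping the private key on the card rather than on the computer is what narrows the attack surface: malware on the host can ask the card to sign, but it cannot copy the key itself.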
Since the 1990s, the federal government has considered the use of smart card technology as one option for electronically improving security over buildings and computer systems. In 1996, GSA was tasked with taking the lead in facilitating a coordinated interagency management approach for the adoption of multiapplication smart cards across government. The tasking came from OMB, which has statutory responsibility to develop and oversee policies, principles, standards, and guidelines used by agencies for ensuring the security of federal information and systems. At the time, OMB envisioned broad adoption of smart card technology throughout the government, as evidenced by the President's budget for fiscal year 1998, which set a goal of enabling every federal employee ultimately to be able to use one smart card for a wide range of purposes, including travel, small purchases, and building access. In January 1998, the President's Management Council and the Electronic Processes Initiatives Committee (EPIC) established an implementation plan for smart cards that called for a governmentwide, multiapplication card that would support a range of functions—including controlling access to government buildings—and operate as part of a standardized system. More recently, several legislative bills have been proposed or enacted in the wake of the events of September 11, 2001, to enhance national security and counterterrorism by using smart card and biometric technologies to better identify individuals entering the country or gaining access to mass transportation systems. Our objectives were to assess (1) the extent to which federal agencies have adopted smart card technologies and realized the associated benefits, (2) the challenges of adopting smart cards within federal agencies, and (3) the effectiveness of GSA in promoting the adoption of smart card technologies within the federal government. To assess the extent of smart card adoption by federal agencies and identify associated benefits and challenges, we reviewed smart card project documentation, cost estimates, and other studies from GSA; OMB; the Western Governors' Association (WGA), which was responsible for a smart card project funded in part by the Departments of Agriculture and Health and Human Services; the Department of Justice's Immigration and Naturalization Service; DOD; and the Departments of the Interior, Transportation, Treasury, and Veterans Affairs (VA). We also held discussions with key officials from these organizations regarding project benefits and challenges, as well as with representatives of the Smart Card Alliance, an association of smart card technology vendors. In addition, we reviewed publicly available materials and reports on smart card technology and discussed key issues with representatives of these organizations. To assess GSA's effectiveness in promoting the governmentwide adoption of smart cards, we reviewed contract task orders, examined pilot project documentation, and assessed smart card plans and other reports obtained from the agency. We also held discussions with key officials in GSA's Office of Governmentwide Policy, Federal Technology Service, and Public Building Service to obtain information on internal pilot projects and other key plans and documents. We analyzed reports and evaluations on the smart card program obtained from GSA's Office of Inspector General.
To obtain information on whether GSA had taken an effective leadership role in fostering the adoption of smart card technology across government, we interviewed officials from NIST; DOD; VA; the Departments of Interior, Transportation, and Treasury; and OMB. We also interviewed officials from WGA. We performed our work between April and October 2002 in accordance with generally accepted government auditing standards. Since 1998, multiple smart card projects have been launched, addressing an array of capabilities and providing many tangible and intangible benefits, such as ways to better authenticate the identity of cardholders, increase security over buildings, safeguard computer systems and data, and conduct financial and nonfinancial transactions more accurately and efficiently. For some federal agencies, the benefits of using smart card technology (such as improving security over federal buildings and systems and achieving other business-related purposes) have only recently been recognized, and many agencies are still planning projects or evaluating the benefits of this technology before proceeding with more wide-scale initiatives. Still, results from several ongoing smart card projects suggest that the technology offers federal agencies a variety of benefits. According to information obtained from GSA, OMB, and other federal agencies, as of November 2002, 18 federal agencies were planning, testing, operating, or completing a total of 62 smart card projects. These projects varied widely in size and technical complexity, ranging from small-scale, limited-duration pilot projects to large-scale, agencywide initiatives providing multiple services. The projects were reported to be in varying stages of deployment. Specifically, 13 projects were in the planning stage, and 7 were being piloted. An additional 17 projects were listed as operational, and 13 had been completed. No information was provided about the project phase of the remaining 12 initiatives; it is not clear whether these projects had moved beyond the planning or pilot testing phases. Figure 3 shows the status of the 62 federal smart card projects identified by GSA and OMB. Table 1 provides additional summary information about these projects. Many pilot projects initiated in the late 1990s deployed smart cards for specific, limited purposes in order to demonstrate the usefulness of the technology. For example, GSA distributed smart cards to approximately 3,000 staff and visitors at the 1997 presidential inauguration to control physical access to that event. The cards contained information that granted individuals access to specific event activities and allowed security personnel to monitor movements within the event’s headquarters facility as well as maintain records on those entering secure areas. Likewise, many smart card pilot projects were implemented by the military services to demonstrate the technology’s usefulness in enhancing specific business operations, such as creating electronic manifests to help deploy military personnel more efficiently, managing medical records for military personnel, and providing electronic cash to purchase goods and food services at remote locations. Officials at military bases and installations participating in these pilots reported that smart cards significantly reduced the processing time required for deploying military personnel—from several days to just a few hours. Recently, broader and more permanent projects have begun. 
Among federal agencies, DOD has made a substantial investment in developing and implementing an agencywide smart card system. DOD's CAC is to be used to authenticate the identity of nearly 4 million military and civilian personnel and to improve security over on-line systems and transactions. The cards are being deployed in tandem with the rollout of a departmentwide PKI. As of November 2002, DOD had issued approximately 1.4 million CACs to military and civilian personnel and had purchased card readers and middleware for about 1 million of its computers. More information about DOD's program appears in appendix I. The Department of Transportation is also developing two large smart card pilot projects, which will be focused on controlling access to and improving security at the nation's many transportation hubs as well as at federal facilities controlled by the department. One pilot aims to distribute smart cards to approximately 10,000 FAA employees and contractor personnel for access to the department's facilities. Subsequent phases will extend the cards to approximately 100,000 employees across the agency. The second pilot, in which transportation worker identification cards will be issued to about 15 million transportation workers across the United States, is intended to improve physical and logical access to public transportation facilities. Transportation plans to document results from the pilot project, including benefits and costs. Other federal agencies are now using smart cards for controlling logical access to computer systems and networks. For example, the Internal Revenue Service (IRS) distributed smart cards to approximately 30,000 of its revenue agents and officers for use when accessing the agency's network remotely through notebook computers. According to an IRS official, the cards are still in use and working well. In July 2002, the Department of the Treasury announced plans to launch a pilot project to assess the use of smart cards for multiple purposes, including both physical and logical access. Treasury plans to distribute smart cards equipped with biometrics and PKI capabilities to approximately 7,200 employees during its pilot test. Treasury's main department offices and five Treasury bureaus will be involved in the pilot test: the U.S. Secret Service; the IRS; the Bureau of Alcohol, Tobacco and Firearms; the Bureau of Engraving and Printing; and the Federal Law Enforcement Training Center. According to Treasury officials, if the smart card pilot proves successful, it will be implemented across the department. While efforts such as these represent a recent trend toward adopting agencywide smart cards for security functions, almost half (42 percent) of the projects that have been undertaken to date, as identified by GSA and OMB, involved storing either cash value on the cards for use in making small purchases or other information for use in processing electronic payment transactions, transit benefits, or agency-specific applications. Many of these projects (45 percent) used smart cards that supported a combination of media, such as magnetic stripes, bar codes, and optical memory stripes. Further, the majority (86 percent) of these non-security-oriented projects involved cards used internally, usually to support formerly paper-based functions. For example, in October 1994, the 25th Infantry Division in Hawaii was issued 30,000 smart cards configured to support medical documentation, mobility processing, manifesting, personnel accountability, health care, and food service.
In this pilot, the most notable benefit was seen in deployment readiness. The deployment process, which normally took a day or more, was reduced to a matter of hours. In another example of a stored-value card project, the Departments of Agriculture and Health and Human Services supported a project by the WGA to issue smart cards to approximately 12,000 individuals—including pregnant women, mothers, and children—who were eligible for electronic benefits transfer (EBT) programs such as the Women, Infants, and Children program, Head Start, Food Stamps, and other public health programs in three different states. The smart cards contained a circuit chip that included demographic, health, appointment, and EBT information, as well as a magnetic stripe that included Medicaid eligibility information. The smart cards also allowed grocery and retail establishments to track food purchases and rebate offers or coupon redemptions more accurately. Users helped control information stored on the card with a personal identification number and were provided with kiosks to read or view information stored on the card. According to WGA officials, the pilot was a success because participants had immediate access to health care appointment and immunization records. In addition, federal and state agencies were able to track benefits and baby formula purchases more accurately, resulting in manufacturers no longer questioning the process used by these government organizations to collect millions in rebate offers. To demonstrate that a single smart card could have many uses and provide many benefits, GSA's Federal Technology Service introduced a multipurpose smart card to its employees during a pilot project conducted in the summer of 1999. The card functioned as a property management device, boarding pass for American Airlines, credit card for travel, and stored-value calling card. The card used fingerprint biometric technology, as well as digital certificates for use in signing E-mail messages. In addition, the card contained a contactless interface—an embedded antenna—that allowed cardholders to access transit services by waving the card near a card reader to electronically pay for these services. Appendix I provides more detailed information about smart card projects at several government agencies. The benefits of smart card adoption identified by agency officials can be achieved only if key management and technical challenges are understood and met. While these challenges have slowed the adoption of smart card technology in past years, they may be less difficult in the future, not only because of increased management concerns about securing federal facilities and information systems, but also because technical advances have improved the capabilities and reduced the cost of smart card systems. Major implementation challenges include sustaining executive-level commitment; obtaining adequate resources; coordinating diverse, cross-organizational needs and transforming existing business processes; achieving interoperability among smart card systems; and maintaining security and privacy. Nearly all the officials we interviewed indicated that maintaining executive-level commitment is essential to implementing a smart card system effectively. According to officials both within DOD and in civilian agencies, the formal mandate of the Deputy Secretary of Defense to implement a uniform, common access identification card within DOD was essential to getting a project as large as the CAC initiative launched and funded.
The Deputy Secretary also assigned roles and responsibilities to the military services and agencies and established a deadline for defining smart card requirements. DOD officials noted that without such executive-level support and clear direction, the smart card initiative likely would have encountered organizational resistance and cost concerns that would have led to significant delays or cancellation. Treasury and Transportation officials also indicated that sustained high-level support had been crucial in launching smart card initiatives within their organizations and that without this support, funding for such initiatives probably would not have been available. In contrast, other federal smart card pilot projects have been canceled due to lack of executive-level support. Officials at VA indicated that their pilot VA Express smart card project, which issued cards to veterans for use in registering at VA hospitals, would probably not be expanded to full-scale implementation, largely because executive-level priorities had changed, and support for a wide-scale smart card project had not been sustained. Smart card implementation costs can be high, particularly if significant infrastructure modifications are required or other technologies, such as biometrics and PKI, are being implemented in tandem with the cards. However, in light of the benefits of better authenticating personnel, increasing security over access to buildings, safeguarding computer systems and data, and conducting financial and nonfinancial transactions more accurately and efficiently, these costs may be acceptable. Key implementation activities that can be costly include managing contractors and card suppliers, developing systems and interfaces with existing personnel or credentialing systems, installing equipment and systems to distribute the cards, and training personnel to issue and use smart cards. As a result, agency officials stated that obtaining adequate resources was critical to implementing a major government smart card system. For example, Treasury's project manager estimated the overall cost for the departmentwide effort at between $50 million and $60 million; costs for the FAA pilot project, which have not yet been fully determined, are likely to exceed $2.5 million. At least $4.2 million was required to design, develop, and implement the WGA Health Passport Project (HPP) in Nevada, North Dakota, and Wyoming and to service up to 30,000 clients. A report on that project acknowledged that it was complicated and costly to manage card issuance activities. The states encountered problems when trying to integrate legacy systems with the smart cards and had difficulty establishing accountability among different organizations for data stored on and transferred from the cards. The report further indicated that help-desk services were difficult to manage because of the number of organizations and outside retailers, as well as different systems and hardware, involved in the project; costs for this service likely would be about $200,000 annually. WGA officials said they expect costs to decrease as more clients are provided with smart cards and the technology becomes more familiar to users; they also believe smart card benefits will exceed costs over the long term. The full cost of a smart card system can also be greater than originally anticipated because of the costs of related technologies, such as PKI.
For example, DOD initially budgeted about $78 million for the CAC program in 2000 and 2001 and expected to provide the device to about 4 million military, civilian, and contract employees by 2003. It now expects to expend over $250 million by 2003—more than three times the original estimate. Many of the increases in CAC program costs were attributed by DOD officials to underestimating the costs of upgrading and managing legacy systems and processes for card issuance. Based on information provided by DOD, card issuance costs likely will exceed $75 million of the more than $250 million now provided for CAC through 2003. These costs are for installing workstations, upgrading legacy systems, and distributing cards to personnel. According to DOD program officials, the department will likely expend over $1 billion for its smart cards and PKI capabilities by 2005. In addition to the costs mentioned above, the military services and defense agencies were required to fund the purchase of over 2.5 million card readers and the middleware to make them work with existing computer applications, at a cost likely to exceed $93 million by 2003. The military services and defense agencies are also expected to provide funding to enable applications to interoperate with the PKI certificates loaded on the cards. DOD provided about $712 million to issue certificates to cardholders as part of the PKI program but provided no additional funding to enable applications. The ability of smart card systems to address both physical and logical (information systems) security means that unprecedented levels of cooperation may be required among internal organizations that often had not previously collaborated, especially physical security organizations and IT organizations. Nearly all federal officials we interviewed noted that existing security practices and procedures varied significantly across organizational entities within their agencies and that changing each of these well-established processes and attempting to integrate them across the agency was a formidable challenge. Individual bureaus and divisions often have strong reservations about supporting a departmentwide smart card initiative because it would likely result in substantial changes to existing processes for credentialing individuals, verifying those credentials when presented at building entrances, and accessing and using computer systems. DOD officials stated that it has been difficult to take advantage of the multiapplication capabilities of its CAC for these very reasons. The card is primarily being used for logical access—for helping to authenticate cardholders accessing systems and networks and for digitally signing electronic transactions using PKI. DOD only recently has begun to consider ways to use the CAC across the department to better control physical access over military facilities. Few DOD facilities are currently using the card for this purpose. DOD officials said it had been difficult to persuade personnel responsible for the physical security of military facilities to establish new processes for smart cards and biometrics and to make significant changes to existing badge systems. In addition to the gap between physical and logical security organizations, the sheer number of separate and incompatible existing systems also adds to the challenge of establishing an integrated agencywide smart card system.
One Treasury official, for example, noted that departmentwide initiatives, such as the department's planned smart card project, require the support of 14 different bureaus and services. Each of these entities has different systems and processes in place to control access to buildings, automated systems, and electronic transactions. Agreement could not always be reached on a single business process to address security requirements among these diverse entities. Interoperability is a key consideration in smart card deployment. The value of a smart card is greatly enhanced if it can be used with multiple systems at different agencies, and GSA has reported that virtually all agencies agree that interoperability at some level is critical to widespread adoption of smart cards across the government. However, achieving interoperability has been difficult because smart card products and systems developed in the past have generally been incompatible in all but very rudimentary ways. With varying products available from many vendors, there has been no obvious choice for an interoperability standard. GSA considered the achievement of interoperability across card systems to be one of its main priorities in developing its governmentwide Smart Access Common ID Card contract. Accordingly, GSA designed the contract to require awardees to work with GSA and NIST to develop a government interoperability specification. The specification, as it currently stands, includes an architectural model, interface specifications, conformance testing requirements, and data models. A key feature of the specification is that it addresses aspects of smart card operations not covered by commercial standards. Specifically, the specification defines a uniform set of command and response messages for smart cards to use in communicating with card readers. Vendors can meet the specification by writing software for their cards that translates their unique command and response formats to the government standard. Such a specification previously had not been available. According to NIST officials, the first version of the interoperability specification, completed in August 2000, did not include sufficient detail to establish interoperability among vendors' disparate smart card products. The officials stated that this occurred because representatives from NIST, the contractors, and other federal agencies had only a very limited time to develop the first version. Version 2, released in June 2002, is a significant improvement, providing better definitions of many details, such as how smart cards should exchange information with software applications and card readers. The revised specification also supports DOD's CAC data model in addition to the common data model developed for the original specification. However, it may take some time before smart card products that meet the requirements of version 2 are made available, because the contractors and vendors (under the Smart Access Common ID contract) will have to update or redesign their products to meet the enhanced specification. Further, potential interoperability issues may arise for those agencies that purchased and deployed smart card products based on the original specification. While version 2 addressed important aspects of establishing interoperability among different vendors' smart card systems, other aspects remain unaddressed.
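Before turning to those unaddressed aspects, the translation approach just described can be made concrete with a brief sketch. The command names and instruction bytes below are invented for illustration and are not drawn from the actual specification; the fragment shows only how a thin, vendor-written layer might map a fixed government command set onto a card's native instructions.

    # Invented command names and instruction bytes; illustrative only.
    GOVERNMENT_COMMANDS = {"SELECT_CONTAINER", "READ_DATA", "VERIFY_PIN"}

    class VendorXTranslator:
        # Hypothetical mapping from the standard command set to one
        # vendor's native instruction bytes.
        _NATIVE = {
            "SELECT_CONTAINER": 0xA4,
            "READ_DATA": 0xB0,
            "VERIFY_PIN": 0x20,
        }

        def to_native(self, command: str, payload: bytes) -> bytes:
            if command not in GOVERNMENT_COMMANDS:
                raise ValueError("not a standard command: " + command)
            # Prepend the vendor's instruction byte to form the message
            # actually sent to the card.
            return bytes([self._NATIVE[command]]) + payload

    translator = VendorXTranslator()
    message = translator.to_native("VERIFY_PIN", b"1234")

Because each vendor supplies such a layer for the same government command set, applications written to the standard can, in principle, work with any compliant card.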
Among the unaddressed aspects, for example, is the "basic services interface," for which the version 2 specification provides just 21 common functions, such as establishing and terminating a logical connection with the card in a specified reader. Other fundamental functions—such as changing personal ID numbers and registering cards when they are issued to users—are not included in the basic services interface. For such functions, vendors must use what are known as "extended service interfaces." Because vendors are free to create their own unique definitions for extended service interfaces and associated software, interoperability problems may occur if interface designs or software programs are incompatible. NIST officials stated that, at the time the specification was finalized, it was not possible to define a standard for the functions not included in the basic services interface because existing commercial products varied too widely. According to the NIST officials, greater convergence is needed among smart card vendors' products before agreement can be reached on standards for all important card functions—including changing passwords or personal identification numbers—as part of extended service interfaces. In addition, the guidelines do not address interoperability for important technologies such as contactless smart cards, biometrics, and optical memory stripes. GSA and NIST officials indicated that federal agencies are interested in adopting contactless and biometric technologies but that more needs to be done to evaluate the technologies and develop a standard architectural model to ensure interoperability across government. The government has not yet adopted industry-developed contactless and biometric standards, which are generally not extensive enough to ensure interoperability among commercial products from different vendors. According to one NIST official, a thorough risk assessment of optical stripe technology needs to be conducted first, because the security issues for a "passive" technology such as optical stripes are different from those of "active" chip-based smart cards. Although there is no work under way to include optical stripe technology as an option within the Government Smart Card Interoperability Specification, the guidance does not preclude the use of this technology. Although concerns about security are a key driver for the adoption of smart card technology in the federal government, the security of smart card systems is not foolproof and must be addressed when agencies plan the implementation of a smart card system. As discussed in the background section of this report, smart cards can offer significantly enhanced control over access to buildings and systems, particularly when used in combination with other advanced technologies, such as PKI and biometrics. Although smart card systems are generally much harder to attack than traditional ID cards and password-protected systems, they are not invulnerable. In order to obtain the improved security services that smart cards offer, care must be taken to ensure that the cards and their supporting systems do not pose unacceptable security risks. Smart card systems generally include a variety of features designed to thwart attack. For example, cards are assigned unique serial numbers to counter unauthorized duplication and contain integrated circuit chips that are resistant to tampering so that their information cannot be easily extracted and used.
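The split between the basic and extended service interfaces described above can be outlined in the same spirit. In the hypothetical fragment below, the class and method names are invented; the point is only that functions in the basic interface are common to every vendor, while each vendor defines its own extended interface.

    class BasicServices:
        # Functions standardized for all vendors (21 in version 2).
        def connect(self, reader_name: str) -> None: ...
        def disconnect(self) -> None: ...

    class VendorXExtendedServices(BasicServices):
        # Functions left to each vendor's own design; another vendor may
        # expose the same capability under a different name and signature.
        def change_pin(self, old_pin: str, new_pin: str) -> bool: ...
        def register_card(self, holder_record: dict) -> str: ...

An application that calls change_pin through one vendor's extended interface would need rework to run against another vendor's cards, which is how functions outside the basic interface give rise to the interoperability problems noted above.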
Despite such protective features, security experts point out that because a smart-card-based system involves many different discrete elements that cannot be physically controlled at all times by an organization's security personnel, there is at least a theoretically greater opportunity for malfeasance than would exist for a more self-contained system. In fact, a smart-card-based system involves many parties and components (the cardholder, data owner, card issuer, card manufacturer, software manufacturer, and attached computing devices), any of which potentially could pose threats to the system. For example, researchers have found ways to circumvent security measures and extract information from smart cards, and an individual cardholder could be motivated to attack his or her card in order to access and modify the stored data on the card—perhaps to change personal information or increase the cash value that may be stored on the card. Further, smart cards are connected to computing devices (such as agency networks, desktop and laptop computers, and automatic teller machines) through card readers that control the flow of data to and from the smart card. Attacks mounted on either the card readers or any of the attached computing systems could compromise the safeguards that are the goals of implementing a smart card system. Smart cards used to support multiple applications may introduce additional risks to the system. For example, if adequate care is not taken in designing and testing each software application, loading new applications onto existing cards could compromise the security of the other applications already stored on the cards. In general, the security of a multiapplication card can be harder to guarantee because it is difficult to determine which application is running inside the card at any given time. If an application runs at an unauthorized time, it could gain unauthorized access to data intended only for other applications. As with any information system, the threats to a smart card system must be analyzed thoroughly and adequate measures developed to address potential vulnerabilities. Our 1998 report on effective security management practices used by leading public and private organizations and a companion report on risk-based security approaches identified key principles that can be used to establish a management framework for an effective information security program. In addition, the National Security Agency's draft guidelines for placing biometrics in smart cards include steps that could be taken to help protect information in smart card systems, such as encrypting all private keys stored in the smart card and defining a system security policy with a user certification process before access to the system is granted. In addition to security, privacy is a growing concern and must be addressed with regard to the personal information contained on smart cards. Once in place, smart-card-based systems designed simply to control access to facilities and systems could also be used to track the day-to-day activities of individuals, potentially compromising their privacy. Further, smart-card-based systems could be used to aggregate sensitive information about individuals for purposes other than those prompting the initial collection of the information, which could compromise privacy.
The Privacy Act of 1974 requires the federal government to restrict the disclosure of personally identifiable records maintained by federal agencies, while permitting individuals access to their own records and the right to seek amendment of agency records that are inaccurate, irrelevant, untimely, or incomplete. Accordingly, agency officials need to assess and plan for appropriate privacy measures when implementing smart card systems. Officials with the WGA indicated that, to address privacy concerns, participants in the HPP were made aware of the information that would be stored on their cards. Kiosks were installed in some grocery stores to encourage individuals to view the information stored on the cards. Similarly, GSA officials provided employees access to information stored on their headquarters ID cards and said they received few complaints about the cards. While individuals involved in these projects had few concerns, others may require more assurances about the information stored on smart cards and how government agencies will use and share data. GSA, NIST, and other agency officials indicated that security and privacy issues are challenging because governmentwide policies have not yet been established and widespread use of the technology has not yet occurred. As smart card projects evolve and are used more frequently, especially by citizens, agencies are increasingly likely to need policy guidance to ensure consistent and appropriate implementation. GSA's efforts to promote smart card technology in the federal government have focused on coordination and contracting-related activities. The agency has taken several useful actions to organize federal smart card managers and coordinate planning for the technology. Its chief contribution has been to make it easier for federal agencies to acquire commercial smart card products by implementing a governmentwide contracting mechanism based on a standard developed in collaboration with NIST and smart card vendors. However, GSA has been less successful in other areas that are also important for promoting adoption of smart cards. For example, officials from other federal agencies indicated that GSA's effectiveness at demonstrating the technology's readiness for deployment was limited by its lack of success in implementing smart cards internally or developing a consistent agencywide position on the adoption of smart cards. Further, the agency has not kept its implementation strategy or administrative guidelines up to date, nor has it established standards for the use of smart cards as a component of federal building security processes. Finally, GSA has not developed a framework for evaluating smart card implementations to help agencies reduce risks and contain costs. GSA has advanced federal adoption of smart card technology by addressing many of the major tasks outlined in the 1998 EPIC plan—which called for a standard governmentwide, multipurpose smart card system—and by developing its own smart card plan. In response to OMB's 1996 tasking that GSA take the lead in promoting federal adoption of smart cards, the agency first established a technology office to support its smart card initiative and work with the President's Management Council on deploying the technology across government. Beginning in 1998, GSA took steps to address tasks identified in the EPIC plan and its own plan, many of which required the collaboration and support of multiple agencies.
For example, GSA worked with the Department of the Navy to establish a technology demonstration center to showcase smart card technology and applications and established a smart card project managers' group and the Government Smart Card Interagency Advisory Board (GSC-IAB). The agency also established an interagency team to plan for uniform federal access procedures, digital signatures, and other transactions, and to develop federal smart card interoperability and security guidelines. GSA's Office of Governmentwide Policy was similarly established to better coordinate and define governmentwide electronic policies and technology standards in collaboration with other federal agencies and stakeholders. For many federal agencies, GSA's chief contribution to promoting federal adoption of smart cards was its effort in 2000 to develop a standard contracting vehicle for use by federal agencies in procuring commercial smart card products from vendors. Under the terms of the contract, GSA, NIST, and the contract's awardees worked together to develop smart card interoperability guidelines—including an architectural model, interface definitions, and standard data elements—that were intended to guarantee that all the products made available through the contract would be capable of working together. Major federal smart card projects, including DOD's CAC and Transportation's planned departmentwide smart card, have used or are planning to use the GSA contract vehicle. GSA's achievements in promoting the federal adoption of smart card technology can be gauged by the progress it has made in addressing tasks laid out in the EPIC plan and its own smart card plan. Table 2, which provides more detailed information on major tasks from the EPIC and GSA plans and their current status, shows that GSA has taken steps to address many of these tasks. Although GSA accomplished many of the tasks for promoting smart card adoption that were planned in 1998, many additional activities essential to advancing the adoption of smart cards across government still need to be addressed. Evolving federal security needs and steady advances in smart card technology mean that federal agency needs likely have changed since 1998. For example, in the wake of the events of September 11, 2001, increased management attention has been paid to security, both in controlling access to federal buildings and in protecting information systems. At the same time, advances in smart card technology have led to commercial products that are both cheaper and more capable, potentially altering cost/benefit calculations that agencies may have made in the past. Thus far, OMB has not issued any further policy or guidance related to smart card technology, although it was in the process of identifying and examining smart card technology issues at the time of our review. In light of factors that have arisen or changed since GSA's smart card promotion objectives were set in 1998, we identified the following four specific issues that have not been addressed by GSA: Showing leadership by successfully adopting smart cards internally. A key element of effectively promoting the adoption of a new technology such as smart cards is to demonstrate the technology's effectiveness in an operational setting by successfully undertaking well-coordinated pilot projects that showcase the technology's benefits. One of the objectives in GSA's 1998 smart card plan was to lead by example in implementing and showcasing smart cards.
Yet GSA’s pilot projects have generally not allowed the agency to lead by example. According to a report completed by GSA’s Office of Inspector General (OIG) in September 2000, there has been “no continued centralized management or direction of GSA’s internal smart card implementation, nor any coordination and monitoring of pilots.” For example, the OIG reported that some of GSA’s projects lacked management support and adequate funding, resulting in delays and partially completed project tasks. In terms of coordination, GSA has been unable to develop and implement a strategy to deploy smart card technology in a standard manner across the agency. For example, two divisions within GSA, the Federal Supply Service and the Public Building Service, while operating in areas where smart cards have a known benefit, did not use GSA’s standard governmentwide contracting vehicle, which requires adherence to the government smart card interoperability specification. In addition, draft guidance on implementing a standard smart-card-based identification system across GSA was not prepared until April 2002 and is still incomplete and unapproved. Officials at three federal agencies, actively engaged in developing their own smart card systems, said that GSA’s internal track record for implementation had raised doubts about its ability to promote smart cards governmentwide. A Department of the Interior official stated that GSA had not been successful in building a business case for smart card adoption, and that, as a result, the Public Building Service was not supporting the Federal Technology Service’s efforts to implement smart card technology at government facilities, causing problems for tenant agencies looking to move to smart-card-based systems. Similarly, a DOD official stated that GSA did not have the expertise to successfully implement smart cards or assist others attempting to do so because it lacked practical experience deploying the technology internally and working collaboratively with different organizations on management and technical issues. Maintaining an up-to-date implementation strategy and smart card guidelines. GSA’s implementation strategy for smart cards consists of the plan it prepared in 1998 as well as the EPIC plan, also developed in 1998. Neither addresses recent issues related to smart card implementation, such as advances in smart card technology or increased federal security concerns since the attacks of September 11, 2001. In 2002, GSA began to survey federal agencies, through the GSC-IAB, on smart card implementation issues they were experiencing. According to GSA officials, the GSC-IAB survey will provide input to the agency that can be used to update its agenda for promoting federal smart card adoption. However, GSA has not yet committed to developing a new planning document with revised objectives and milestones. GSA also has not updated its smart card administrative guidelines since 2000. In October 2000, GSA issued its guidelines for implementing smart cards in federal agencies. 
GSA developed the guidelines "to provide step-by-step guidance for those agencies wishing to utilize the Smart Identification Card contract vehicle to procure and implement an interoperable employee identification card." Although the stated purpose of this document was to complement the Smart Identification Card contract, the section discussing standards and specifications does not refer to the government smart card interoperability specification recently developed by GSA and NIST, nor does it provide explicit guidance on using the interoperability specification or other critical technologies, such as contactless cards and biometrics. Coordinating the adoption of standard federal building security processes. GSA has not taken action to develop and coordinate standard procedures for federal building security, which would help agencies implement smart-card-based ID systems in a consistent and effective manner. GSA is responsible for managing security at over 7,300 federal facilities with widely varying security needs. In 1999, several internal GSA organizations—including the Office of Governmentwide Policy, the Federal Technology Service, the Federal Supply Service, and the Public Building Service—proposed working together to develop a standard approach for federal building security using smart card technology. However, this proposal has not been adopted, nor has any alternative strategy been developed for deploying smart card technology at federal facilities. Officials in the Federal Technology Service and the Public Building Service said that they intended to work together to develop a strategy for smart card use at federal facilities, but they have not yet begun to do so. Although not part of a concerted standards-setting process, the Federal Technology Service's recently launched pilot smart card project could serve in the future as a basis for a federal building security standard. The pilot involves upgrading and standardizing building security systems at three government facilities in Chicago, Illinois. The project is based on smart cards with biometric capabilities to identify employees entering these facilities. At least three federal agencies are expected to participate in the project, and its costs have been estimated to range between $450,000 and $500,000. If the project is successful, it may serve as an example for other federal agencies interested in using smart card technology for their building security processes. Evaluating projects to reduce implementation risks and costs. Although GSA has developed administrative and business case guidelines to help agencies identify smart card benefits and costs and has established the smart card program managers' group and the GSC-IAB to discuss project issues, it has not established a framework for evaluating smart card projects to help agencies minimize implementation costs and risks and achieve security improvements. In September 2000, the GSA OIG reported that measurable standards were needed to assess smart card projects and help GSA lead the smart card program. It also suggested that more information and lessons learned from smart card pilot projects were needed to make improvements in the federal smart card program and to better ensure success. GSA agreed with the issues identified by the OIG but has not yet taken action to address the recommendations cited in the report.
Officials from other agencies indicated that more information is needed on smart card implementation costs and opportunities for cost savings to help agencies make a business case for the technology and to address implementation challenges. According to one agency official, more information sharing is needed on smart card implementation strategies that work and that help reduce project management costs and problems with software and hardware implementation. Agency officials also indicated that measures are needed to determine whether smart cards are working as intended to improve security over federal buildings, computer systems, and critical information, as called for by the President's Management Agenda and the Office of Homeland Security. GSA officials indicated that many of these issues likely would be addressed by the GSC-IAB at some later date but that no specific milestones for doing so had been set. Progress has been made in implementing smart card technology across government, with increasingly ambitious projects, such as DOD's CAC, being initiated in recent years as federal managers focus on implementing smart cards to enhance security across organizations. To successfully implement smart-card-based systems, agency managers have faced a number of substantial challenges, including sustaining executive-level commitment, obtaining adequate resources, integrating physical and logical security practices, achieving interoperability among smart card systems, and maintaining system security and privacy of personal information. As both technology and management priorities evolve, these challenges may be becoming less formidable, particularly with the increased priority now being placed on heightened security practices to better maintain homeland security. Further, the interoperability challenge may be significantly reduced as continuing efforts are made to increase the scope and usefulness of the government smart card interoperability specification. However, without overall guidance and budgetary direction from OMB, agencies may be unnecessarily reluctant to take advantage of the potential of smart cards to enhance security and other agency operations. Although OMB has statutory responsibility to develop and oversee policies, standards, and guidelines used by agencies for ensuring the security of federal information and systems, it has not issued any guidance or policy on governmentwide adoption of smart cards since 1996, when it designated GSA the lead for promoting federal adoption of the technology. GSA continues to play an important role in assisting agencies as they assess the potential of smart cards and move to implement them. GSA has already provided important technical and management support by developing the Smart Access Common ID contract vehicle, supporting NIST's development of the government smart card interoperability specification, and setting up the GSC-IAB. However, GSA has not taken all the steps it could have to provide full support to agencies contemplating the adoption of smart cards. Its implementation strategy and administrative guidance have not been kept up to date and do not address current priorities and technological advances. Nor has GSA adopted building security standards or developed an evaluation process that addresses the implementation of smart-card-based systems. If such tasks were addressed, federal agency IT managers would face fewer risks in deciding how and under what circumstances to implement smart-card-based systems.
We recommend that the Director, OMB, issue governmentwide policy guidance regarding adoption of smart cards for secure access to physical and logical assets. In preparing this guidance, OMB should seek input from all federal agencies that may be affected by the guidance, with particular emphasis on agencies with smart card expertise, including GSA, the GSC-IAB, and NIST. We recommend that the Director, NIST, continue to improve and update the government smart card interoperability specification by addressing governmentwide standards for additional technologies—such as contactless cards, biometrics, and optical stripe media—as well as integration with PKI, to ensure broad interoperability among federal agency systems. We recommend that the Administrator, GSA, improve the effectiveness of its promotion of smart card technologies within the federal government by developing an internal implementation strategy with specific goals and milestones to ensure that GSA's internal organizations support and implement smart card systems, based on internal guidelines drafted in 2002, to provide better service and set an example for other federal agencies; updating its governmentwide implementation strategy and administrative guidance on implementing smart card systems to address current security priorities, including minimum security standards for federal facilities, computer systems, and data across the government; establishing guidelines for federal building security that address the role of smart card technology; and developing a process for conducting ongoing evaluations of the implementation of smart-card-based systems by federal agencies to ensure that lessons learned and best practices are shared across government. We received written comments on a draft of this report from the Secretary of Commerce and DOD's Deputy Chief Information Officer. We also received oral comments from officials of OMB's Office of Information and Regulatory Affairs, including the Information Policy and Technology Branch Chief; from the Commissioner of the Immigration and Naturalization Service; from GSA's Associate Administrator for the Office of Governmentwide Policy; and from officials representing FAA, the Maritime Administration, the Transportation Security Administration, and the Chief Information Officer of the Department of Transportation. All the agency officials who commented generally agreed with our findings and recommendations. In addition, Commerce commented that a governmentwide smart card program was needed and that a central activity should be created to manage and fund such an initiative. However, we believe that, with sufficient policy guidance and standards to ensure broad interoperability among agency systems, agencies can effectively develop smart card programs tailored to their individual needs that also meet minimum requirements for governmentwide interoperability. DOD commented that NIST should be tasked with taking the lead in developing and maintaining interoperability standards for smart cards and biometrics. DOD also stressed the importance of biometric technology interoperability with smart cards in support of the adoption of a single set of authenticating credentials for governmentwide use. Finally, DOD commented that the use of smart card technology for federal building security should be strengthened. We believe our recommendations are consistent with the department's comments.
GSA noted that significant work had gone into developing smart card technology and provided additional details about activities it has undertaken that are related to our recommendations. In addition, each agency provided technical comments, which have been addressed where appropriate in the final report. Unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Minority Member, Subcommittee on Technology and Procurement Policy, Committee on Government Reform, and other interested congressional committees. We will also send copies to the Director, OMB; the Director, NIST; and the Administrator, GSA. Copies will be made available to others upon request. This report will also be available at no charge on our home page at http://www.gao.gov. If you have any questions concerning this report, please call me at (202) 512-6240 or send E-mail to koontzl@gao.gov. Other major contributors included Barbara Collier, Jamey Collins, John de Ferrari, Steven Law, Freda Paintsil, and Yvonne Vigil. As part of our review, we examined smart card projects managed by the Departments of Defense (DOD), Interior, Transportation, Treasury, and Veterans Affairs (VA), as well as the Immigration and Naturalization Service (INS) and the Western Governors' Association (WGA). These projects supported a variety of applications and used or considered smart card technology to improve logical and physical controls over systems and facilities, as well as to store information for other purposes, such as conducting financial transactions. The following provides more information on these projects.

Department of Defense

In 1999, the Deputy Secretary of Defense issued a policy directive that called for the implementation of a standard smart-card-based identification system for all active duty military personnel, DOD civilian employees, and eligible contractor personnel, to be called the Common Access Card (CAC) program. The directive assigned the department's Chief Information Officer overall responsibility to develop departmentwide smart card policy and conduct oversight of the program. Further, the Department of the Navy was made responsible for developing departmentwide interoperability standards for using smart card technology, and the National Security Agency was given the lead for developing a departmentwide public key infrastructure (PKI) program to be integrated with the CAC. In October 2000, Defense began initial rollout, with plans to distribute cards to approximately 4 million individuals across the department by 2003. The CAC is equipped with a 32-kilobyte chip formatted in a standard manner to ensure interoperability among the military services and defense agencies. It also includes a set of PKI credentials, including an encryption key, signing key, and digital certificate. To obtain a CAC, individuals must produce multiple forms of identification. DOD's PKI-enabled computer systems then examine the digital certificate produced by a user's card to determine whether the cardholder is granted access to specific DOD systems. DOD is working to adapt its E-mail systems to work with PKI to better ensure that electronic messages are accessible only by designated recipients. In addition, according to DOD, cardholders will be able in the future to electronically sign travel vouchers using the digital certificates on their cards.
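The kind of certificate examination described above can be sketched simply. The fragment below is a simplified illustration, not DOD's implementation: it omits the signature verification, revocation checking, and certificate-chain validation that a real PKI deployment performs, and the issuer name is invented.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Certificate:
        subject: str      # the cardholder the certificate was issued to
        issuer: str       # the authority that issued the certificate
        not_before: date
        not_after: date

    TRUSTED_ISSUERS = {"EXAMPLE-AGENCY-CA"}  # invented issuer name

    def grant_access(cert: Certificate, claimed_user: str, today: date) -> bool:
        if cert.issuer not in TRUSTED_ISSUERS:
            return False  # not issued by a trusted authority
        if not (cert.not_before <= today <= cert.not_after):
            return False  # outside the certificate's validity period
        return cert.subject == claimed_user

    cert = Certificate("jsmith", "EXAMPLE-AGENCY-CA",
                       date(2002, 1, 1), date(2005, 1, 1))
    assert grant_access(cert, "jsmith", date(2002, 11, 15))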
In the future, DOD plans to add biometrics and other advanced capabilities to the CAC. Biometric data will be stored on the card and could include fingerprints, palm prints, iris scans, or facial features. To store these data, the amount of memory on the card would be doubled from 32 kilobytes to 64 kilobytes. DOD also plans to improve physical security controls over installations and bases by adding a contactless chip to the CAC to avoid delays when military personnel enter facilities. In January 2002, the Department of the Interior's Bureau of Land Management (BLM) launched a smart card pilot project to help improve security over its sites and employees. The bureau has 164 major sites and approximately 13,000 full- and part-time employees, including contractors. About 1,100 employees were given smart cards to provide personal identification and improve safeguards at pilot sites in Nevada and Arizona. The pilot's goal was to demonstrate the feasibility and interoperability of smart cards and to communicate their potential to employees throughout the bureau. In addition to distributing 1,000 more smart cards to bureau employees by November 2002, the bureau expects to equip about 1,000 of the existing cards with PKI certificates to be used with PKI-enabled software applications to improve security over systems and electronic transactions. According to bureau officials, the project has been a success, and the bureau plans to continue the rollout of smart cards to remaining employees. The bureauwide rollout is scheduled to begin in January 2003. The total estimated cost of the effort is $5.8 million, and according to the bureau's business case, this effort will break even in 2004. This estimate includes all contract, labor, software, hardware, and maintenance costs over a 5-year life cycle. The full implementation of the smart card system is expected to eliminate redundant administrative processes for personal identification and open up opportunities for additional applications by establishing digital certificates for creating digital signatures. All new and future building locations are to be equipped with the smart card technology necessary to support this effort, and many existing sites are being upgraded. BLM has reported experiencing a 70 percent drop in the cost of physical access systems since the cards' initial deployment. In one of the pilot locations, all processes are to be outsourced (except for human resources, physical access, and security officer functions), with bureau employees making all policy and business decisions. The Department of Transportation currently has two large smart card projects targeted for deployment. In the first pilot, the Federal Aviation Administration (FAA) plans to distribute smart cards internally to approximately 10,000 employees and on-site contractor support personnel, primarily to secure physical access to the agency's facilities. Recently, the FAA released a request for proposals outlining minimum requirements for smart card credentials. The agency plans to procure smart cards through the General Services Administration (GSA) Smart Access Common ID contract and will apply GSA's interoperability specification. The card is planned to be a Java-based hybrid (contact and contactless) card, containing a 32-kilobyte chip as well as a magnetic stripe and bar code. The card is also expected to feature a biometric for enhanced authentication, most likely fingerprint data.
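A hybrid card of the kind FAA describes can be summarized as a simple configuration record, as in the sketch below; the field names and structure are invented for illustration and do not reflect FAA's actual card design.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class CardProfile:
        platform: str                 # card operating platform
        chip_kilobytes: int           # integrated circuit chip capacity
        interfaces: List[str] = field(default_factory=list)
        legacy_media: List[str] = field(default_factory=list)
        biometric: Optional[str] = None

    # Profile matching the pilot card described in the text.
    faa_pilot_card = CardProfile(
        platform="Java Card",
        chip_kilobytes=32,
        interfaces=["contact", "contactless"],
        legacy_media=["magnetic stripe", "bar code"],
        biometric="fingerprint",
    )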
The second pilot is being managed by the Transportation Security Administration (TSA), which is scheduled to be transferred to the Department of Homeland Security on March 1, 2003. For this pilot, the TSA plans to issue smart identification (ID) cards to up to 15 million “transportation workers”—defined as any persons who require unescorted access to a secure area in any transportation venue. The pilot project will be focused on major airports, seaports, and railroad terminals and will include all modes of transportation. TSA's goal is to create a standardized, universally recognized and accepted credential for the transportation industry. Initially, the transportation worker ID will be used for obtaining physical access to transportation facilities. Subsequently, a phased approach will be used to add logical access capabilities to the card. According to agency officials, the card will be designed to address a minimum set of requirements, but it will remain flexible to support additional requirements as needed. The card will be used to verify the identity and security level of the cardholder, and local authorities will grant access in accordance with local security policies. TSA has established working groups for various aspects of system development, such as card design, identity documentation requirements, and card policy. To share costs and leverage existing resource investments, TSA is currently working with INS on its entry/exit project to use established land, air, and sea ports as checkpoints. In addition, TSA has established working relationships with industry groups, coordinated with other agencies, such as Treasury and the Federal Bureau of Investigation, and is looking to develop cost-sharing strategies for future implementations. TSA's budget for fiscal year 2003 was not determined at the time of our review, and agency officials said that the availability of funds would determine how quickly the pilot would be implemented. The pilot will likely be implemented within the next 3 years. According to one agency official, the TSA program, if implemented successfully, would likely become the largest civilian agency smart card initiative to date.
The Department of the Treasury plans to launch a proof of concept project to assess several smart card technologies for possible agencywide use for both physical and logical access. The project is being funded and managed by Treasury's Chief Information Officer Council at a cost of $2.8 million. Six Treasury organizations are participating in the pilot: the Secret Service; the Internal Revenue Service; the Bureau of Alcohol, Tobacco and Firearms; the Bureau of Engraving and Printing; the Federal Law Enforcement Training Center; and the main department. The Secret Service has been designated the lead bureau and will also lead the future departmentwide smart card project. In total, Treasury plans to issue about 10,000 smart cards. These cards are to be Java-based devices with 32 kilobytes of storage, capable of supporting multiple technologies for use in various configurations. For example, the cards will support both contact and contactless access, although not all will contain biometrics. All the cards are expected to contain PKI certificates for creating digital signatures and encrypting E-mail messages. The cards are also expected to be equipped with two-dimensional barcodes and a magnetic stripe to enable integration with existing systems.
Like DOD, Treasury plans to allocate space on the card for individual bureaus to use in creating their own applications, such as the Federal Law Enforcement Training Center's plan to use the cards when issuing uniforms to students. A Treasury official believes that using smart cards will simplify certain processes, such as property and inventory management, that are currently paper-based and labor-intensive. Information from this proof of concept project will be used to launch an agencywide smart card project. GSA's Smart Access Common ID Contract and interoperability guidelines will be used to ensure that appropriate smart card technologies are evaluated. The proof of concept is expected to last about 6 months, with the pilot ending in January 2003. At that time, a report will be completed, and a business case for an agencywide smart card solution will likely be prepared. Preliminary cost estimates for implementing a Treasury-wide smart card system, which would support around 160,000 employees, are in the range of $50 to $60 million.
In April 2001, the Department of Veterans Affairs began issuing cards for its VA Express Registration Card pilot project. Initiated in 1999, the project was to provide agency customers with a smart card carrying medical and personal information that could be used to speed up registration at VA hospitals. The card was also intended to be usable by non-VA hospitals equipped with the necessary readers to access patients' VA benefits information. At the time of our review, about 24,000 smart cards had been issued through two VA hospitals located in Milwaukee, Wisconsin, and Iron Mountain, Michigan. The cards are PKI enabled and can also be used throughout VA's network of hospitals—the majority of which do not have smart card readers—because they include all the same patient information found printed on the front of the older Veteran Identification Cards, which are still in use. The PKI capabilities of the card allow patients with a home computer and card reader to securely access their information on-line and digitally sign forms, saving time and offering convenience for both the patient and the agency. For those without Internet access, kiosks were installed at the two pilot locations, allowing Express Card holders to access their information, make any necessary changes, or request PKI certificates. The VA Express Card program used GSA's Smart Access Common ID contract for procurement and technical assistance. According to agency officials, using the Express Card gave patients in the pilot access to express registration services and reduced registration time at hospitals by 45 minutes. However, although the Express Card program is still in use, VA officials have decided not to expand beyond the two pilot locations. The reasons given were the expense of back-end automation, complications integrating the new system with legacy systems, and the lack of an existing card reader infrastructure at other VA hospitals. The agency maintains card management, support, and issuance capabilities at the pilot locations to support the smart cards that are still in use.
The Department of Justice's INS currently has a card-based project under way to control access at the nation's borders. The project includes two types of cards—Permanent Resident Cards and Border Crossing Cards (also known as “Laser Visas”). As part of the Border Crossing Cards project, INS is working with the Department of State to produce and distribute the cards.
Under the Illegal Immigration Reform and Immigrant Responsibility Act of 1996, every Border Crossing Card issued after October 1, 2001, is required to contain a biometric identifier and be machine readable. The Laser Visas will store biographical information along with a photograph of the cardholder and an image of the cardholder's fingerprints. The Permanent Resident Cards will store similar information. Information from the Laser Visas is stored in a central INS database. As of June 2002, more than five million Laser Visas and approximately six million Permanent Resident Cards had been issued. The Permanent Resident Card and Laser Visa make use of optical stripe technology, with storage capacity ranging from 1.1 megabytes to 2.8 megabytes, to store large amounts of information, but they do not contain integrated circuit chips to process data. As part of a cost-benefit analysis conducted in 1999, INS considered implementing chip-based smart cards and determined that smart card technology was not the best solution. This decision was based, in part, on the limited storage capacity of smart cards at the time. INS examined smart cards with 8 kilobytes of memory, which did not provide enough memory to store the fingerprint data required by law. Smart cards now have a storage capacity of up to 64 kilobytes and are capable of storing color photo images of individuals as well as full fingerprint images.
In June 1999, WGA launched the Health Passport Project (HPP) in three states—Nevada, North Dakota, and Wyoming—to evaluate and test a range of applications and technologies based on a common smart card platform. The project was to be conducted within an 18-month demonstration period and be integrated with other state-administered prenatal, physician care, nutrition, and early childhood education programs. Each state was expected to maintain common demographic information as well as clinical data on individuals participating in the pilot project. Selected sites also tested unique applications related to electronic benefits transfer (EBT), insurance eligibility, and health appointment information. WGA had overall responsibility for managing the HPP contract, and each state was responsible for providing on-site management, technical support, and funding as needed. The Departments of Agriculture and Health and Human Services also provided project funding and support, with GSA providing technical assistance as requested. The HPP initiative involved the distribution of 2,348 cards to individuals in Bismarck, North Dakota; 991 cards in Cheyenne, Wyoming; and 8,459 cards in Reno, Nevada. With additional state funding, the HPP initiative has continued to operate beyond the demonstration period, which concluded in December 2001. The HPP platform consists of smart cards, special card readers attached to health providers' personal computers, card readers installed at grocery or retail establishments and register systems, servers to maintain backup databases, kiosks, and a network. The health passport card contains an 8-kilobyte chip storing demographic, health, and benefit information on participants, as well as a magnetic stripe for Medicaid eligibility information. Smart card readers are used to read and write information to the card. These devices are linked to HPP workstations and to the Women, Infants, and Children EBT application, which allows benefits to be stored on the card and used at grocery and retail establishments that have card readers installed at point-of-sale register locations.
Kiosks are free-standing machines that operate by a touch screen feature and read information stored on the card. In December 2001, the Urban Institute and the Maximus consulting firm prepared a report for WGA, which reviewed the results of the HPP initiative. The report stated that HPP was successful in bringing a concept to life. HPP enabled participants to use the EBT and healthcare appointment and immunization information more effectively and conveniently, because information was stored on the card. Project participants also liked using the cards and kiosks to access their personal information, and many liked being able to electronically track appointments and health care records. In addition, retailers liked the cards and the ability to track EBT data more accurately. WGA officials further noted that HPP has helped federal and state governments maintain more accurate information on EBT distributions and baby formula purchases, which can be used to request coupon rebates from manufacturers. More accurate sales information is available and shared with manufacturers to resolve disputes over rebates and to obtain more timely refunds.
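The EBT arrangement described above, with benefits stored on the card and debited at equipped point-of-sale readers, can be sketched as follows. The benefit categories, quantities, and update logic are hypothetical illustrations, not the HPP design.

```python
# Minimal sketch of an on-card EBT balance debited at point of sale.
# Categories and amounts are hypothetical examples.
card_benefits = {"infant formula": 4, "milk (gallons)": 6}  # stored on chip

def redeem(benefits: dict, item: str, quantity: int) -> bool:
    """Debit the on-card balance if enough benefit remains; the
    register then writes the updated balance back to the card."""
    if benefits.get(item, 0) < quantity:
        return False           # insufficient benefit; decline the sale
    benefits[item] -= quantity
    return True

print(redeem(card_benefits, "infant formula", 2))  # True; 2 remain on card
print(redeem(card_benefits, "infant formula", 3))  # False; only 2 remain
```

Because the balance travels on the card itself, a sale can be authorized without a real-time connection to a central database, which is one reason card-resident benefits suited dispersed retail locations.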
Smart cards--credit-card-like devices that use integrated circuit chips to store and process data--offer a range of potential uses for the federal government, particularly in increasing security for its many physical and information assets. GAO was asked to review the use of smart cards across the federal government (including identifying potential challenges), as well as the effectiveness of the General Services Administration (GSA) in promoting government adoption of smart card technologies. Progress has been made in implementing smart card technology across government. As of November 2002, 18 federal agencies had reported initiating a total of 62 smart card projects. These projects have provided a range of benefits and services, from verifying the identity of people accessing buildings and computer systems to tracking immunization records. To successfully implement such systems, agency managers have faced a number of substantial challenges: (1) sustaining executive-level commitment in the face of organizational resistance and cost concerns; (2) obtaining adequate resources for projects that can require extensive modifications to technical infrastructures and software; (3) integrating security practices across agencies, a task requiring collaboration among separate and dissimilar internal organizations; (4) achieving smart card interoperability across the government; and (5) maintaining the security of smart card systems and the privacy of personal information. In helping agencies to overcome these challenges, not only GSA but also the Office of Management and Budget (OMB) and the National Institute of Standards and Technology (NIST) have roles to play. As the federal government's designated promoter of smart card technology, GSA assists agencies in assessing the potential of smart cards and in implementation. Although GSA has helped agencies significantly by implementing a governmentwide, standards-based contracting vehicle, it has not kept its guidance up to date or addressed important subjects in that guidance, such as building security standards. Further, OMB, which is responsible for setting policies for ensuring the security of federal information and systems, has not issued governmentwide policy on the adoption of smart cards. In its role of setting technical standards, NIST is responsible for the government smart card interoperability specification, which does not yet address significant emerging technologies. Updated guidance, policy, and standards would help agencies take advantage of the potential of smart cards to enhance security and other agency operations.
SROs are responsible for the surveillance of trading activity on their markets. Market transactions take place on electronic or floor-based platforms. SROs employ electronic surveillance systems to monitor market participants' compliance with SRO rules and federal securities laws. Electronic surveillance systems are programmed to review trading and other data for aberrational trading patterns or scenarios within defined parameters. SROs also review trading in response to complaints from the public, members, and member firms and in response to required notifications, such as those concerning offerings. One of the key surveillance systems employed by SROs monitors the markets for insider trading. We discuss SRO surveillance systems and investigatory procedures related to insider trading in more detail in appendix II. SRO staff review alerts generated by the electronic surveillance systems to identify those that warrant further investigation. When SROs find evidence of potential violations of securities laws or SRO rules involving their members, they can conduct disciplinary hearings and impose penalties. These penalties can range from disciplinary letters to monetary fines to expulsion from trading and SRO membership. SROs do not have jurisdiction over entities and individuals that are not part of their membership, and, as such, any suspected violations on the part of nonmembers are referred directly to Enforcement. SROs maintain records of their investigations and the resulting disciplinary actions as part of their internal case tracking systems. In addition, as part of their market surveillance efforts, SROs, such as NASD and NYSE, maintain databases with information on individuals and firms associated with suspicious trading activity, such as insider trading. NASD also maintains the Central Registration Depository, the securities industry's online registration and licensing database. This database makes complaint and disciplinary information about registered brokers and securities firms available to the public and, in more detailed form, to SEC, other securities regulators, and law enforcement authorities.
OCIE administers SEC's nationwide examination and inspection program. Within OCIE, the Office of Market Oversight primarily focuses on issues related to securities trading activities, with the objective of evaluating whether SRO enforcement programs and procedures are adequate for providing surveillance of the markets, investigating potential violations, and disciplining violators under SRO jurisdiction. OCIE also inspects other SRO regulatory programs, which include, among others, arbitration, listings, sales practice, and financial and operational programs. As part of the latter, OCIE coordinates the compliance inspections of NASD's district offices, which are responsible for examining broker-dealer members for compliance with SRO rules and federal securities laws. In cases where OCIE discovers potentially egregious violations of federal securities laws or SRO rules during an SRO inspection, it may refer the case to Enforcement, which is responsible for further investigating these potential violations; recommending Commission action when appropriate, either in a federal court or before an administrative law judge (ALJ); and negotiating settlements.
SEC's Market Regulation administers and executes the agency's programs relating to the structure and operation of the securities markets, which include regulation of SROs and review of their proposed rule changes.
SEC has delegated authority to Market Regulation for other aspects of SRO rulemaking as well, including the authority to publish notices of proposed rule changes and to approve proposed rule changes.
OCIE conducts both routine and special inspections of SRO regulatory programs as part of its oversight efforts. We found that the SRO inspection process generally includes a planning phase, an on-site review of SRO programs, and a written report to the SRO documenting inspection findings and recommendations that is reviewed and approved by the Commission. OCIE typically staffs inspections with a lead attorney and 2 to 6 other staff, who also work concurrently on at least 1 other SRO inspection. The number of staff dedicated to SRO inspections has fluctuated in recent years but as of September 2007 totaled 46. According to OCIE officials, inspections of SRO enforcement programs are intended to assess the design and operation of SRO enforcement programs to determine if they effectively fulfill SRO regulatory responsibilities. As part of these inspections, OCIE takes steps to assess SRO surveillance systems, reviews SRO policies and procedures for investigating potential violations and disciplining violators of rules and laws, and reviews samples of SRO case files to determine whether SRO staff were complying with the policies and procedures.
As part of its SRO oversight responsibilities, OCIE conducts both routine and special inspections of SRO regulatory programs. At regular intervals, OCIE conducts routine inspections of key regulatory programs, such as SRO enforcement, arbitration, examination, and listings programs. The inspection cycles are based on the size of the SRO market and the type of regulatory program, with key programs of larger SROs, such as NYSE and NASD, inspected every 1 to 2 years, and those of smaller regional SROs every 3 to 4 years. Inspections of enforcement programs typically include a review of SRO surveillance programs for identifying potential violations of trading rules or laws, investigating those potential violations, and disciplining those who violate the rules or laws. While OCIE sometimes conducts a comprehensive review of these programs, especially at the smaller SROs, these inspections often focus on a specific aspect of the programs, such as fixed income. We discuss OCIE's process for targeting its routine inspections later in this report. OCIE also conducts special inspections of SRO regulatory programs, as warranted. Special inspections typically originate from a tip or a need to follow up on past inspection findings and recommendations. Special inspections also can include sweep inspections, in which OCIE probes specific activities of all SROs or a sample of them to identify emerging compliance issues. According to OCIE officials, some aspect of every SRO is generally examined every year through a routine examination of a specific regulatory program or through a special inspection.
OCIE's inspection process for SROs generally includes a planning phase, an on-site review and analysis, and a final inspection report to the SRO (see fig. 1). During inspection planning, OCIE identifies the SRO program to be inspected and assigns staff who conduct initial research on the program, prepare materials for each individual inspection on the basis of the inspection's focus, and draft a planning memorandum.
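The routine inspection cycle described above reduces to a simple scheduling rule. The function name and the form of the decision rule below are assumptions for illustration; only the cycle lengths come from the report.

```python
def inspection_cycle_years(is_large_sro: bool) -> tuple[int, int]:
    """(min, max) years between routine inspections of key programs:
    every 1 to 2 years for large SROs such as NYSE and NASD, and
    every 3 to 4 years for smaller regional SROs."""
    return (1, 2) if is_large_sro else (3, 4)

print(inspection_cycle_years(True))   # (1, 2)
print(inspection_cycle_years(False))  # (3, 4)
```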
In preparation for the on-site inspection, OCIE typically sends an initial document request to the SRO, asking for general program information such as organizational charts and copies of SRO policies and procedures or, if OCIE is reviewing a surveillance program, logs of alerts and the resulting investigations. We discuss OCIE's review of enforcement programs in more detail later in this section. After reviewing the documents provided, staff plan the on-site phase of the inspection, which can include additional requests for specific documents, such as case files, to be made available for review while on-site. OCIE staff typically spend 1 week on-site interviewing SRO staff and reviewing SRO case files and other documentation. After the on-site visit, OCIE staff continue their analysis in the home office; conduct follow-up interviews or request additional documentation, as needed; and begin drafting the inspection report. Staff present their initial inspection findings and recommendations to the SRO in an exit interview and incorporate initial SRO responses into the draft inspection report. Once the report is drafted, staff circulate it to other interested SEC divisions and offices—such as the Office of General Counsel, Market Regulation, or Enforcement—for review and comment and then submit the report to the Commission for review. Following Commission consideration and authorization, staff issue a nonpublic report to the SRO and request that the SRO respond in writing within a specified time frame, typically 30 days.
According to OCIE officials, they staff SRO inspections with a lead attorney and 2 to 6 other staff reporting to an OCIE branch chief. These individuals are typically staffed concurrently on at least 1 other SRO inspection. As shown in table 1, as of September 2007, the SRO inspection group consisted of 46 staff, including 14 managers, 29 examiners, and 3 other support staff. Of the 32 examiners and support staff, 16 are dedicated to market oversight inspections. Table 1 shows that between fiscal years 2002 and 2005, SRO inspection staffing increased from 36 to 62, or 72 percent. OCIE staff said that this increase was largely due to the increase in funding SEC received as a result of the Sarbanes-Oxley Act of 2002. Since then, SRO inspection staffing has declined from 62 to 46, or 26 percent, which OCIE officials attributed to staff attrition and the inability of OCIE to hire replacements during an SEC-wide hiring freeze that occurred from May 2005 to October 2006. OCIE officials stated that despite the decrease in staff numbers, they have continued to conduct routine inspections on schedule, although the inspections may last longer than usual. Also, they said that they have not been able to do as many special inspections as they otherwise would have conducted. OCIE officials told us that the SRO inspection group recently received 6 additional professional staff positions, which it is now in the process of filling.
According to OCIE officials, inspections of SRO enforcement programs are intended to assess the design and operation of SRO enforcement programs to determine whether they effectively identify violations, enforce compliance among members, and follow their own procedures.
More specifically, OCIE officials said that when inspecting SRO surveillance programs, their objectives are to determine whether (1) the parameters of SRO electronic surveillance systems are appropriately designed to generate exceptions that identify potential instances of noncompliance with SRO rules and federal securities laws and (2) the systems are effectively detecting such activity. When reviewing SRO surveillance systems, OCIE begins by asking the SRO for copies of the exchange rules that it is required to enforce, a description of the coding behind the surveillance systems designed to monitor the markets for compliance with these rules, and logs of the alerts that these systems generated. OCIE staff then review this information to determine whether the system is appropriately designed to identify noncompliance and whether it is functioning as designed. For example, as part of one inspection, OCIE staff found that the parameters of a specific surveillance system were too restrictive, after observing that the system did not generate any alerts over the inspection period. Conversely, OCIE staff said that if, in reviewing a surveillance system, the inspection team saw that the system generated 10,000 alerts every quarter, they would follow up with the SRO to determine whether the indications of numerous rule violations were plausible or whether the parameters of the system were set appropriately. Either way, they said that the inspection team would dedicate resources to looking at that system.
Similarly, when evaluating SRO programs for investigating potential violations of SRO rules or federal securities laws and disciplining broker-dealer members, OCIE officials stated that their objective is to determine whether (1) SRO policies and procedures are appropriately designed to uncover violations of SRO rules and federal securities laws and to administer the appropriate disciplinary measures and (2) the SRO is complying with these policies and procedures. OCIE staff first request copies of the relevant policies and procedures for investigating surveillance alerts and for disciplining members found to be in violation of SRO rules and federal securities laws. They also ask for lists of the resulting investigations and enforcement actions, including referrals on nonmembers to SEC. OCIE staff then analyze this information to assess the extent to which SRO policies and procedures direct the SRO staff to conduct thorough reviews and investigations and, when applicable, to take appropriate disciplinary action. For example, during a recently completed sweep inspection of SRO surveillance and investigative programs related to insider trading, OCIE evaluated related SRO policies and procedures for reviewing alerts and opening investigations to determine whether they directed staff to coordinate appropriately with other SROs. We discuss the results of this sweep inspection—including a plan that the options SROs submitted to SEC to create a more uniform and coordinated method for the regulation, surveillance, investigation, and detection of insider trading—in appendix II. As part of another inspection we reviewed, OCIE found that an SRO had not yet developed formal procedures for its analysts to review alerts that were generated by a recently implemented surveillance system. OCIE recommended that the SRO develop such procedures.
When reviewing SRO enforcement programs, OCIE also assesses whether the SRO is in compliance with its own policies and procedures.
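The parameter review described earlier in this section, which flags a system that generates no alerts (possibly too restrictive) or implausibly many (possibly too loose), can be sketched as a simple screen over alert volume. The thresholds below are illustrative assumptions, not OCIE criteria.

```python
def screen_alert_volume(alerts_per_quarter: int,
                        low: int = 1, high: int = 10_000) -> str:
    """Flag surveillance systems whose alert volume suggests mis-set
    parameters; thresholds are hypothetical examples."""
    if alerts_per_quarter < low:
        return "review: parameters may be too restrictive"
    if alerts_per_quarter >= high:
        return "review: parameters may be too loose"
    return "volume plausible; sample alerts for quality review"

print(screen_alert_volume(0))        # too restrictive (no alerts generated)
print(screen_alert_volume(10_000))   # too loose (implausibly many alerts)
```

Either outcome directs inspection resources to the system in question, mirroring the follow-up OCIE staff described.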
To assess that compliance, OCIE staff select and review case files pertaining to a sample of alerts, investigations, and disciplinary files from the lists that they have asked the SRO to generate. OCIE staff said that when reviewing these files, they pay particular attention to the strength of the evidence upon which the SRO analyst relied in determining whether to close an alert or an investigation or to refer the case to SRO enforcement, SEC, or other appropriate regulators. In this way, OCIE staff said they can evaluate whether the SRO is enforcing its rules and federal securities laws consistently among its members and, in the case of certain federal laws such as those prohibiting insider trading, between members and nonmembers. For example, in one inspection we reviewed, OCIE found that the SRO used its informal disciplinary measures inappropriately when disciplining its members and recommended that formal disciplinary actions be taken when informal actions had already occurred.
OCIE inspections may result in recommendations to SROs that are intended to address any deficiencies identified and to improve SRO effectiveness. OCIE officials said that for SRO enforcement programs, they tend to make recommendations flexible enough to allow SROs to implement them in a manner that best fits their unique business models and surveillance systems. As we have previously discussed, if OCIE finds serious deficiencies at an SRO, it can refer the case to Enforcement. Such referrals are relatively infrequent—between January 1995 and September 2007, SEC brought and settled 10 enforcement actions against SROs (see app. III). According to OCIE officials, recommendation follow-up is primarily the responsibility of the examination team, under the supervision of the assistant director assigned to the inspection. Inspection follow-up begins with evaluating written responses by SROs to the inspection report and obtaining documentation of SRO efforts to address the recommendations, and it can continue for several years, depending on the complexity of the recommendation. For example, OCIE officials said that some recommendations, such as those that involve the design and implementation of new information technology, may require continued dialogue with the SRO over several years before the recommendation is fully implemented. OCIE also may follow up on inspection recommendations during a subsequent inspection of the SRO. OCIE officials said that in the event the SRO does not take steps to address a recommendation that staff believe is critical, they can elevate the matter to OCIE management or the Commission, although they said that this happens infrequently. We discuss the tracking of inspection recommendations later in this report.
We identified several opportunities for OCIE and Market Regulation to enhance their oversight of SROs by developing formal guidance, leveraging the work of SRO internal audit functions, and enhancing information systems. First, although OCIE has developed a general process for inspecting SRO enforcement programs, it has not developed an examination manual or other formal guidance for examiners to use when conducting inspections, as it has for examinations of other market participants. Such guidance could help OCIE ensure that its inspection procedures and products are subject to uniform standards and quality controls.
Second, OCIE has recently expanded its use of SRO internal and external audit reports while on-site at the SRO; however, OCIE does not leverage this work in the planning process, which could result in duplication of effort and missed opportunities to better target inspection resources. Third, in accordance with SEC policy, Market Regulation regularly inspects SRO IT systems related to market operations for adequate security controls and reviews related SRO internal audit reports. However, this review does not target SRO enforcement-related databases, which contain investigative and disciplinary information that SROs maintain and upon which other regulators rely. Finally, OCIE currently does not formally track the implementation status of inspection recommendations, the number of which ranged as high as 29 in the inspections that we reviewed. The lack of formal tracking may reduce OCIE's ability to efficiently and effectively generate and evaluate trend information, such as patterns in the types of deficiencies found or the implementation status of recommendations across SROs or over time.
Our interviews with OCIE officials and reviews of selected inspection workpapers indicated that OCIE examiners typically follow a general process when conducting reviews of SRO enforcement programs. This process begins with examination planning, is followed by data gathering, and ends with reporting. However, OCIE has not developed an examination manual or other formal guidance for its examiners to use when conducting inspections of SRO enforcement programs. According to OCIE officials, because SRO rules and corresponding surveillance systems are unique and constantly evolving, it would be difficult to develop a detailed inspection manual that could be tailored to all SROs and also remain current. These officials said that an examination manual is not necessary to ensure consistency among SRO inspections because the SRO inspection group is a relatively small group within OCIE and all of its staff are centralized in headquarters. On the other hand, they said that because OCIE's inspection program for investment companies, investment advisers, and broker-dealers has hundreds of examiners across SEC headquarters and its regional offices who are responsible for examining thousands of firms, OCIE has developed detailed inspection manuals to ensure consistency across examinations of these firms. Similarly, OCIE officials said that they have developed guidelines for SRO examiners conducting oversight inspections of NASD's district offices because OCIE relies on examination staff in the SEC regional offices to assist them in conducting these inspections. In contrast to OCIE, federal banking regulators, such as the Federal Reserve and OCC, have developed written guidance for the examination of large banks—also highly complex and diverse institutions—that outlines the objectives of the program and describes the processes and functional approaches used to meet those objectives.
By not establishing written guidance for conducting inspections of SRO enforcement and other regulatory programs, OCIE may be limiting its ability to ensure that its inspection processes and products are subject to basic quality controls in such areas as examination planning, data collection, and report review. For example, in several of the inspections we reviewed, we did not find evidence of supervisory review, which is a key aspect of inspection quality control.
According to OCIE officials, the team leader is expected to review the work of team members. However, without written policies and procedures specifying how and when this review is to be conducted and documented, it is difficult to establish whether team leaders comply with this quality control. According to inspection standards developed by the IG community, each organization that conducts inspections should develop and implement written policies and procedures for internal controls over its inspection processes to provide reasonable assurance of conformance with organizational policies and procedures. As another example, OCIE officials said that when conducting inspections of SRO enforcement programs, team leaders often require their teams to use data collection instruments, such as checklists, when reviewing SRO files to ensure a consistent and complete review of all of the files selected, particularly when there are inexperienced staff on the team. While these instruments are potentially an effective means of collecting data, OCIE officials said that the decision to use them is up to the individual team leader, and not all teams employ them. According to IG inspection standards, evidence developed under an effective system of internal controls generally is more reliable than evidence obtained where such controls are lacking. By not establishing standards addressing quality controls in data collection, OCIE may be limiting its ability to ensure the consistency and reliability of data collected across its SRO inspection teams. Furthermore, without written guidelines, new examiners lack a reference tool that could facilitate their orientation in the inspection program.
While OCIE employs a risk-based approach to conducting SRO inspections, OCIE's risk-assessment and inspection planning processes do not incorporate information gathered through SRO internal audits. According to OCIE officials, OCIE tailors inspections of SRO programs (particularly at the two largest SROs) to focus on those areas judged to pose the greatest risk to the SRO or the general market. In determining which areas present the highest risk, OCIE officials said they consider such factors as the amount of time that has passed since a particular area was last inspected, the size of the area, the results of past inspections, and consultations with other SEC offices and divisions. For example, because the enforcement programs at NASD and NYSE encompass hundreds of surveillance systems, OCIE officials said examiners cannot review all systems as part of one inspection. As a result, OCIE officials said examiners first conduct a preliminary analysis of requested documents and focus inspection resources on those systems or areas that are judged to pose the greatest risk. According to OCIE officials, because the regional SROs have smaller programs, OCIE staff typically are able to conduct a more comprehensive review of the entire enforcement program during a single inspection.
We previously recommended that OCIE develop and implement a policy requiring examiners to routinely use SRO internal review reports in planning and conducting SRO inspections. Prior to October 2006, OCIE's practice was to request SRO internal audit reports only when OCIE believed specific problems existed at an SRO. In October 2006, OCIE issued a memorandum broadening the circumstances in which OCIE would request and use these reports.
The memorandum directs examiners to request that SROs make all internal audit reports related to the program area under inspection available for the staff's on-site review, including workpapers or any reviews conducted by any regulatory quality review unit of the SRO or an outside auditor. According to the memorandum, on-site review of these reports may be useful in determining whether the SRO has identified particular areas of concern in a program area and adequately addressed those problems, assessing whether an SRO addressed prior inspection findings and recommendations, and helping staff determine whether they should limit or expand their review of particular issues during an inspection. OCIE staff said that in fiscal year 2008, they also plan to begin reviewing the internal audit functions of SROs, with the goal of determining whether SRO internal audit functions are effective. For example, OCIE officials said that they plan to evaluate whether the internal audit functions are independent of SRO management, conduct thorough reviews of all relevant areas (particularly regulatory programs), and have sufficient staffing levels. OCIE officials said that as part of their reviews, they also plan to assess the quality and reliability of SRO internal audit reports and assess whether SROs have implemented the recommendations resulting from these reports. OCIE officials told us that they are in the planning phase of this review and, as such, have not yet developed written guidance for their examiners in conducting these reviews.
While OCIE's October 2006 memorandum broadened the use of SRO internal audit reports to encompass on-site reviews during inspections, it did not address the use of internal audit reports for planning purposes, as we had recommended. In contrast, the risk assessments of large banks that federal bank examiners conduct during the planning phase are based, in part, on internal audit reports, and examiners may adjust their examination plans to avoid duplication of effort and minimize burden to the bank. For example, according to examination guidance that the Federal Reserve issued, to avoid duplication of effort and burden to the institution, examiners may consider using these workpapers and conclusions to the extent that examiners test the work performed by the internal or external auditors and determine it is reliable. Similarly, examination guidance issued by OCC states that examiners' assessments of a bank's audit and control functions help leverage OCC resources, establish the scope of current and future supervisory activities, and assess the quality of risk management.
Because OCIE does not consider the work and work products of SRO internal audit functions in its inspection planning process, its examiners may be duplicating SRO efforts, causing regulatory burden, or missing opportunities to direct examination resources to other higher-risk or less-examined program areas. For example, our previous work, which focused on the listing programs of SROs, showed that SRO internal audit functions had examined or were in the process of examining aspects of their listing programs that OCIE had covered in its most recent inspections, and that the resulting reports could be useful to OCIE in planning as well as conducting inspections. As OCIE begins to assess the quality of SRO internal audit functions and work products, the opportunity exists for OCIE to further leverage these products in targeting its own inspection efforts.
OCIE officials said that as part of their upcoming reviews of SRO internal audit functions, they will assess whether SRO internal audit products may be helpful in assisting them in targeting inspections of particular SRO functions. OCIE could also further leverage the work performed by SRO internal and external auditors to monitor a particular regulatory program between inspections. In our review of OCIE inspections of NASD and NYSE enforcement programs, as many as 8 years passed between inspections of a particular surveillance system and the related investigations and disciplinary actions. Moreover, as OCIE officials noted, the recent decline in SRO inspection staff has lengthened the time it takes to complete a routine SRO inspection and limited their ability to conduct additional special inspections. Unless OCIE regularly informs itself of the results of SRO efforts to review these systems, it may not know of emerging or resurgent issues until the next inspection.
As we have previously discussed, SROs conduct surveillance of trading activity on their markets; carry out investigations; and bring disciplinary proceedings involving their own members or, when appropriate, make referrals to SEC when the suspicious activity involves nonmembers. However, SEC's Market Regulation does not obtain information on the security of SRO enforcement-related databases—IT applications for storing data about SRO investigations and disciplinary actions taken against SRO members—when conducting reviews of IT security at SROs. Under SEC's Automation Review Policy (ARP), Market Regulation conducts on-site reviews of SRO trading systems, information dissemination systems, clearance and settlement systems, and electronic communications networks and makes recommendations for improvements when necessary. Market Regulation also reviews SRO general and application controls over the collection of fees under section 31 of the Securities Exchange Act of 1934. These reviews cover the IT systems designated for remitting fees to SEC under the section 31 program and are intended to ensure that the data these systems produce are authorized and completely and accurately processed and reported. Market Regulation officials said that they do not target enforcement-related databases for specific review, since the ARP policy statement is specifically intended to oversee systems essential to market operations. These officials said that Market Regulation could include a review of the security of enforcement-related databases both in its general assessments of SRO IT infrastructure security under the ARP and in section 31 reviews. They explained that both of these reviews include testing of components and evaluation of general access controls and of changes made within SRO organizationwide network structures, as part of routine reviews of specific IT programs and systems, such as SRO computer operations, security assessments, internal and external audit IT coverage, and notification procedures for systems outages and changes. However, these general assessments by Market Regulation would not necessarily provide SEC with information on potential risks specific to the security of the data contained in enforcement-related databases.
NASD and NYSE officials told us that they conduct their own regular internal inspections of the security of IT systems, which include reviews of enforcement-related databases.
In addition, both SROs contract with external companies that regularly conduct reviews of the security controls of their technology systems. We reviewed several of these internal and external audits, conducted from fiscal years 2002 through 2006, which included reviews of SRO enforcement-related systems and databases. These reviews generally concluded that NASD and NYSE have adequate controls in place to protect sensitive enforcement-related data. The internal and external audit reports of NYSE and NASD that we reviewed showed that these reports could be a valuable source of information for Market Regulation on specific risks to enforcement-related databases. Market Regulation officials said that in conducting ARP-related inspections, they review SRO internal and external audit reports related to the infrastructure of SRO IT systems; however, they do not specifically look for information related to the assessment of the security of enforcement-related databases. In addition, SEC staff said that although they generally receive all the internal and external audit reports done of SRO systems relating to trading and clearing functions, they may not always receive such reports relating to other systems, including enforcement-related databases, from all SROs.
Since SROs, SEC, and other regulators rely on the accuracy and integrity of the data in SRO enforcement-related databases in fulfilling their own regulatory responsibilities, protecting this information from unauthorized access is critical to regulatory efforts. For example, as we discuss later in this report, SEC uses SRO surveillance data in carrying out its own enforcement efforts related to securities trading. Furthermore, SROs are responsible for maintaining complaint and disciplinary data on their members—information that is essential for identifying recidivists. Unless Market Regulation periodically obtains information confirming that enforcement-related databases remain within SRO assessment cycles and that SRO-sponsored audits are comprehensive and complete, it cannot assess whether SROs have taken appropriate steps to secure sensitive enforcement-related information or gauge the level of risk that a data breach could pose.
Although OCIE officials said that they have worked with SROs to address the intent of recent inspection recommendations, we were not able to readily verify the status of the recommendations in the inspections we reviewed because OCIE does not formally track inspection recommendations or the status of their implementation. OCIE officials said that when OCIE management is interested in obtaining an update on the recommendations resulting from an inspection, they consult directly with the examination team assigned to the SRO inspection. OCIE officials also said that they do not consider the lack of a formal tracking system to have affected their ability to manage follow-up of inspection recommendations because there are relatively few SROs, and OCIE staff is in frequent contact with them. OCIE's informal methods for tracking inspection recommendations contrast with the expectations set by federal internal control standards for ensuring that management has relevant, reliable, and timely information regarding key agency activities.
These standards state that key information on agency operations should be recorded and communicated to management and others within the entity in a time frame that enables management to carry out its internal control and other responsibilities. Without a formal tracking system, the ability of OCIE management to effectively and efficiently monitor the implementation of SRO inspection recommendations and conduct programwide analyses may be limited.
Of the 11 inspections of NASD and NYSE enforcement programs we reviewed, the number of recommendations OCIE made ranged from 4 to 29, with an average of 11. The recommendations also ranged in complexity, from asking the SRO to update its policies and procedures to recommending that an SRO implement an entire surveillance program. For example, we observed recommendations calling for, among other things, improving case file documentation, changing the parameters of a surveillance system, implementing an automated tracking system, and improving SRO member education. OCIE officials said that some inspections resulted in as many as 30 or 40 recommendations. Without a formal tracking system, OCIE management must rely on staff's availability and ability to recall recommendation-related information, which may be reliable when discussing an individual inspection but may limit OCIE management's ability to efficiently generate and evaluate trend information, such as patterns in the types of deficiencies found or the implementation status of recommendations across SROs or over time. Implementing a formal tracking system would not only allow management to more robustly assess the recommendations to SROs and their progress in implementing them, but would also allow it to develop performance measures that could assist management in evaluating the effectiveness of its inspection program.
According to OCIE and SEC OIT officials, OCIE recently began working with OIT to develop a new examination tracking system that will include the capability to track SRO responses and the implementation status of OCIE recommendations. OCIE officials said that planned requirements for the system include a field to enter the recommendation, a field for OCIE inspectors to broadly categorize the status of its implementation, and a text box for inspectors to elaborate on the recommendation and its implementation status. OCIE officials also said that they expect the system will be able to trace the history of a recommendation. OIT officials told us that they are developing separate software that will allow OCIE to generate management reports using data from the tracking system as well as other databases; however, the requirements for any management reports OCIE would receive have yet to be determined. According to an OCIE official, the recommendation tracking system and reporting capabilities may be an effective way to provide OCIE management with a high-level characterization of implementation status. OCIE officials said that in response to our concerns, they plan to deploy an interim, stand-alone recommendation tracking system that will provide a management report, in the form of a spreadsheet, containing all open recommendations to SROs resulting from SRO inspections and the current status of SRO efforts to implement them. These officials said that they expect to use this spreadsheet until the previously described OIT projects are implemented in 2008.
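A minimal sketch of the planned tracking record, with a field for the recommendation, a broad status category, free-text elaboration, and a traceable history, might look like the following. The field names and status categories are assumptions based on the requirements OCIE officials described, not the actual system design.

```python
from dataclasses import dataclass, field
from datetime import date

STATUSES = {"open", "in progress", "implemented", "closed"}  # assumed categories

@dataclass
class Recommendation:
    sro: str
    text: str                       # the recommendation itself
    status: str = "open"            # broad implementation-status category
    notes: str = ""                 # free-text elaboration
    history: list = field(default_factory=list)  # (date, status, note) entries

    def update(self, status: str, note: str, when: date) -> None:
        """Record a status change so the recommendation's history can be traced."""
        assert status in STATUSES, f"unknown status: {status}"
        self.history.append((when, status, note))
        self.status, self.notes = status, note

rec = Recommendation("Example SRO", "Develop procedures for reviewing alerts")
rec.update("in progress", "Draft procedures received", date(2008, 1, 15))
# A management report is then a simple filter over all tracked records:
open_items = [r for r in [rec] if r.status != "closed"]
```

Even a structure this simple supports the programwide trend analysis the report describes, since open items, status patterns, and histories can be queried across all SROs rather than recalled by individual staff.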
Enforcement receives advisories and referrals through an electronic system in OMS; these submissions undergo multiple stages of review and may lead to the opening of an investigation. After opening investigations, Enforcement further reviews the evidence gathered to decide whether to pursue civil or administrative actions, or both. From fiscal years 2003 to 2006, OMS received an increasing number of advisories and referrals from SROs, such as NYSE and NASD, most of which involved insider trading. However, the SRO system's limited search capabilities and the lack of a link between the SRO system and the case activity tracking system have limited Enforcement staff's ability to electronically search advisory and referral information, monitor unusual market activity, make decisions about opening matters under inquiry (MUI) and investigations, and assess case activities.
Upon receipt of SRO information in its Web-based SRO Referral Receipt System (SRO system), OMS makes initial decisions on referrals and forwards selected referral materials to investigative attorneys. After initial reviews by OMS staff, Enforcement may decide to open investigations if it determines that evidence garnered during its inquiry period warrants doing so and staff and financial resources are available. If the evidence from an investigation warrants, staff may pursue administrative or civil actions and seek remedies, such as cease-and-desist orders and civil monetary penalties.
The referral process begins when OMS staff receive SRO advisories and referrals on unusual market activity through a secure Web-based electronic system called the SRO system. SEC officials noted that SRO referrals help SEC identify and respond to unusual market activity by those who are not members of SROs, investigate those suspected of potentially illegal behavior, and take action when the circumstances of cases and evidence are appropriate. OMS branch chiefs, who are responsible for reviewing advisories and referrals, access the SRO system on a weekly basis to review all SRO-submitted advisories and referrals. SRO advisories and referrals usually consist of a short form with basic background information on the suspected unusual market activity by SRO nonmembers, including the name of the security issuer, the date of the unusual activity, and a description of the market activity identified by the SRO. The materials also contain a text attachment, which includes more detailed narrative information, such as a chronology of unusual activity and specific information about issuers and individuals potentially associated with that activity. SEC does not receive information, electronically or otherwise, on unusual market activity by SRO members or on related SRO investigations of that activity.
After reading advisories and referrals, OMS branch chiefs use SEC's National Relationship Search Index, an electronic system that connects to and works with a range of other SEC systems, such as the Case Activity Tracking System (CATS), to determine whether existing SEC investigations involve the issuer noted in the SRO advisory or referral. If an investigation already exists that involves the issuer noted in the advisory or referral, the branch chiefs will forward the advisory or referral to the Enforcement attorney conducting that investigation for review and incorporation into his or her case.
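The issuer lookup just described, which checks whether an incoming referral names an issuer already under investigation, reduces to a simple index query. The structures below are illustrative assumptions; they are not SEC's National Relationship Search Index.

```python
# Hypothetical referral routing based on an issuer -> open-case index.
open_cases = {"EXAMPLE CORP": "CATS-0123"}   # assumed existing investigations

def route_referral(issuer: str) -> str:
    """Forward to the assigned attorney if the issuer is already under
    investigation; otherwise queue the referral for initial review."""
    case = open_cases.get(issuer.upper())
    if case:
        return f"forward to attorney on {case}"
    return "queue for branch-chief review (possible MUI)"

print(route_referral("Example Corp"))   # forward to attorney on CATS-0123
print(route_referral("Other Inc"))      # queue for branch-chief review
```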
If Enforcement has not already opened an investigation on a particular issuer, OMS staff store advisories in the SRO system but do not investigate them, because advisories do not contain information as detailed as that found in referrals. However, SROs may continue their market surveillance efforts on an advisory, further develop information on the unusual market activity, and submit all of the information later as a referral for potential action by SEC.
For referrals, branch chiefs apply criteria—such as (1) the nature of the unusual market activity, (2) the persons involved and their employment positions, (3) the dollar value of the unusual activity in question, (4) the potential harm to the financial markets and individual investors, and (5) any other information branch chiefs may have obtained through conversations with SRO staff—to make initial decisions about the merit of forwarding the referrals to Enforcement management and attorneys for possible SEC investigation. Enforcement associate directors review and either approve or disapprove branch chiefs' recommendations about the referrals. Referrals not recommended by branch chiefs for approval are stored in the SRO system and may be accessed as needed. If approved, OMS branch chiefs open an MUI, a 60-day initial inquiry period, and electronically forward all referral information to SEC headquarters or the appropriate regional office, where investigative attorneys and management have up to 60 days to review all available case information and weigh staff and financial resources to decide whether to proceed with a full investigation. Once the MUI has been opened, Enforcement staff assign it a CATS case number and use CATS to track all components of the case until it is closed. Figure 2 outlines SEC's process and average time frames for receiving, processing, and investigating unusual market activity identified by SROs.
Enforcement staff at headquarters or the regional offices use criteria similar to those used by OMS staff during their initial review, but they also consider the level of financial resources available for investigations and the availability of Enforcement staff in determining whether to close the MUI or open an investigation. If Enforcement staff do not open an investigation, the MUI is closed in CATS and staff document the reason(s) for closure, which may include insufficient evidence, resource limitations, or a newly opened case being merged with an existing case. If the Enforcement Division develops evidence it deems sufficient for moving forward, SEC may institute civil or administrative enforcement actions, or both. When determining how to proceed, Enforcement staff consider such factors as the seriousness of the wrongdoing, the technical nature of the matter under investigation, and the type of sanction or relief sought. When the misconduct warrants it, SEC will bring both types of proceedings. With civil actions, SEC files a complaint with a federal district court that describes the misconduct, identifies the laws and rules violated, and identifies the sanction or remedial action that is sought. For example, SEC often seeks civil monetary penalties and the return of illegal profits, known as disgorgement. The courts also may bar or suspend an individual from serving as a corporate officer or director (see fig. 2). SEC can seek a variety of sanctions through administrative enforcement proceedings as well.
In an administrative proceeding, an ALJ, who is independent of SEC, presides over a hearing and considers the evidence presented by the Enforcement staff as well as any evidence submitted by the subject of the proceeding. Following the hearing, the ALJ issues an initial decision, which contains a recommended sanction. Administrative sanctions or outcomes include cease-and-desist orders, suspension or revocation of broker-dealer and investment adviser registration, censures, bars from association with certain persons or entities in the securities industry, payment of civil monetary penalties, and return of illegal profits. Both Enforcement staff and the defendant may appeal all or any portion of the initial decision to SEC Commissioners, who may affirm the decision of the ALJ, reverse the decision, or remand it for additional hearings. A defendant may also agree to undertake other remedial actions in a settlement agreement with SEC. Once civil or administrative proceedings have concluded and all outcomes are finalized, SEC closes the investigation and terminates the case in CATS. Figure 2 also provides data on the durations involved with the referral and investigation processes and shows that stages of the process—from SRO identification of unusual market activity to the closure of investigations—vary in their duration. We analyzed data SEC provided from its referral and case tracking systems from fiscal years 2003 to 2006. For those cases for which the data had open and close dates for the investigation stage of the process, it took an average of 726 days, or almost 2 years, from the point that SROs identify unusual market activity and send SEC referrals to the time that SEC completely investigates and concludes cases. Of this total time, it took, on average, 192 days for the first three steps in the process, which include SROs identifying unusual market activity and referring it to SEC and SEC opening an MUI to conduct its initial inquiry on referrals. It took, on average, another 534 days for SEC to investigate that unusual market activity; institute administrative or civil enforcement proceedings; administer outcomes, such as issuing and collecting fines; and completely close investigations. Data we reviewed from SEC's SRO system and CATS showed that the number of advisories, referrals, and investigations significantly increased from fiscal years 2003 through 2006. More specifically, advisories increased from 5 in fiscal year 2003 to 190 in fiscal year 2006 and totaled 390 for the period. Of the 4-year total, 354, or 91 percent, were insider trading advisories, and an additional 3 percent involved market manipulation issues. Data from SEC's SRO system on 1,640 referrals showed that the number of referrals SEC received from SROs grew from 438 in fiscal year 2003 to 514 in fiscal year 2006, an increase of 17 percent. Of the total number of referrals, almost 80 percent involved suspected insider trading activities. In addition, NYSE and NASD submitted 1,095, or almost 70 percent, of the total number of referrals. SEC and SRO officials attributed the increase to more merger and acquisition activity in the marketplace. Data SEC provided to us from its case tracking system showed a corresponding increase in the number of investigations SEC opened from SRO referrals over the same period. The number of investigations rose from 82 in fiscal year 2003 to 208 in fiscal year 2006, an increase of 154 percent.
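The stage durations reported above were derived by pairing referral dates with investigation open and close dates. The sketch below shows, using invented records and field names, how such averages can be computed while honoring the two caveats applied in this analysis: only initial referrals are counted, and only stages with both open and close dates enter the average (consistent with the data-analysis approach described in the scope and methodology discussion later in this report).

```python
from datetime import date

# Hypothetical extracts; the real analysis merged SRO system and CATS data.
referrals = [
    {"case": "C-101", "referred": date(2004, 1, 10), "initial": True},
    {"case": "C-101", "referred": date(2004, 3, 2),  "initial": False},  # update
    {"case": "C-102", "referred": date(2004, 2, 5),  "initial": True},
]
investigations = {
    "C-101": {"opened": date(2004, 7, 1),  "closed": date(2005, 9, 30)},
    "C-102": {"opened": date(2004, 8, 15), "closed": None},  # still active
}

spans = []
for ref in referrals:
    if not ref["initial"]:
        continue  # skip updated referrals to avoid double counting
    inv = investigations.get(ref["case"])
    if inv and inv["closed"]:  # only cases with both open and close dates
        spans.append((inv["closed"] - ref["referred"]).days)

print("average days from referral to closure:", sum(spans) / len(spans))
```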
Case actions, which follow SEC's determination of whether to file a case as an administrative proceeding or a civil action, also increased. The number of case actions rose from 2 in fiscal year 2003 to 29 in fiscal year 2006. SEC actions result in case outcomes such as permanent injunctions, preliminary injunctions, restraining orders, administrative proceeding orders, and emergency actions. These case outcomes rose from 3 in fiscal year 2003 to 82 in fiscal year 2006. Case outcomes also may include "relief," such as disgorgement, payment of prejudgment interest and other monetary penalties, asset freezes, and officer and director bans. For example, in 2003, NYSE referred unusual market activity to SEC after suspecting potential insider trading activity. After opening an MUI and investigating the activity, SEC brought both an administrative proceeding and a civil action, which together produced a range of outcomes against six individuals. The administrative proceeding specifically resulted in an order barring the individuals named in the case from associating with one another in trading. The civil action resulted in permanent injunctions to stop the suspected use of material, nonpublic information and in financial penalties that included disgorgement. Figure 3 illustrates the upward trend in the numbers of advisories, referrals, MUIs, investigations, case actions, and case outcomes for the period we reviewed. The figure also shows that more than three-quarters of the referrals were made for insider trading. Market manipulation and "other" activity, including activity associated with issuer reporting and financial disclosure and initial securities offerings, constituted the other major categories of referrals. Appendix IV provides additional data on these trends by fiscal year. SEC's SRO system features limited capabilities for electronically searching information on advisories and referrals, which may limit Enforcement staff's ability to efficiently monitor unusual market activity, make subsequent decisions about opening MUIs and investigations, and manage the SRO advisory and referral process. As we have previously discussed, federal internal control standards state that management needs relevant, reliable, and timely communications relating to internal and external events. In addition, these standards state that the information should be distributed in a form and time frame that permits management and others who need it to perform their duties efficiently. SEC developed the SRO system to receive and store advisory and referral information from SROs and enable SEC staff to make initial decisions about which SRO-identified market activities to investigate. The system primarily receives information on unusual market activity based on SRO surveillance of trades among stock issuers. This information includes the name of the security issuer; the date of the unusual activity; and a description of the type of activity, among other data. The SRO system also stores narrative attachments, which the SROs provide to SEC, that contain additional information about individuals or entities, such as investment advisers or hedge funds, associated with unusual market activity. While the system allows OMS staff to search by issuer, the narrative information cannot be easily searched in the system; instead, the attachments must be individually opened and read.
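To illustrate the kind of capability the SRO system lacks, the following sketch builds a simple inverted index over narrative attachments so that any word can be searched across referrals. The referral IDs and attachment text are invented; this is not a description of any SEC system.

```python
import re
from collections import defaultdict

# Hypothetical attachment text keyed by referral ID; in the actual SRO
# system these narratives must be opened and read one at a time.
attachments = {
    "R-0001": "Chronology of trading in XYZ Corp ahead of the merger news.",
    "R-0002": "A hedge fund account bought XYZ Corp calls on June 1.",
}

def build_index(docs):
    """Map each lowercased word to the set of referral IDs containing it."""
    index = defaultdict(set)
    for ref_id, text in docs.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(ref_id)
    return index

index = build_index(attachments)
# Staff could then locate every referral mentioning an issuer or name:
print(sorted(index["xyz"]))  # ['R-0001', 'R-0002']
```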
An Enforcement branch chief noted that narrative information can help establish patterns of behavior that are critical when SEC tries to investigate potentially fraudulent activity, such as market manipulation and insider trading. Furthermore, only OMS branch chiefs have access to the SRO system, so attorneys who need that information have to consult with OMS branch chiefs or contact SRO staff directly, rather than access it electronically. In addition, because the referral receipt and case tracking systems are not linked, management is unable to readily assess the efficiency and effectiveness of the referral and investigation processes. For example, SEC is unable to extract information from a single source on how long it takes both SROs and SEC to work through different stages of cases over time, from referral receipt (SRO system) to opening MUIs and conducting investigations (case tracking system). SEC headquarters and regional office officials noted that receiving information in a timely manner is critical to the investigative steps of assembling the facts of the case and collecting evidence on those potentially involved with unusual market activity. To obtain this information and customized reports and statistics on Enforcement operations, division officials said they must submit requests to SEC's Office of Information Technology (OIT) and then wait for OIT staff to respond. As noted in our 2007 report on Enforcement Division operations, these requests may take several days to 1 week to complete. Recognizing these system limitations, SEC officials have undertaken efforts to improve CATS by developing a new case information management system called the Hub. However, these planned improvements do not address limitations of the SRO system and do not include expanded linkages between the SRO system and CATS. SEC's oversight of SRO enforcement programs has produced positive outcomes. For example, in response to an OCIE recommendation, SROs in the options market have developed a new surveillance authority, which is intended to improve coordination among SROs in monitoring the markets for insider trading and investigating any resulting alerts. The equities markets are expected to follow soon with a similar plan. SEC, through its Enforcement Division, has worked with SROs to detect and respond to potential securities laws violations. Between fiscal years 2003 and 2006, SEC responded to an increasing number of SRO referrals—a large percentage of which were related to insider trading—with an increasing number of investigations and enforcement actions. SEC has started to incorporate the results of SRO internal audits into its on-site inspections, which helps to leverage resources. In addition, the agency plans to expand its oversight of SRO functions to include reviews of the internal audit function—with an emphasis on independence, staffing levels, and scope of coverage. Such reviews could help ensure that SROs are effectively assessing risks, instituting appropriate controls, and carrying out their responsibilities. However, several opportunities exist to enhance the efforts used by SEC to oversee SROs and, particularly, their enforcement programs. Specifically, OCIE examiners are conducting inspections of SRO enforcement programs without formal guidance.
Although our review of a sample of inspections found that examiners have developed a methodology for reviewing SRO enforcement programs, the lack of written guidance—which establishes minimum standards and quality controls—could limit OCIE's ability to provide reasonable assurances that its inspection processes and products are subject to basic quality controls in such areas as examination planning, data collection, and report review. Moreover, the lack of formal guidance could result in individual inspection teams creating data collection and other examination tools that otherwise would be centralized and more efficiently shared across inspection teams. Furthermore, OCIE's recent internal guidance on the use of SRO internal audit-related reports does not address the use of these reports for risk-assessment and inspection planning purposes, as we have previously recommended. We continue to believe that the use of these reports when conducting risk assessments and determining the scope of an upcoming inspection could allow OCIE to better leverage its inspection resources, especially if OCIE determines that the reports produced by SRO internal audit functions are reliable. As OCIE officials noted, they plan to begin assessing SRO internal audit functions in 2008, including the quality and reliability of their work products, although they have not yet developed guidance for inspection staff on conducting these reviews. By not considering the work and work products of the SRO internal audit function in its inspection planning process, OCIE may be duplicating SRO efforts and not maximizing the use of its limited resources. OCIE also may be missing an opportunity to better monitor the effectiveness of the SRO regulatory programs (including enforcement programs) between inspections. SEC also has an opportunity to leverage the work of SRO internal audit functions in its assessment of information security at SROs. Because ARP Policy Statements are specifically intended to oversee systems essential to market operations, Market Regulation officials do not target enforcement-related databases for specific review. Although SROs have assessed the security controls of these databases, Market Regulation officials have little knowledge of the content or comprehensiveness of these audits. As a result, Market Regulation cannot determine whether SROs have taken the appropriate steps to ensure the security of this sensitive information. Market Regulation could facilitate this evaluation by making certain that enforcement-related databases continue to be periodically reviewed by SROs, and that these reviews are comprehensive and complete. Both OCIE and Enforcement could benefit from improvements to the information technology systems used in overseeing SROs. OCIE currently lacks a system that tracks the status of inspection recommendations. OCIE officials told us that a new examination tracking database, which will allow OCIE to track the implementation of inspection recommendations, is in development, along with software that will allow OCIE to generate management reports from this database. By ensuring that these system capabilities are delivered, OCIE management could improve its ability to monitor the implementation of OCIE recommendations and begin developing measures for assessing the effectiveness of its program.
Finally, while SEC has responded to a significant increase in SRO referrals between fiscal years 2003 and 2006, Enforcement's systems for receiving referrals and tracking the resulting investigations have limited capabilities for searching and analyzing information related to these referrals. Enforcement is currently working to address some limitations in its case tracking system; however, this effort does not include making improvements to the separate system used to receive and manage SRO referrals. By making system improvements that would allow electronic access to all of the information contained in advisories and referrals submitted by SROs, generate management reports, and provide links to the case tracking system, Enforcement could enhance its ability to efficiently and effectively manage SRO advisories and referrals and conduct analyses that could contribute to improved SEC planning, operations, and oversight. To enhance SEC oversight of SROs, we recommend that the SEC Chairman take the following three actions: (1) establish a written framework for conducting inspections of SRO enforcement programs to help ensure a reliable and consistent source of information on SRO inspection processes, minimum standards, and quality controls and, as part of this framework, broaden current guidance to SRO inspection staff on the use of SRO internal audit reports to direct examiners to consider the extent to which they will rely on reports and reviews of internal and external audit and other risk-management systems when planning SRO inspections; (2) ensure that Market Regulation makes certain that SROs include in their periodic risk assessments of their IT systems a review of the security of their enforcement-related databases, and that Market Regulation reviews the comprehensiveness and completeness of the related SRO-sponsored audits of those databases; and (3) as part of the agency's ongoing efforts to improve information technology, ensure that any software developed for tracking SRO inspections includes the ability to track and report SRO responses to and implementation status of OCIE inspection recommendations, and consider system improvements that would allow Enforcement staff to electronically access and search all information in advisories and referrals submitted by SROs and generate reports that would facilitate monitoring and analysis of trend information and case activities. We requested comments on a draft of this report from SEC. SEC provided written comments on the draft, which we have reprinted in appendix V. SEC also provided technical comments on a draft of the report, which were incorporated as appropriate. In its written comments, SEC agreed with our recommendations. SEC noted that OCIE will provide SRO inspectors with written guidance on its risk-scoping techniques and a compiled summary of inspection practices. In addition, OCIE plans to assess the quality and reliability of SRO internal audit programs and determine whether, and the degree to which, inspections can be risk-focused on the basis of SRO internal audit work. SEC also noted that it is developing a database to track the status of SRO inspection recommendations and provide management reports and that this enhancement should create additional efficiencies for inspection planning purposes.
SEC's Market Regulation will implement our recommendation to ensure that enforcement-related databases continue to be periodically reviewed by SRO internal audit programs, and that these reviews are comprehensive and complete. Furthermore, Enforcement plans to consider recommended system improvements to more effectively manage the advisory and referral processes. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to interested congressional committees and the Chairman of the Senate Committee on Finance. We will also send a copy to the Chairman of the Securities and Exchange Commission. We will also make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-8678 or hillmanr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. To discuss the overall structure of the Securities and Exchange Commission's (SEC) inspection program—more specifically, its approach to inspections of self-regulatory organizations' (SRO) surveillance, investigative, and enforcement programs (enforcement programs)—we reviewed and analyzed documentation of all 11 inspections of enforcement programs related to the former NASD and the New York Stock Exchange (NYSE) that SEC's Office of Compliance Inspections and Examinations (OCIE) completed from March 2002 through January 2007. We also reviewed and analyzed an OCIE memorandum to the Commission describing the SRO inspection process, staffing data provided by OCIE, and our prior work. Furthermore, we observed a demonstration of various information technology systems that NASD used to monitor the markets and track investigations and disciplinary actions. Finally, we reviewed and summarized the enforcement actions brought by SEC against SROs from 1995 to 2007. We also conducted interviews with staff from OCIE, NASD, and NYSE. To evaluate certain aspects of SEC's inspection program, including guidance and planning, the use of SRO internal audit products, and the tracking of inspection recommendations, we reviewed OCIE inspection guidance related to the review of NASD district offices and SRO internal audit reports, guidance for bank examiners from the Board of Governors of the Federal Reserve System and the Office of the Comptroller of the Currency, inspection guidelines developed by the inspectors general, and our prior work. In addition, we reviewed SEC guidance for conducting reviews of SRO information technology (IT) related to market trading operations and regulatory fee remittance, as well as NASD and NYSE internal and external audits of IT security. Furthermore, we reviewed internal control standards for the federal government and conducted interviews with officials from OCIE and SEC's Division of Enforcement (Enforcement) on their respective procedures for ensuring that SROs implement inspection recommendations and remedial actions required as part of enforcement actions. We also conducted interviews with staff from OCIE, SEC's Division of Market Regulation and Office of Information Technology, NASD, and NYSE.
To describe the SRO referral process and recent trends in referral numbers and related SEC investigations, and to evaluate SEC's information system for advisories and referrals, we observed a demonstration from Enforcement staff on the capabilities of their IT systems, analyzed data from SEC's SRO Referral Receipt System (SRO system) and Case Activity Tracking System (CATS), and interviewed Enforcement, NASD, and NYSE staff to determine how SEC manages the processes for receiving SRO referrals and conducting subsequent investigations. In particular, to understand trends in SRO advisories, referrals, and subsequent SEC investigations, we requested and analyzed data from SEC's referral and case tracking systems from fiscal years 2003 through 2006. We analyzed the data to provide descriptive information on the number of SEC's advisories, referrals, matters under inquiry (MUI), investigations, actions, and case outcomes during the period. We also analyzed these data by manually merging records from the SRO system and CATS to obtain descriptive data on the amount of time it takes SROs to identify unusual market activity and convey that information to SEC, as well as how long it takes SEC to respond by opening MUIs and investigations and achieving case outcomes. We inquired about checks SEC performs on the data and deemed the data reliable for the purposes of addressing our objectives. When calculating the average duration of stages to process SRO referrals, we distinguished between case stages that had both open and close dates and those that were open or active as of the date we received data from SEC, and we reported duration information accordingly. In addition, to calculate case stage durations, we consulted with SEC and SRO staff to distinguish between initial and updated referrals and performed duration calculations using initial referrals only, to avoid double counting that could skew the average duration results. We performed our work in Washington, D.C.; New York, New York; and Rockville, Maryland, between September 2006 and September 2007 in accordance with generally accepted government auditing standards. SRO surveillance, investigative, and disciplinary programs are designed to enforce SRO rules and federal securities laws related to insider trading—the buying or selling of a security by someone who has access to material, nonpublic information about the security—and are subject to SEC oversight through periodic inspections by OCIE. In January 2007, OCIE completed a sweep inspection (a probe of specific activities across all or a sample of SROs) of SRO enforcement programs related to insider trading. As a result of OCIE's inspection, the options SROs submitted a plan to SEC to create a more uniform and coordinated method for surveillance and investigation of insider trading in the options markets, and the equities SROs indicated their intent to submit a similar plan. From fiscal years 2003 through 2006, SEC significantly increased the number of investigations related to insider trading. SROs employ enforcement programs to enforce SRO rules and federal securities laws related to insider trading. Insider trading is illegal because trading based on material, nonpublic information is unfair to investors who do not have access to that information. When persons buy or sell securities on the basis of information not generally available to the public, investor confidence in market fairness can be eroded.
Information that could be exploited for personal gain by insiders includes such things as advance knowledge of mergers or acquisitions, development of a new drug or product, or earnings announcements. While company insiders (e.g., directors and senior executives) may be the most likely individuals to possess material, nonpublic information, others outside of the company also may gain access to the information and use it for their personal gain. For example, employees at a copy store who discovered material, nonpublic information while making presentation booklets for a firm could commit insider trading if they traded on that information before it was made public. To detect insider trading, SROs have established electronic surveillance systems that monitor their markets for aberrational movements in a stock's price or volume of shares traded, among other things, and generate alerts if a stock's price or volume of shares traded moves outside of set parameters. These systems link trade activity data to news and research about corporate transactions (such as mergers, acquisitions, or earnings announcements); public databases of listed company officers and directors; and other internal and external sources of information to detect possible insider trading. For example, the NASD Securities Observation News Analysis and Regulation system combines trade activity on NASDAQ, the American Stock Exchange, and the over-the-counter markets with news stories and other external sources of information to detect potential instances of insider trading and other potential violations of federal securities laws or NASD rules. SRO staff review the thousands of alerts generated by the electronic surveillance systems annually to identify those that are most likely to involve insider trading or fraud and warrant further investigation. In conducting reviews of these alerts, SRO staff consider such factors as the materiality of news, the existence of any previous news announcements, and the profit potential. If, in reviewing the trading associated with the alert, SRO staff determine there is a strong likelihood of insider trading, they can expand this review to a full investigation. In the course of a full investigation, SROs gather information from their member broker-dealers and the issuer of the traded stock to determine whether there is any relationship between those individuals who traded the stock and those individuals who had advance knowledge of the transaction or event. For example, SRO staff will typically request from their member broker-dealers the names of individuals and organizations that traded in advance of a corporate transaction or event, a process known as bluesheeting. These data are then cross-referenced with information the SRO staff obtain from the issuer of the stock, including a chronology of the events leading up to the corporate transaction or event and the names of individuals who had knowledge of inside information. SROs have created technology-based tools to assist in the identification of potential repeat offenders. For example, SROs can compare their blue sheets to a database called the Unusual Activity File (UAF), which includes data on suspicious trading activity identified by all SROs that are part of the Intermarket Surveillance Group, to help identify persons or entities that have been flagged in prior referrals or cases related to insider trading, fraud, or market manipulation. Some SROs have also developed other databases for their internal use.
For example, NASD developed a database similar to the UAF for suspicious trading activity it has identified. NYSE also has developed a database of individuals who are affiliated with entities that it considers at high risk for insider trading. When SROs find evidence of insider trading involving their members, they can conduct disciplinary hearings and impose penalties ranging from disciplinary letters to fines to expulsion from trading and SRO membership. Because SROs do not have jurisdiction over entities and individuals that are not part of their membership, they refer suspicious trading on the part of nonmembers directly to Enforcement. Although Enforcement staff do not have direct access to SRO surveillance data or recidivist databases like the UAF, several staff told us they are able to obtain any needed information from the SRO analysts who made the referrals. Data we reviewed from NASD and NYSE between fiscal years 2003 and 2006 showed that the SROs referred significantly more nonmembers to SEC for suspected insider trading than they referred members internally to their own enforcement staff. According to SRO staff, this may be because the majority of the entities and individuals who trade on the basis of material, nonpublic information do so as a result of connections to the issuers of the stocks traded, rather than through the investment adviser role that would involve member firms and their employees. Another possible explanation, according to SRO staff, is that individual registered persons (SRO members) typically conceal their misconduct by trading in nominee accounts or secretly sharing in the profits generated by nonregistered persons involved in the scheme. As a result, they said that concealed member misconduct is often exposed through evidence developed by SEC using its broader jurisdictional tools after the SRO has referred a nonmember to SEC. For example, they said that SEC can expose concealed member misconduct by fully investigating the nonregistered person's activities through documents such as telephone and bank records obtained by subpoena. SEC also has the ability to issue subpoenas to nonmembers to appear for investigative testimony. OCIE assesses the effectiveness of SRO regulatory programs, including enforcement programs, through periodic inspections. OCIE officials said that when evaluating SRO enforcement programs related to insider trading, their objective is to assess whether the parameters of the surveillance systems are appropriately set to detect abnormal movements in a stock's price or volume and generate an alert, the extent to which SRO policies and procedures direct the SRO staff to conduct thorough reviews of alerts and resulting investigations, and the extent to which SRO analysts comply with these policies and procedures and apply them consistently. OCIE staff said that when reviewing case files, one of their priorities is to assess the evidence upon which the SRO analyst relied when deciding to terminate the review of an alert or investigation. For example, they said that they will assess whether the analyst selected an appropriate period to review trading records (because suspicious trades may have occurred several days or weeks prior to the material news announcement), whether the analyst reviewed the UAF and internal databases for evidence of recidivism, and whether the analyst appropriately reviewed any other stocks or entities related to the trading alert.
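The following sketch illustrates, in schematic form, two of the steps just described: parameter-based alert generation and bluesheet cross-referencing against an issuer's chronology and the UAF. All thresholds, prices, volumes, and names are invented stand-ins; real SRO systems link many more data sources.

```python
def flag_alerts(closes, volumes, max_move=0.10, volume_mult=3.0, window=5):
    """Flag days when the price moves more than max_move or volume exceeds
    volume_mult times the trailing average volume -- a stand-in for the
    parameter-based alerts SRO surveillance systems generate."""
    alerts = []
    for i in range(window, len(closes)):
        move = abs(closes[i] / closes[i - 1] - 1)
        avg_vol = sum(volumes[i - window:i]) / window
        if move > max_move or volumes[i] > volume_mult * avg_vol:
            alerts.append(i)
    return alerts

closes = [20.0, 20.1, 20.0, 20.2, 20.1, 20.0, 26.5]   # jump on the last day
volumes = [900, 1000, 950, 1020, 980, 1000, 5200]
print(flag_alerts(closes, volumes))  # [6]

# Bluesheet-style cross-referencing after an alert fires (invented names):
bluesheet_traders = {"A. Jones", "B. Smith", "C. Liu"}  # traded before the news
issuer_insiders = {"B. Smith", "D. Patel"}              # had advance knowledge
uaf_recidivists = {"C. Liu"}                            # flagged in prior referrals

print(bluesheet_traders & issuer_insiders)   # {'B. Smith'}
print(bluesheet_traders & uaf_recidivists)   # {'C. Liu'}
```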
OCIE officials said that in light of the recent increase in merger and acquisition activity and the increased potential for insider trading, SROs are making greater efforts to detect attempts by individuals or firms to benefit on both sides of a merger or acquisition. For example, they said that where previously it was common for one SRO analyst to investigate any alerts generated from the movement of the target firm and for a different analyst to investigate any alerts generated from the movement of the acquiring firm—making it difficult to identify an account or individual that may have traded on both sides of the acquisition—SRO policies now generally require one analyst to review and investigate both stocks involved in a merger or acquisition. Generally speaking, mergers and acquisitions present opportunities for insider trading because the acquiring company typically must pay more per share than the current market price, causing the target firm's stock price to increase. In this case, an individual with knowledge of an upcoming acquisition could purchase the target's stock prior to the announcement and then sell the stock at the higher price after the announcement for a gain. An individual also could sell any holdings or sell short the stock of the acquiring firm if the individual believed that the acquiring firm's stock price would decrease after the announcement. Finally, an individual could attempt to buy the target firm and sell (or short sell) the acquiring firm in an attempt to benefit on both sides of an acquisition. In January 2007, OCIE completed sweep inspections of surveillance and investigatory programs related to insider trading at 10 SROs. As a result of its inspections, OCIE identified opportunities for improved coordination and standardization among SROs in monitoring and investigating possible insider trading. OCIE found that because each SRO at the time maintained its own surveillance systems, variances in the system parameters meant that stock or option movements might generate an alert at one SRO but not at another. Furthermore, OCIE found that because each SRO was responsible for monitoring every stock that traded on its market, the SROs were duplicating the initial screening of alerts. As a result of OCIE's then ongoing inspection, the options SROs submitted a plan to SEC to create a more uniform and coordinated method for the regulation, surveillance, investigation, and detection of insider trading in the options markets. SEC approved the plan, called the Options Regulatory Surveillance Authority (ORSA), in June 2006. The plan allows the options SROs to delegate part or all of the responsibility of conducting insider trading surveillance and investigations for all options trades to one or more SROs, with individual SROs remaining responsible for the regulation of their respective markets and retaining responsibility to bring disciplinary proceedings as appropriate. ORSA currently delegates this surveillance and investigative responsibility to the Chicago Board Options Exchange. The ORSA plan also provides for the establishment of a policy committee that is responsible for overseeing the operation of the plan and for making all relevant policy decisions, including reviewing and approving surveillance standards and other parameters to be used by the SRO performing the surveillance and investigative functions under the plan.
The committee also will establish guidelines for generating, reviewing, and closing insider trading alerts; specific and detailed instructions on how analysts should review alerts; and instructions on closing procedures, including proper documentation and the rationale for closing an alert. OCIE officials stated that they have met regularly with the options SROs to monitor the implementation of the plan and the development of related policies and procedures. According to the Commission, the ORSA plan should allow the options exchanges to more efficiently implement surveillance programs for the detection of insider trading, while eliminating redundant effort. As a result, OCIE officials believe the plan will promote more effective regulation and surveillance. According to OCIE officials, the equities SROs are currently drafting a similar plan for coordinating insider trading surveillance in the equities markets. However, instead of designating one SRO to conduct all insider trading-related surveillance, OCIE officials said that the current draft proposal would require each listing market, or its designee, to conduct insider trading surveillance for its listed issues, regardless of where trading in the security occurred. This includes reviewing alerts, pursuing investigations, and resolving cases through referrals (to SEC) or disciplinary action. OCIE officials said that the equities SROs anticipate voting on a proposed plan at the October 2007 Intermarket Surveillance Group meeting and submitting the plan to SEC by the end of 2007. Pursuant to sections 19 and 21 of the Securities Exchange Act of 1934, SEC may bring enforcement actions against an SRO either in federal court or through an administrative proceeding if it has found that an SRO has violated or is unable to comply with the provisions of the act and related rules and regulations, or if it has failed to enforce member compliance with SRO rules without reasonable justification or excuse. The act authorizes SEC to seek a variety of sanctions in an administrative proceeding, including the revocation of SRO registration, the issuance of a cease-and-desist order, or censure. An SRO may also agree to undertake other remedial actions in a settlement agreement with SEC. In addition to the remedies available in an administrative enforcement action, a district court in a civil enforcement action may impose civil monetary penalties and has discretion to fashion such other equitable remedies as it deems appropriate under the circumstances. Tables 2 through 11 summarize the 10 civil enforcement actions SEC brought against SROs from January 1995 through September 2007. For this report, we have included only those findings and terms of settlement related to SRO surveillance, investigative, or disciplinary programs (enforcement programs). As such, these summaries do not necessarily identify all findings and terms of the settlement agreements. Tables 12 to 22 include analyses of data from fiscal years 2003 to 2006 provided by SEC from its SRO system and CATS. This appendix provides specific analyses on the number and types of advisories; referrals; matters under inquiry (MUI); investigations; case actions; and case outcomes, by fiscal year and SRO. It also describes reasons that SEC closed MUIs and provides data on average and median investigation durations, by type of investigation.
In addition to the contact named above, Karen Tremba (Assistant Director), Nina Horowitz, Stefanie Jonkman, Matthew Keeler, Marc Molino, Omyra Ramsingh, Barbara Roesmann, and Steve Ruszczyk made key contributions to this report.
Self-regulatory organizations (SRO) are exchanges and associations that operate and govern the markets and are subject to oversight by the Securities and Exchange Commission (SEC). Among other things, SROs monitor the markets, investigate and discipline members involved in improper trading, and make referrals to SEC regarding suspicious trades by nonmembers. For industry self-regulation to function effectively, SEC must ensure that SROs are fulfilling their regulatory responsibilities. This report (1) discusses the structure of SEC's inspection program for SROs, (2) evaluates certain aspects of SEC's inspection program, and (3) describes the SRO referral process and evaluates SEC's information system for receiving SRO referrals. To address these objectives, GAO reviewed SEC inspection workpapers, analyzed SEC data on SRO referrals and related investigations, and interviewed SEC and SRO officials. To help ensure that SROs are fulfilling their regulatory responsibilities, SEC's Office of Compliance Inspections and Examinations (OCIE) conducts routine and special inspections of SRO regulatory programs. OCIE conducts routine inspections of key programs every 1 to 4 years, inspecting larger SROs more frequently, and conducts special inspections (which arise from tips or the need to follow up on prior recommendations or enforcement actions) as warranted. More specifically, OCIE's inspections of SRO surveillance, investigative, and disciplinary programs (enforcement programs) involve evaluating the parameters of surveillance systems, reviewing the adequacy of policies and procedures for handling the resulting alerts and investigations, and reviewing case files to determine whether SRO staff are complying with these policies and procedures. GAO identified several opportunities for SEC to enhance its oversight of SROs through its inspection program. First, although examiners have developed processes for inspecting SRO enforcement programs, OCIE has not documented these processes or established written policies relating to internal controls over these processes, such as supervisory review or standards for data collection. Such documentation could strengthen OCIE's ability to provide reasonable assurances that its inspection processes and products are subject to key quality controls. Second, OCIE officials said that they focus inspections of SRO enforcement programs on areas judged to be high risk. However, this risk-assessment process does not leverage the reviews that SRO internal and external auditors have performed, which could result in duplication of SRO efforts or missed opportunities to direct examination resources to other higher-risk or less-examined programs. OCIE officials told us that they plan to begin assessing SRO internal audit functions in 2008, including the quality of their work products, which would allow OCIE to assess the usefulness of these products for targeting its inspections. Finally, OCIE currently does not formally track the implementation status of SRO inspection recommendations; rather, management consults with staff to obtain such information as needed. Without formal tracking, OCIE's ability to efficiently and effectively generate and evaluate trend information, such as patterns in the types of deficiencies found or the implementation status of recommendations across SROs or over time, may be limited. SEC's Division of Enforcement uses an electronic system to receive referrals of potential violations from SROs.
These referrals undergo multiple stages of review and may lead Enforcement to open an investigation. From fiscal years 2003 to 2006, SEC received an increasing number of advisories and referrals from SROs, many of which involved insider trading. However, SEC's referral receipt and case tracking systems do not allow Enforcement staff to electronically search all advisory and referral information, which may limit SEC's ability to monitor unusual market activity and make decisions about opening investigations, and may limit management's ability to assess case activities, among other things.
Labor required states to implement major provisions of WIA by July 1, 2000, although some states began implementing provisions of WIA as early as July 1999. Services provided under WIA represent a marked change from those provided under the previous program, allowing for a greater array of services to the general public. WIA requires that many federal programs provide employment and training services through one-stop centers. WIA is designed to provide for greater accountability than under previous law: it established new performance measures, a requirement to use Unemployment Insurance (UI) wage data to track and report on outcomes, and a requirement to conduct at least one multi-site control group evaluation. When WIA was enacted in 1998, it replaced the Job Training Partnership Act (JTPA) programs for economically disadvantaged adults and youth and for dislocated workers with three programs—WIA Adult, Dislocated Worker, and Youth—that provide a broader range of services to the general public, no longer using income to determine eligibility for all program services. WIA programs provide for three tiers, or levels, of service for adults and dislocated workers: core, intensive, and training. Core services include basic services such as job searches and labor market information. These activities may be self-service or require some staff assistance. Intensive services include such activities as comprehensive assessment and case management—activities that require greater staff involvement. Training services include such activities as occupational skills or on-the-job training. Labor's guidance provides for monitoring and tracking for the adult and dislocated worker programs to begin when job seekers receive core services that require significant staff assistance. WIA excludes from the performance measures job seekers who receive core services that are self-service and informational in nature. WIA's youth program does not have three tiers of services, but instead requires that 10 youth services, referred to as program elements, be made available to all eligible youth. All youth who are determined eligible and receive WIA services are included in the performance measures. WIA is designed to provide for greater accountability than its predecessor program by establishing new performance measures, a new requirement to use UI wage data to track and report on outcomes, and a requirement for Labor to conduct at least one multi-site control group evaluation. According to Labor, performance data collected from the states in support of the measures are intended to be comparable across states in order to maintain objectivity in determining incentives and sanctions. The performance measures also provide information to support Labor's performance goals under the Government Performance and Results Act (GPRA), the budget formulation process using the Office of Management and Budget's (OMB) Program Assessment Rating Tool (PART), and the program evaluation required under WIA. In contrast to JTPA, for which data on outcomes were obtained through follow-ups with job seekers, WIA requires states to use UI wage records to track employment-related outcomes. Each state maintains UI wage records to support the process of providing unemployment compensation to unemployed workers. The records are compiled from data submitted to the state each quarter by employers and primarily include information on the total amount of income earned during that quarter by each of their employees.
Although UI wage records contain basic wage information for about 94 percent of workers, certain employment categories are excluded, such as self-employed persons, independent contractors, federal employees, and military personnel. According to Labor's guidance, if a program participant does not appear in the UI wage records, states may use supplemental data sources, such as follow-up with participants and employers, or other administrative databases, such as U.S. Office of Personnel Management or U.S. Department of Defense records, to track most of the employment-related measures. However, only UI wage records may be used to calculate earnings change and earnings replacement. (See table 1 for a complete list of the WIA performance measures and data sources used for tracking the measures.) Unlike JTPA, which established expected performance levels using a computer model, WIA requires states to negotiate with Labor to establish expected performance levels for each measure. States, in turn, must negotiate performance levels with each local area. The law requires that these negotiations take into account differences in economic conditions, participant characteristics, and services provided. To derive equitable performance levels, Labor and the states primarily rely on historical data to develop their estimates of expected performance levels. These estimates provide the basis for negotiations. WIA holds states accountable for achieving their performance levels by tying those levels to financial sanctions and incentive funding. States that meet their performance levels under WIA are eligible to receive incentive grants that generally range from $750,000 to $3 million. States that do not meet at least 80 percent of their WIA performance levels are subject to sanctions. If a state fails to meet its performance levels for 1 year, Labor provides technical assistance, if requested. If a state fails to meet its performance levels for 2 consecutive years, it may be subject to up to a 5-percent reduction in its annual WIA formula grant. At the end of program year 2001, four states received financial sanctions. Labor determines incentive grants or sanctions based on the performance data that states must submit each December in their annual reports. States also submit quarterly performance reports, which are due 45 days after the end of each quarter. In addition to the performance reports, states submit their updates for the Workforce Investment Act Standardized Record Data (WIASRD) every January. All three submissions primarily represent participants who have exited the WIA programs within the previous program year (July 1 – June 30). WIA also requires Labor to conduct at least one multi-site control group evaluation by the end of fiscal year 2005. WIA requires that evaluations address the general effectiveness of programs and activities in relation to costs and the impact of these services on the community and participants involved. WIA requires that states use the one-stop center system to provide services for many employment and training programs. Seventeen programs funded through four federal agencies are now required to provide services through one-stop centers under WIA. Table 2 shows the programs that WIA requires to provide services through the one-stop centers (termed mandatory programs) and the related federal agencies.
Among the mandatory one-stop partner programs listed in table 2 are the Employment Service (Wagner-Peyser), veterans' employment and training programs, the Senior Community Service Employment Program, and employment and training programs for migrant and seasonal farm workers and for Native Americans, administered by the Department of Labor; Vocational Education (Perkins Act), administered by the Department of Education; and programs administered by the Department of Health and Human Services and the Department of Housing and Urban Development (HUD). Under WIA, employers are expected to play a key role in establishing regional workforce development policies, deciding how services should be provided in the one-stop, and overseeing one-stop operations. Employers, who are encouraged to use the one-stop system to fill their job vacancies, are also seen as key one-stop customers under WIA. WIA performance data are useful for providing a long-term national picture of program outcomes; however, these data are less useful for providing information about current performance, and they represent only a small portion of the job seekers who received WIA services. UI wage records, the primary data source for tracking WIA performance, provide a fairly consistent national view of WIA performance and allow for tracking outcomes over time. At the same time, the UI wage records have some shortcomings: they cannot be used to track job seekers who get jobs in other states unless states share data; they do not cover certain categories of workers, such as self-employed persons; and they are not available on a timely basis. States are making progress in overcoming some of these shortcomings by sharing wage data with other states and supplementing information on participants not covered by the wage data. Despite this progress, time lags and other factors affect the timing of states' reports on their annual performance to Labor and, subsequently, Labor's reports to Congress. Most of the outcomes data reported in a given program year actually reflect participants who left the program during the prior year, limiting their usefulness for gauging current program performance. In addition, the states' annual reports reflect only a small portion of the job seekers who receive WIA services because, under the law and Labor's guidance, not all job seekers who use one-stop services are required to be included in the performance reports. WIA annual performance reports, which provide a summary of states' performance on the 17 core measures, are useful for providing a national perspective on outcomes achieved over time. The information presented in the annual reports compares states' negotiated performance levels with their actual performance levels. (See table 3 for an example of national performance levels for WIA's job placement rate, called the entered employment rate, in program year 2002.) These reports provide Congress with an annual picture of how well the WIA program is meeting its long-range goals to increase the employment, retention, and earnings of participants. The WIA performance data are also useful in helping Labor assess quantitative, outcomes-oriented goals for its strategic plans, annual performance plans, and annual performance reports required by the Government Performance and Results Act (GPRA). In its annual performance report for program year 2002, Labor used the WIA outcome measures to assess its progress in meeting its strategic goals to increase employment, earnings, and assistance to adults and to increase the number of youth in education or making a successful transition to work.
Most of the performance outcomes in the annual reports are measured using UI wage records: 13 of the 17 WIA performance measures rely on UI wage records as the primary data source for tracking employment outcomes. (See table 4.) States maintain UI wage records to determine whether unemployed workers qualify for unemployment compensation. The records are compiled from data submitted to the state each quarter by employers and primarily include information on the total amount of wages paid to employees in the quarter. However, UI wage records for most states do not include information on the number of hours an employee worked during the quarter or on when in the quarter the wages were earned. For example, the UI wage records for most states would not show that one employee may have worked 40 hours a week for the entire quarter and another worker may have worked 35 hours a week for only the last 2 weeks of the quarter; the records would provide only an overall snapshot of the total amount of wages paid to both employees for the quarter. The UI wage records provide a common yardstick for long-term comparisons across states because they contain wage and employment information on about 94 percent of the working population in the United States, and all states collect and retain these data. In addition, UI wage records can be used as a common data source to track employment outcomes across multiple programs, such as vocational education and the Temporary Assistance for Needy Families (TANF) programs. Further, researchers have found that wage record data are more objective and cost-effective than traditional survey information. For example, one state estimated that the cost of doing participant surveys, as was done under JTPA, was approximately $13.25 per participant, compared with the cost of automated record matching to UI wage records, which was less than $0.05 per participant. UI wage records make it easier to track longer-term measures, such as those that assess earnings change, earnings replacement, and employment retention 6 months after participants leave the program. Without UI wage records, tracking these outcomes would require contacting or surveying former participants, perhaps multiple times, after they leave the program. UI wage records also have some shortcomings. State wage record databases include wage information only on employment within the state; they do not track job seekers who find jobs in other states. States cannot readily gain access to UI wage records from other states, making it difficult to track individuals who receive services in one state but get a job in another. To help states gain access to wage information in other states, Labor established the Wage Record Interchange System (WRIS), a clearinghouse that makes UI wage records available to states seeking employment and wage information on their WIA participants, and states are increasingly making use of this option. Nationwide, 38 states reported that they currently participate in WRIS, an increase from the 15 states that told us they were planning to participate or participating in WRIS in 2001. States may also elect to establish their own agreements to share wage information with other states, often those that share a common border. Seven of the 38 states reported that they maintain their own interstate agreements with other states in addition to participating in WRIS. One state official we interviewed said the state maintains its own agreements in addition to WRIS so that it can get data more quickly than through WRIS.
According to a Labor official, states often retrieve wage record data from other states within a matter of days using WRIS. However, the process can take much longer, up to a couple of weeks, if participating states take longer to respond to requests. In addition, even though UI wage records contain information on about 94 percent of workers, they do not contain information on certain employment categories of workers, such as self-employed persons, most independent contractors, military personnel, federal government workers, and postal workers. To compensate for the 6 percent of workers who are not in the UI wage records, Labor allows states to report employment outcomes using other data sources, for example, by contacting participants after they leave the program, to track WIA participants who are employed in these uncovered occupations. We found that 39 states reported relying on this supplemental information to report on participants not covered by the wage data. Twenty-three states told us that without the supplemental data, they would not have been able to show that they met minimum performance levels on at least one measure, and 10 of these states said they would not have been able to show that they met minimum performance levels on 10 of the measures in program year 2001. (See fig. 1.) Labor also allows states to use other employment and administrative data sources to track employees excluded from the UI wage records, such as records from the U.S. Office of Personnel Management, the U.S. Postal Service, and the U.S. Department of Defense. Eight states reported that they currently fill gaps in coverage using other administrative and employment data sources. Labor has recently established an agreement with the U.S. Office of Personnel Management and is working on agreements with the U.S. Department of Defense and the U.S. Postal Service to obtain employment data through a clearinghouse similar to WRIS to help more states obtain these outcome data. Labor plans to begin testing this new clearinghouse in program year 2004. (See app. II for a detailed listing of states' use of UI wage records and other data sources for reporting on WIA outcomes.) UI wage records also involve significant time delays between the time an individual gets a job and the time the job appears in the records. State procedures for collecting and compiling wage information from employers can be slow and time-consuming. Data are collected from employers only once every quarter, and employers in most states have 30 days after the quarter ends to report the data to the state. For example, the wage report for the last calendar quarter of the year (ending on December 31) is due to the state on January 31. After the state receives the wage report, the data must be processed. Many employers report the data electronically, but some employers, especially small businesses, are allowed to submit data in paper format, which then must be converted to electronic media. After data entry, the information must be checked for errors and corrected. All these steps take time, which can delay the availability of the wage record data for reporting on outcomes by several months. According to our survey, 28 states estimated they get information on job placement within 4 months after participants exit the program, and 44 states have this information within 6 months. (See fig. 2.)
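A minimal sketch of the automated record matching discussed above, including the supplemental-source fallback states use for workers not covered by UI data, appears below. The records and identifiers are invented, and the calculation is deliberately simplified relative to Labor's actual measure definitions.

```python
# Invented records; real matching runs against state UI wage files.
exiters = [
    {"id": "P1", "exit_quarter": (2002, 3)},
    {"id": "P2", "exit_quarter": (2002, 3)},   # self-employed: not in UI data
]
ui_wages = {("P1", (2002, 4)): 6500.00}        # (person, quarter) -> wages paid
supplemental = {"P2"}                          # employment found via follow-up

def next_quarter(q):
    """Return the (year, quarter) following q."""
    year, qtr = q
    return (year + 1, 1) if qtr == 4 else (year, qtr + 1)

employed = 0
for person in exiters:
    follow = next_quarter(person["exit_quarter"])
    # Match to UI wage records first; fall back to supplemental sources.
    if (person["id"], follow) in ui_wages or person["id"] in supplemental:
        employed += 1

print("share employed after exit:", employed / len(exiters))  # 1.0
```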
The time lags in receiving wage data, together with the use of longer-term outcome measures, affect when outcomes are reported and limit the data's usefulness for gauging current performance. All 13 of WIA's employment-related outcomes are measured after participants leave, or exit, the program, and some measures, such as those that assess wage changes and employment retention, require a 6-month wait. To compensate for time lags, Labor devised a reporting structure that reaches back to the prior year to provide a complete year's worth of outcome data on WIA participants for the annual reports. For example, for the employment-based measures, participants who are reported on in the program year 2002 annual report, provided to Labor in December 2003, left WIA in the four quarters between October 2001 and September 2002 and may have received services much earlier. The amount of time between when participants receive services and when their outcomes are reported to Labor varies, but it is about 1½ years at a minimum. A hypothetical example of two participants who would be included in the program year 2002 report illustrates this point. Sue registered in April 2001, participated in the program for at least 6 months, and left between October and December 2001, taking about 32 months from the time of registration until her outcomes were reported. Joe, on the other hand, did not register until July 2002 and participated in and left the program within 3 months, taking about 17 months from the time of registration until his outcomes were reported. (See fig. 3.) WIA performance data represent a small proportion of the job seeker population receiving services at one-stops, making it difficult to know what the overall WIA program is achieving. Most one-stop customers who participate in self-directed services and receive only limited staff assistance, for example to conduct a job search, are not reflected in the WIA performance reports. This group is estimated to be the largest portion of customers served under WIA. For example, one of the local areas in our study that tracks each one-stop customer told us that only about 5.5 percent of the individuals who walked into its one-stops in fiscal year 2003 were registered for WIA services. The current law excludes job seekers who receive services that are self-service and informational in nature. Labor's guidance tells states to register adults and dislocated workers who receive core services that require significant staff assistance designed to help with job seeking or acquiring occupational skills, but states have flexibility in deciding what constitutes significant staff assistance. As a result of this flexibility, some local areas register a smaller proportion of participants than others, and in an earlier report, we said that local areas differed on when they registered WIA customers. In our recent visits to 4 states, we found that states and localities still differ on whom they track: some local officials said they register job seekers who received core services that required significant staff assistance, and others said they do not register participants until they receive intensive services. In addition, 21 of the 50 states we surveyed reported that they have instituted their own policies to more specifically define when registration should occur, suggesting that there is variation in interpreting Labor's guidance.
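The 32- and 17-month figures in the Sue and Joe example above can be reproduced with simple month arithmetic; a minimal sketch (December 2003, the month outcomes were reported to Labor, is taken from the report):

```python
def months_between(start_year, start_month, end_year, end_month):
    """Whole months from registration until outcomes are reported."""
    return (end_year - start_year) * 12 + (end_month - start_month)

# Program year 2002 outcomes were reported to Labor in December 2003
print(months_between(2001, 4, 2003, 12))  # Sue: 32 months
print(months_between(2002, 7, 2003, 12))  # Joe: 17 months
```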
Some experts told us that local workforce areas do not get adequate credit for serving everyone, making it difficult to show what is being achieved with available funding. With assistance from states, local areas manage WIA performance and assess one-stop centers by collecting timely performance data and making use of a variety of performance information. To understand how well they are doing in meeting their performance levels, most local areas directly contact former participants or employers to collect interim WIA performance data that are not readily available from UI wage records. States provide assistance to local areas in a variety of ways, ranging from supporting their information technology (IT) systems to training local area staff. While states and local areas must meet performance goals for WIA, no similar goals exist for the overall one-stop system, the service delivery system required under WIA for most federally funded employment and training services. Nonetheless, some states and many local areas have developed a range of measures to help them assess how well the one-stop is doing. Despite the progress states and local areas have made in developing and using interim outcome information, states and local areas told us they would like more help from Labor in collecting and disseminating promising practices on interim ways to assess WIA performance. According to our survey, many states play an active role in helping local areas monitor how well they are doing in meeting their performance levels. The assistance they provide ranges from ensuring that local areas have ready access to participants' UI wage records to developing IT systems and training local area staff on implementing WIA performance measures. To ensure local areas have ready access to wage record information on their participants, 23 states reported that they give local areas some form of electronic access to UI wage data for their WIA participants. Ten of these states give local areas direct online access to the UI wage reporting system, making information available to local officials as quickly as it is reported to the state; the others give local areas access once information has been merged into the statewide WIA reporting system. According to officials in Florida, having direct access to UI wage data allows them not only to monitor performance levels but also to develop industry and wage profiles, tailor training programs to meet regional needs, and obtain contact information for former participants, facilitating follow-up with individuals who could not otherwise be found. When states do not give local areas ready access to UI wage data, as might occur in states with restrictive privacy laws, state officials usually provide local areas with standard reports on their WIA progress, either for individual WIA participants or, most often, aggregated across all WIA participants in the local area. In addition to helping provide timely information, almost all states are supporting local areas' IT efforts. According to our survey, 47 states have established or are in the process of establishing statewide IT systems to help local areas organize, track, and report WIA performance data. In about three-fourths of these states, the statewide IT systems allow the local areas to produce special reports that are tailored to local tracking needs and can report information for the local areas to use at the one-stop center, service provider, or case manager level.
Although most local areas reported that they use a statewide system to help meet federal reporting requirements, half also use a locally developed IT system in combination with a statewide IT system. Local officials we met with often commented that they use a separate IT system because they do not find their state systems useful for managing the day-to-day operations of a WIA program. As a result of needing both a statewide and a local system, almost half of local areas reported that at least some, if not all, of their one-stop staff must enter the same WIA information into at least two IT systems. Most states provide a range of other support services to local areas to help them manage their WIA performance requirements and to understand what implementation approaches work better than others in providing one-stop services, according to our survey. States reported they most often provide local areas with more specific written guidance or notices that explain federal guidance on the WIA performance measures and performance reports. In addition, about 90 percent of local areas told us that their states conduct training, make presentations, and hold regular meetings with local staff about WIA performance measures. To help local areas better understand which approaches work best, several states have conducted special studies of the one-stop system. Sixteen states told us they have recently conducted studies on program implementation and processes; 11 states told us they have done return-on-investment studies; and 4 states have done impact evaluations that use control groups. Nationwide, half of the local areas believe that having a strong relationship with their state greatly helps them achieve their WIA performance levels. Because UI wage data suffer from time delays, about three-fourths of local areas collect outcome information from other sources to help them assess whether they are meeting their WIA performance levels and to help them manage their programs. Over 75 percent of local areas reported that they directly follow up with participants after they leave the program, collecting job placement or earnings information to help fill gaps until the data are available from the UI wage records. Sometimes local officials will also follow up with employers to verify employment or collect other documentation, such as pay stubs or W-2 forms. If outcomes do not appear in UI wage records over time, many local areas will report the findings from these other data sources in their WIA performance reports to the state. Local officials in a rural area of Pennsylvania told us that collecting this interim outcome data is so important to assessing their progress toward their performance levels that they provide small gift certificates to former participants who periodically report back to WIA staff. According to these officials, this strategy of obtaining follow-up data saves considerable staff time and also increases their performance levels by more completely capturing information on participants. Nearly all of the local areas reported on our survey that they track other types of interim indicators to manage their WIA programs, most often the number of registered WIA participants, the services provided to WIA participants, the number of participants who have completed training, and the number of WIA exiters. Over half of these local areas report these data to decision makers on at least a monthly basis.
About 80 percent of local areas track some kind of cost information, such as cost per participant or cost per outcome, and 24 percent report this information at least monthly. (See fig. 4.) Although these indicators may not be directly tracked and reported under WIA, they are useful for helping local officials know the number of participants who will be counted in their WIA measures. Furthermore, in some cases, these interim indicators also help the local areas predict their WIA performance outcomes. For example, one local official told us that knowing the number of participants who complete training helps him predict the number of participants who will find a job. Overall, nearly half of local areas reported that this type of interim information greatly helped them meet or exceed their performance levels. Despite the progress states and local areas have made in developing and using interim outcome information, nearly all states and local areas reported they would like more help from Labor in collecting and disseminating promising practices on interim indicators to assess WIA performance. Because meeting WIA performance levels may affect future funding, most local areas hold service providers accountable and actively monitor their WIA performance levels. Through our survey, we found that over 80 percent of local areas hold their service providers accountable by incorporating negotiated performance levels in their contracts. In addition, nearly 80 percent of local areas establish goals for the number of participants who are registered in or exited from WIA. Fewer local areas (24 percent) establish pay-for-performance contracts, and 18 percent provide financial incentives to their service providers. (See table 5.) Officials from one local area that we visited told us they provide monetary bonuses to providers that exceed their WIA goals and withhold 20 percent of payments from providers that do not reach their WIA goals. In addition, over 80 percent of local areas nationwide reported that having staff devoted to monitoring and managing WIA performance greatly helps them achieve or exceed their levels. Once final WIA performance information is available, local areas use this information to assess program services over time and to guide future program development. Most often, local areas reported that they use WIA performance information to modify their programs. We found that about two-thirds of local areas use performance information to a great extent to help them identify areas for program improvement and adopt new program approaches. Over half of local areas use their WIA performance information to analyze trends over time and prepare strategic plans. (See fig. 5.) While WIA requires that officials monitor outcomes for all job seekers who receive staff-assisted core, intensive, and training services funded by WIA, there is no requirement to track those who receive self-directed core services, who may be the majority served under WIA. In addition, Labor does not require that states and local areas measure the overall performance of the one-stop system. Nonetheless, most states and local areas have developed ways to assess the performance of their one-stops, using four basic types of indicators: job seeker measures, employer measures, program partnership measures, and family and community indicators. (See fig. 6.)
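As an illustration of the pay-for-performance arrangements described above, the sketch below encodes one such payment rule. The 20 percent holdback figure comes from the local area described above; the 5 percent bonus rate is assumed for illustration, since the report does not specify bonus amounts.

```python
def provider_payment(base_payment, performance, goal,
                     holdback=0.20, bonus=0.05):
    """Illustrative pay-for-performance rule; the bonus rate is assumed."""
    if performance < goal:
        return base_payment * (1 - holdback)  # withhold 20 percent
    if performance > goal:
        return base_payment * (1 + bonus)     # bonus for exceeding the goal
    return base_payment                       # goal met exactly

print(round(provider_payment(100_000, performance=0.68, goal=0.72), 2))
# 80000.0 -- provider missed its goal, so 20 percent is withheld
print(round(provider_payment(100_000, performance=0.75, goal=0.72), 2))
# 105000.0 -- provider exceeded its goal, so an assumed 5 percent bonus
```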
Even without a federal requirement to do so, according to our survey, almost 90 percent of local areas gather information on one-stop job seekers, even if the job seekers are not registered in any particular federal program. Most often, local areas reported that they require the one-stop centers to track and report the number of job seekers who visit the one-stop in a single time period, usually through a paper-and-pencil or computer log. We also found that 58 percent of the local areas are collecting information on job seekers who repeatedly visit one-stop centers, sometimes through electronic means. About 20 percent of all local areas reported using electronic swipe cards to track job seekers in their one-stop centers. These swipe cards, similar to membership or grocery store discount cards, are issued to each job seeker using the one-stop and contain unique identifying information that can be read each time the job seeker accesses services. For example, according to local officials in Philadelphia, they issue swipe cards to job seekers and scan these cards to record both the services received (such as using computers in the resource room, attending orientation workshops, or talking with case managers) and the date and time the services were provided. Using data from this system, one-stop managers can assess traffic flow and schedule staff accordingly, and may eventually be able to link participants and services to outcomes achieved. Officials also told us they are using demographic information from an analysis of swipe card data to target marketing efforts and to develop services more strategically. In addition to counting the number of job seekers who visit the one-stop center, we found that local areas are tracking how many program referrals job seekers receive, how satisfied they are with services, and what types of outcomes they achieve. Over half of local areas reported that they survey job seekers who visit the one-stop to gauge their satisfaction with services. For example, a one-stop center in Utah that we visited not only uses a one-stop satisfaction survey, but officials also periodically contact one-stop customers to ask how they liked the services. According to our survey, some of the local areas said that having job seeker satisfaction information was one of the best ways to assess the one-stop system. Many local areas collect more in-depth information on all one-stop job seekers: over one-third collect demographic characteristics, and over one-fourth monitor outcomes, such as whether job seekers got a job and at what wages. (See fig. 7.) Many local areas also track information on employers' use of one-stops. About 70 percent of local areas nationwide reported that they require one-stop centers to track some type of employer measure, such as the number of employers that use one-stop services, how many hire one-stop customers, and the types of services that employers use. To gauge employer involvement, local areas most often require the one-stops to count and report the number of employers that use one-stop services. Over 40 percent of local areas require one-stops to track the number of employers that repeatedly use one-stop services. For example, a one-stop center in Utah we visited tracks employers that repeatedly use one-stop services and those that have not used services in a while. It uses this information to reach out to employers who have not returned for services, encouraging them to use one-stop services again.
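A minimal sketch of how a swipe-card log like the one described above might support traffic-flow analysis and repeat-visitor counts follows; the data layout, card identifiers, and service names are hypothetical, not taken from Philadelphia's actual system.

```python
from collections import Counter
from datetime import datetime

# Hypothetical swipe log entries: (card ID, service used, timestamp)
swipes = [
    ("card-0012", "resource room", datetime(2003, 6, 2, 9, 15)),
    ("card-0047", "orientation workshop", datetime(2003, 6, 2, 9, 40)),
    ("card-0012", "case manager", datetime(2003, 6, 3, 14, 5)),
]

# Traffic by hour of day helps managers schedule staff accordingly
traffic_by_hour = Counter(ts.hour for _, _, ts in swipes)

# Cards seen on more than one day indicate repeat visitors
visit_days = {}
for card, _, ts in swipes:
    visit_days.setdefault(card, set()).add(ts.date())
repeat_visitors = [c for c, days in visit_days.items() if len(days) > 1]

print(traffic_by_hour)   # Counter({9: 2, 14: 1})
print(repeat_visitors)   # ['card-0012']
```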
To understand how employers view the one-stop services they received, 60 percent of local areas reported that they collect information on employer satisfaction. A smaller number, about 20 percent of local areas, track information on market penetration, such as the number of employers in the labor market that could potentially use one-stop services. For example, Philadelphia officials told us they measure market penetration by comparing the number of employers that use the one-stop center with the number of employers in the community as a whole. (See fig. 8.) Most of the programs that provide services through the one-stop system have their own performance measures, but as we have reported in the past, these measures cannot be readily summed to obtain an overall measure of one-stop performance. However, one-third of the local areas told us that they combine in one report some of the key federal measures for the various one-stop programs, including wages at employment or other earnings indicators, and use this report to assess the one-stop system as a whole. For example, Florida officials produce a report, called the Red and Green report, that assembles for each local area outcomes on 22 measures from different one-stop programs, such as WIA, Wagner-Peyser, and TANF. Weaker program outcomes are identified in red and stronger outcomes in green. Officials use this report to assess performance, diagnose weak spots, and predict long-term outcomes across one-stop partners. More often, local areas have gone a step further and have identified outcomes they consider to be key, developing common definitions for these measures to be used across programs. Just over half of the local areas reported in our survey that they track cross-cutting employment measures, such as job placement, and a little less than half said they track wages at placement and employment retention across programs. For example, Utah developed a set of outcome, process, efficiency, and activity measures to gauge the performance of all of its one-stops and to ensure alignment with agency goals and objectives. These measures include entered employment, earnings increase, and employment retention across the Wagner-Peyser, WIA, TANF, Trade Adjustment Assistance, and Food Stamp Employment and Training programs. In addition to tracking outcomes for the various one-stop partners, some local areas assess their one-stop systems by measuring the level of coordination among one-stop partners, as well as the range and quality of services they provide. Nearly 40 percent of local areas we surveyed said that they use indicators, such as increased coordination among partners and the number of referrals partners made, to assess how well the overall one-stop system is operating. For example, one local area reported it is developing a one-stop report card that will track the flow of customers through the system and monitor each program's contribution to the services provided, including the results of program referrals. It will use this report card to target areas that need attention. To ensure the one-stop system is providing quality services, some local areas we visited also conduct mystery shopper reviews, in which individuals posing as employers or job seekers evaluate the quality of the services they receive. Michigan conducts such mystery shopper visits at all of its one-stops over the course of a year to assess the quality of customer services, including how courteous, professional, and knowledgeable one-stop staff are.
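A report like Florida's Red and Green report, described above, reduces to comparing each outcome with its negotiated level and color-coding the result. The sketch below is a minimal illustration only; the measure names and numbers are invented, and the actual report covers 22 measures across multiple programs.

```python
# Hypothetical outcomes and negotiated levels for one local area
outcomes = {"entered employment": 0.74, "employment retention": 0.81,
            "earnings change": 2950.0}
negotiated = {"entered employment": 0.70, "employment retention": 0.84,
              "earnings change": 3100.0}

# Outcomes at or above the negotiated level are green; weaker ones are red
for measure, actual in outcomes.items():
    flag = "GREEN" if actual >= negotiated[measure] else "RED"
    print(f"{measure:25s} {actual:>8} vs {negotiated[measure]:>8}  {flag}")
```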
The state receives a comprehensive report of each visit and uses this information to target technical assistance. A few local areas look outside their one-stops to assess how well one-stop services are meeting the needs of the family and the community. In their written comments on our survey, several local areas told us that they consider some type of community indicator, such as changes in the local unemployment rate or increases in the average household income in the local area, to be the best way to determine the overall effectiveness of their one-stop system. Some local areas focus on indicators of family well-being, such as family self-sufficiency (the ability of families to support themselves financially), to assess whether their one-stop systems are meeting family needs. One rural one-stop in Michigan even uses some indicators that are not related to income. These local officials told us they track a collection of family measures, such as whether families are getting the child care they need and how well the children are doing at home and at school, to understand how well the one-stop is meeting the needs of the family. Although Labor has taken steps to improve WIA's performance measurement system and assess one-stops, some of its efforts do not go far enough. Labor has commissioned a study of adjustment methods that would better take into account economic and demographic differences when negotiating performance levels. However, even if an acceptable model is developed, Labor has made no commitment to put a standard adjustment method in place nationally. To improve the quality of WIA's performance data, Labor has initiated a data validation project. Labor is taking a significant step toward measuring one-stop outcomes, but a planned change would restrict the use of supplemental data to fill gaps in UI wage records. While Labor has plans to conduct impact studies, the department will not meet WIA's requirement to conduct an impact study by 2005, and without such a study, little will be known about WIA's effectiveness. Labor has commissioned a study of adjustment methods that could be used to set expected performance levels during the negotiations process, but this effort does not go far enough. WIA requires that annual negotiations to establish expected performance levels consider differences in economic conditions, participant characteristics, and services provided, factors that can have a significant effect on the performance levels states and local areas are expected to achieve. However, many of the state and local officials we interviewed said they did not think these factors were adequately addressed in the negotiations process, and as a result they think some of their performance levels were set too high for the current economy. For example, some local officials said that their negotiated performance levels on the earnings change and earnings replacement measures were based on a stronger economy and did not reflect recent increases in the unemployment rate. Nationwide, 22 states reported that they are at risk of not meeting at least 80 percent of their negotiated performance levels on one or more of the WIA measures for program year 2002. (See fig. 9.) Further, 10 states reported that they are at risk of receiving financial sanctions on one or more measures for program year 2002.
To address states' concerns, Labor has commissioned a study of adjustment methods, such as the type of model used under JTPA, which adjusted for factors beyond the control of local programs, such as high unemployment or a high concentration of non-English-speaking program applicants. The JTPA model assigned adjustment factors and weights for each performance measure using a multiple regression analysis, predicting how well a local area might do based on the relevant factors. For example, the model would assign a lower expected performance level to a local program serving extremely disadvantaged participants in an economically depressed area and a higher expected performance level to a local program serving job seekers who are nearly ready to get a job in an area with good economic conditions. All states and nearly all local areas we surveyed told us they would like Labor to use a model that can adjust for varying economic and population factors. Although Labor is studying adjustment methods, even if an acceptable model is developed, it has made no commitment to implement such an adjustment method nationally. Some states currently use their own adjustment model or other methods in the negotiation process to account for factors beyond the control of local programs, but Labor has not yet taken steps to increase consistency across states as it did under JTPA. According to our survey, nine states relied to a great extent on a regression model or other method to establish the levels they negotiated with Labor for program year 2004. Under JTPA, Labor allowed states flexibility to develop their own adjustment procedures, but it established standard parameters to govern the adjustment methods used by states. These parameters addressed the procedures for adjusting performance levels, the quality of data, and the factors that could be used for adjustments. For example, the procedures for adjusting performance levels were required to be objective and equitable across all local areas. In addition, Labor developed optional adjustment models that states could use because it recognized that not all states and local areas have the expertise and resources necessary to develop adjustment procedures. Without standard parameters, the process will lack consistency, and some states may be at a disadvantage in negotiating their performance levels. Issues have been raised about the quality of the performance data that Labor uses to assess program performance. As we mentioned previously, Labor allows flexibility in determining which participants to track for reporting purposes. This flexibility leads to variations in reporting, which raises questions about both the accuracy and the comparability of states' performance data. In addition, we recently reported that performance data submitted by states in quarterly and annual reports were not sufficiently reliable to determine outcomes for the WIA programs. Furthermore, Labor's Office of Inspector General has said that there is little assurance that the states' performance data for all WIA programs are either accurate or complete because of inadequate oversight of data collection and management at the federal, state, and local levels. Labor has initiated a new data validation project to improve the quality of the performance information collected and reported under WIA.
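Returning to the JTPA-style adjustment model described above, its basic mechanics can be sketched as an ordinary least squares regression that predicts an expected performance level from local conditions. The data and the choice of factors below are entirely hypothetical and stand in for whatever factors an actual model would weight.

```python
import numpy as np

# Hypothetical historical data for local areas: columns are the local
# unemployment rate and the share of hard-to-serve participants; y is
# the entered employment rate each area actually achieved.
X = np.array([[4.1, 0.20], [5.5, 0.35], [7.2, 0.50],
              [6.0, 0.30], [8.1, 0.60], [4.8, 0.25]])
y = np.array([0.78, 0.71, 0.60, 0.69, 0.55, 0.75])

# Fit a linear model with an intercept (ordinary least squares)
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predicted (adjusted) expected level for an area with 7.5 percent
# unemployment serving a very disadvantaged population; the model
# assigns it a lower expected level than the sample average.
area = np.array([1.0, 7.5, 0.55])
print(round(float(area @ coef), 3))
```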
Labor's data validation project includes developing procedures and accuracy standards to help states validate that WIA performance and participant data are correctly reported. For this project, Labor developed data validation handbooks and software and required states to begin validating program year 2002 data, which were reported to Labor on December 1, 2003. States are required to conduct two types of data validation: (1) reviewing samples of WIA participant files and (2) assessing whether reporting software accurately calculated the performance measures. Labor provided software to help states generate the aggregate information required for performance reports, such as performance outcomes. If states elect to use Labor's software, they are not required to validate the calculations. At the time of our survey (December 2003 through February 2004), we found that 41 states had begun using Labor's data validation software. Labor also plans to hold states accountable for meeting accuracy standards, beginning in the third year of validation. Once these accuracy standards are in place, states failing to meet the standards may lose eligibility for incentive awards or, in cases with significant deviations from the standards, may be sanctioned. Labor is taking a significant step toward measuring outcomes across one-stop partners by developing definitions for a set of common performance measures. The Office of Management and Budget established a set of common measures to be applied to all federal employment and training programs administered by Labor, Education, Health and Human Services, Veterans Affairs, Interior, and Housing and Urban Development. (See table 6.) Labor has developed standard definitions for calculating these measures across all of its Employment and Training Administration programs. (See table 7.) This will allow Labor to sum outcomes across all its programs to provide a more uniform picture of outcomes achieved. According to a department official, Labor worked with other federal agencies to get agreement on common data sources and common language, where possible. For example, Labor is working on developing a process, using WRIS, that would allow other federal programs to use UI wage records to track outcomes. As part of the common measures, Labor plans to require one-stops to track all participants who walk through the door of a one-stop center and receive any one-stop service, regardless of which program provides the service. According to Labor, tracking all one-stop job seekers will enable officials to obtain information about who is served, what services are provided, which partner programs provided services, and what outcomes are achieved. While these changes can provide more information on job seekers, there is no provision for any measure of employer involvement in the one-stops, and the experts and state and local officials we interviewed said that at least one measure is needed to address employer usage. While most of Labor's policies for the common measures can advance measurement across one-stop partners, Labor plans to rely almost entirely on the UI wage records and discontinue the use of supplemental data for filling gaps in those records. Labor officials told us that they are making this change to address concerns about the quality of the supplemental data being collected. Under Labor's current guidance, supplemental data must be documented. However, the department has no systematic process in place to monitor the accuracy of these supplemental data.
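The second type of validation described above, checking that reporting software calculated the measures correctly, amounts to recomputing each aggregate from participant records and comparing it with the reported figure. The sketch below is a minimal illustration; the record fields and the tolerance are assumptions, not elements of Labor's actual handbooks or software.

```python
# Hypothetical participant records; real files carry many more fields
records = [
    {"exited": True, "employed_q1_after_exit": True},
    {"exited": True, "employed_q1_after_exit": False},
    {"exited": True, "employed_q1_after_exit": True},
]

# Recompute the entered employment rate from the underlying records
exiters = [r for r in records if r["exited"]]
recomputed = sum(r["employed_q1_after_exit"] for r in exiters) / len(exiters)

# Compare against the aggregate the reporting software produced
reported_rate = 0.667  # value taken from the state's report (hypothetical)
assert abs(recomputed - reported_rate) < 0.005, "reported rate fails check"
print(f"recomputed {recomputed:.3f} matches reported {reported_rate:.3f}")
```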
If Labor elects to replace the current definitions of the WIA entered employment rate and earnings retention measure with the common measure definitions, this restriction on the use of supplemental data could have a significant impact on the ability of states and local areas to meet their negotiated performance levels. In addition, Labor's new data validation project could help ensure the accuracy of supplemental data that are collected at the local level. While Labor has plans to conduct impact studies, the department will not meet WIA's requirement to conduct at least one multi-site control group evaluation by fiscal year 2005. This type of impact study is important because outcome measures alone cannot show whether an outcome is a direct result of program participation or whether it results from other influences, such as the state of the local economy. Labor officials said they did not initiate impact studies of WIA within the first few years after WIA passed in order to allow states and local areas time to implement the considerable changes that were required under WIA. According to officials, Labor had planned to initiate an impact evaluation of the WIA adult and dislocated worker programs in 2004, but this plan is currently on hold because Labor is anticipating changes to these programs as a result of reauthorization. Labor officials told us that once WIA is reauthorized, they would likely allow 2 or 3 years for changes to be implemented before initiating an impact evaluation. The evaluation itself will take 5 to 6 years, but Labor plans to issue interim reports on the findings once the study is under way. Even though the House passed a reauthorization bill, the Workforce Reinvestment and Adult Education Act of 2003 (H.R. 1261), and the Senate passed a bill, the Workforce Investment Act of 2003 (S. 1627), passage of a final bill has stalled. Both bills propose changes to WIA, but the basic one-stop service delivery and governance structure would stay largely the same under either bill. Given that these changes are not likely to affect the fundamental service delivery and structure of WIA, it is unclear why Labor has not proceeded with its evaluations of WIA as planned. When WIA was implemented nearly 4 years ago, it fundamentally changed the way federally funded employment and training services are provided to job seekers, the way the system engages employers, and the way it measures performance. Making this shift has taken time and some trial and error as state and local policy makers and one-stop service providers learned what type of service structure met local needs. Since implementation, states and local areas have made great progress in retooling their systems and in gathering all the data needed to report on their performance to Labor. But only recently have we begun to get a nationwide glimpse of outcomes achieved under WIA. The requirement to use UI wage data is a step in the right direction, providing a reasonably consistent look at national program results over time. Historically, there have been data quality issues with outcome data collected directly from participants, as was done prior to WIA. The UI wage data provide a level of credibility that other data sources do not have. States have made progress in accessing data from other states. But in order to meet their performance levels, some states must continue to rely on other data sources to fill gaps. Out of concern for data comparability, Labor is proposing to limit the use of data from other sources.
This decision, if applied to the WIA programs, will hinder the ability of some states to demonstrate that they have met their expected performance levels and may cause one-stops to focus their efforts only on those occupations covered by the UI wage records. This policy also seems overly restrictive, given that Labor is implementing data validation procedures that could be used to ensure the accuracy and validity of supplemental data. Even with the capability to use supplemental data, some states and local areas have failed to demonstrate that they met their negotiated performance levels for 2 years in a row and have suffered financial sanctions, often citing local economic conditions as the cause. The development of a method to systematically adjust for economic and demographic factors outside the control of the local area in setting expected performance levels could help mitigate these concerns. While the use of UI wage records has improved the quality of the data that are used to track outcomes under WIA, this information alone does little for real-time program management. We found that state and local officials have made significant strides in collecting their own data to assess whether they are likely to meet their federally required performance levels, manage their programs on a real-time basis, and track a broader one-stop population than just registered WIA participants. In some ways, the WIA performance measures based on the UI wage records and the interim data collected at the state and local level provide a useful cross-check on each other. However, not all states and local areas have determined what interim information is necessary, nor have they had the benefit of learning from their peers. Without some additional information or the sharing of promising practices, these states and local areas will be at a disadvantage in monitoring their progress and, perhaps, in meeting their minimum performance levels. Further, Labor has failed to meet WIA's requirement to conduct a systematic evaluation of WIA. Plans to do an evaluation have been delayed until reauthorization is complete, even though the proposed bills would retain most of the WIA service delivery and governance structure. Delaying the commitment to an evaluation now may be costly because policy makers will not be able to benefit from an understanding of WIA's effectiveness. Without clear guidance from Labor, states and local areas continue to struggle with determining who should be tracked in the WIA performance measures. At the same time, even if states and localities had a common understanding of whom to track and were consistently reporting on the same categories of customers, they would still be reporting on only a small portion of overall one-stop customers. While a requirement to track all job seekers who visit the one-stops may appear to be a major change, we found that over half the local areas already collect information on job seekers who repeatedly use one-stops, suggesting that some local areas are already equipped to uniquely identify and track each job seeker. It may take time and resources for local areas to fully develop the capability to collect data on each job seeker, but this may be the best way to start gauging the value of one-stops overall. As long as the law excludes individuals who participate in self-service and informational services, it will be difficult to understand the full reach of WIA.
To compensate for the impact of changes in the economy and to give states and local areas an equal opportunity to meet their performance levels, we recommend that the Secretary of Labor continue to allow the use of supplemental data for reporting outcomes, but develop more stringent guidance and monitoring of these data; provide assistance to states and localities in developing and sharing promising practices on interim indicators for assessing WIA's performance; and develop an adjustment model or other systematic method to account for different populations and local economic conditions when negotiating performance levels. To comply with statutory requirements and to help federal, state, and local policy makers understand what services are most effective for improving employment-related outcomes, we recommend that the Secretary of Labor expedite efforts to design and implement an impact evaluation of WIA services. Congress may wish to consider requiring that information be collected and reported on all WIA participants, including those who receive only self-service and informational services, so that it may have a better understanding of the full reach of WIA and the one-stop system. We provided a draft of this report to Labor for review and comment. Labor generally agreed with our recommendations about continuing the use of supplemental data, sharing promising practices on interim performance indicators, and developing an adjustment model or other systematic method for use in negotiating performance levels. In addition, Labor agreed with our matter for congressional consideration that information be collected and reported on all WIA participants. However, Labor disagreed with our recommendation to expedite efforts to design and implement an impact evaluation of WIA services. We have incorporated Labor's comments in our report, as appropriate. A copy of Labor's response is in appendix III. On our recommendation regarding the use of supplemental data for reporting outcomes under WIA, Labor responded that it will continue to allow supplemental wage data except when calculating results on the common measures that are reported to the Office of Management and Budget. Labor also told us that its ongoing data validation effort will collect additional information that will help assess the quality of the supplemental wage data that states are reporting. We continue to believe that when assessing state and local progress toward meeting WIA's expected performance levels, supplemental data will be essential to gathering a more complete picture of WIA outcomes. On our recommendation to develop and share promising practices on interim indicators for assessing WIA's performance, Labor noted some of the efforts currently under way to facilitate information exchange, including state and local peer-to-peer alliances, Labor's promising practices Web site, and a Performance Enhancement Project for states to share ideas and promising practices. However, despite Labor's ongoing efforts to facilitate information exchange, nearly all states and local areas reported on our survey that they would like more help from Labor in collecting and disseminating information on promising practices on interim indicators to assess WIA performance.
Regarding our recommendation to develop an adjustment model or other systematic method for use in negotiating performance levels, Labor agreed with the importance of taking economic conditions and characteristics of the population into account when setting performance expectations. Labor noted the study it has commissioned on adjustment models that we cited in our report and said the results of this study are not yet available. Labor expressed concern that any systematic method for taking economic and demographic factors into account must not diminish the role of the states and local areas in setting strategic goals. Our recommendation for a systematic approach would not replace any state and local efforts to establish their own goals, but it could help make the national process for setting goals more uniform and provide tools for states and local areas that do not have the resources to develop their own adjustment procedures. In response to our recommendation to expedite the design and implementation of an impact evaluation of WIA services, Labor told us that it believes the program consolidation changes proposed in the reauthorization bill passed by the House are significant enough to delay the multi-site evaluation required by WIA. However, we disagree that the proposed reauthorization changes would significantly affect the basic one-stop service delivery structure under WIA. It is now 4 years past the full implementation of WIA, and a well-designed evaluation would help inform policymakers in the future. Waiting for the implementation of any changes resulting from the current reauthorization cycle would likely delay the start of an evaluation by at least 2 years, leaving no results available until after another reauthorization cycle has passed. We are sending copies of this report to the Secretary of Labor, relevant congressional committees, and others who are interested. Copies will also be made available to others upon request. The report is also available on GAO's home page at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix III. We examined (1) how useful WIA performance data are in gauging program performance, (2) what local areas are doing to manage their WIA performance and assess one-stop success on a timely basis and how states are assisting these efforts, and (3) to what extent Labor is trying to improve WIA's performance measurement system and assess one-stop success. Our review focused primarily on the employment-based measures that rely on UI wage records: the entered employment rate, earnings change/replacement rate, employment retention rate, employment and credential rate, and the younger youth placement and retention rate. To address these questions, we conducted two surveys, one of state WIA officials and one of local area workforce officials; reviewed different types of literature about WIA and the WIA performance measurement system; interviewed experts and Department of Labor officials; interviewed state and local WIA officials; and visited three states and two local areas or one-stops within each state. We supplemented our site visits with telephone interviews with state and local officials in Pennsylvania. We provided a draft of this report to officials at the Department of Labor for their review and incorporated their comments where appropriate.
We conducted our work from April 2003 through April 2004 in accordance with generally accepted government auditing standards. To obtain further information on WIA performance management, we reviewed and analyzed numerous studies, reports, and other literature, and we interviewed experts on WIA and workforce development performance measurement. We reviewed a Department of Labor study that discussed costs of data collection and found it sufficiently reliable for the purpose of comparing the costs of surveys and automated record matching to UI wage records. We also interviewed Department of Labor officials, as well as representatives of the National Governors' Association and the National Association of Workforce Boards. To determine how useful WIA performance data are in gauging program performance and what states and local areas are doing to manage and assess WIA programs and one-stop systems, we surveyed all 50 states and the District of Columbia, as well as all existing local workforce investment areas, using similar but not identical questionnaires. We conducted both surveys via the Internet. We asked both groups to provide information on issues related to the WIA performance measures, such as state or local policies; the availability and use of UI data; WIA performance levels; management practices; information technology systems; efforts to monitor and manage their WIA programs and one-stop systems; factors that adversely affected their ability to assess their one-stop systems; and the types of technical assistance that would help them manage their one-stop systems' performance. We pretested the questionnaires used for each of the surveys at least three times. Table 8 provides survey numbers and response rates for both surveys. Because these were not sample surveys, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data are entered into a database can introduce unwanted variability into the survey results. We took steps in the development of the questionnaires, the data collection, and the data analysis to minimize these nonsampling errors. For example, as already noted, we pretested the questionnaires to ensure that questions were clear and understandable. Because these were Web-based surveys in which respondents entered their responses directly into our database, there was little possibility of data entry error. In addition, we verified that the computer programs used to analyze the data were written correctly. We visited three states (Florida, Michigan, and Utah) and traveled to at least two local areas or one-stop centers in each of these states. We supplemented our site visits with telephone interviews with state and local officials in Pennsylvania. (See table 9 for a list of the states and local areas in our study.) Based on input from recognized experts and our literature review, we selected these states because they are geographically diverse, have experience in implementing additional performance measures to assess one-stop success, and have developed integrated statewide data systems.
In each state, we interviewed state officials responsible for monitoring local areas' WIA programs and analyzing and reporting on the state's WIA performance data, as well as other state WIA and IT officials and staff of the state's Workforce Investment Board. At the local areas, we interviewed WIA officials and staff, including service providers, staff responsible for performance management issues, IT staff, case managers and other frontline staff, as well as staff of the local area Workforce Investment Board. The state and local interviews were administered using a semistructured interview guide that we developed through a review of relevant literature and discussions with recognized experts on WIA performance management. Information that we gathered on our site visits represents only the conditions present in the states and local areas at the time of our site visits, from June through October 2003. We cannot comment on any changes that may have occurred after our fieldwork was completed. Furthermore, our fieldwork focused on in-depth analysis of only a few selected states and local areas or sites. Based on our site visit information, we cannot generalize our findings beyond the states and local areas or sites we visited. Carolyn S. Blocker and Cheri Harrington made significant contributions to all phases of the effort. Stu Kaufman made significant contributions in the design and administration of the surveys. In addition, Jessica Botsford provided legal support; Avrum Ashery and Barbara Hills provided graphic design assistance; and Elizabeth Curda, Patricia Dalton, Catherine Hurley, and Shana Wallace also provided key technical assistance. National Emergency Grants: Labor Is Instituting Changes to Improve Award Process, but Further Actions Are Required to Expedite Grant Awards and Improve Data. GAO-04-496. Washington, D.C.: April 16, 2004. Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004. Workforce Training: Almost Half of States Fund Employment Placement and Training through Employer Taxes and Most Coordinate with Federally Funded Programs. GAO-04-282. Washington, D.C.: February 13, 2004. Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003. Workforce Investment Act: Issues Related to Allocation Formulas for Youth, Adults, and Dislocated Workers. GAO-03-636. Washington, D.C.: April 25, 2003. Workforce Training: Employed Worker Programs Focus on Business Needs, but Revised Performance Measures Could Improve Access for Some Workers. GAO-03-353. Washington, D.C.: February 14, 2003. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. Workforce Investment Act: States' Spending Is on Track, but Better Guidance Would Improve Financial Reporting. GAO-03-239. Washington, D.C.: November 22, 2002. Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. Washington, D.C.: July 3, 2002. Workforce Investment Act: Youth Provisions Promote New Service Strategies, but Additional Guidance Would Enhance Program Development. GAO-02-413.
Washington, D.C.: April 5, 2002. Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA's Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000.
With rising federal deficits and greater competition for public resources, it is increasingly important for federal programs, such as the Workforce Investment Act (WIA) programs, to show results. This report examines (1) how useful WIA performance data are for gauging program performance; (2) what local areas are doing to manage their WIA performance and assess one-stops on a timely basis, and how states assist these efforts; and (3) the extent to which the Department of Labor is trying to improve WIA's performance measurement system and assess one-stop success. WIA performance data provide a long-term national picture of outcomes, but these data offer little information about current performance and represent a small portion of the job seekers who received WIA services. Unemployment Insurance wage records, the primary data source for tracking WIA performance, provide reliable outcome information over time. But they have shortcomings, such as not including some categories of workers, and there are considerable time lags before data are available. Many states rely on alternative data sources to fill gaps in the wage records. However, the time between when a participant receives services and when the participant's outcomes are reported to Labor can range from about 1½ to 2½ years or longer. In addition, states' annual reports reflect only a small portion of the job seekers who receive WIA services because of restrictions in the law and in Labor's policies. With assistance from states, many local areas collect interim outcome information from former participants or employers and use other interim indicators to track WIA performance levels long before wage record data are available. However, states and local areas would like more help from Labor in disseminating best practices on interim performance measures. In addition, these efforts tell them little about the performance of their overall one-stop systems. Many states and local areas rely on other indicators (job seeker measures, employer measures, program partnership measures, and family and community indicators) to assess their one-stops. Labor has taken steps to improve WIA's performance system and assess one-stops, but it could do more. Although Labor is studying adjustment methods that could better take into account local differences when negotiating performance levels, it has not committed to using such a method nationally. Labor also has efforts under way to improve the quality of WIA's performance data and is developing a set of common measures for one-stop partner programs. Yet as part of the common measures, Labor plans to restrict the use of alternative data. Labor has also delayed plans to conduct an impact evaluation and will not meet its statutory requirement to do so by 2005.
In 1993, the most recent year for which published Uniform Crime Reporting data were available, there were 142,520 arrests in the United States for forcible rape and other sexual offenses. Public alarm about sex crimes has prompted legislative activity at both the state and federal levels. Since 1994, 49 states have enacted laws requiring sex offenders to register their addresses with state or local law enforcement officials, and 30 states have adopted provisions for notifying citizens of the presence of a sex offender in their community. In December 1995, Public Law 104-71, the Sex Crimes Against Children Prevention Act of 1995, was passed. This act increased penalties against those who sexually exploit children either by engaging in certain conduct or via computer use, as well as those who transport children with the intent to engage in criminal sexual activity. In May 1996, the Violent Crime Control and Law Enforcement Act of 1994 was amended to require the release of relevant information to protect the public from sexually violent offenders who reside in their communities. The act, Public Law 104-145, also known as "Megan's Law," requires community notification of the presence of convicted sex offenders. A 1994 survey by the Safer Society, a resource and referral center for sex offender assessment and treatment, indicated that there were 710 sex offender programs in the United States that treated adult pedophiles, rapists, and other sexual offenders. This number represented a 139-percent increase in the number of treatment programs since 1986. Of these, 137 were residential treatment programs (90 being prison-based), and 573 were outpatient or community-based programs. There are three general types of treatment approaches: the organic, biological, or physical approach includes surgical castration, hormonal/pharmacological treatment, and psychosurgery; the psychotherapeutic approach includes individual, group, and familial counseling; and the cognitive-behavioral approach covers a variety of cognitive and skills training methods and includes behavior control techniques. Psychotherapeutic treatment was the primary approach to treating sex offenders before the 1960s. Today, cognitive-behavioral approaches predominate. According to the Safer Society's 1994 survey, 77 percent of sex offender programs used the cognitive-behavioral approach, 9 percent used the psychotherapeutic approach, and 14 percent used other treatment models. No program reported using the organic model alone as the basis for treatment. Conducting rigorous research on the effectiveness of sex offender treatment is difficult for methodological and ethical reasons. Methodological obstacles include difficulty in selecting a sample of offenders for treatment who are representative of all sex offenders, obtaining adequate comparison or control groups against which to compare offenders receiving treatment, determining how to deal with offenders who withdraw or are terminated from treatment, and determining what criteria and information sources to use for judging the success or failure of treatment. According to Furby, Blackshaw, and Weinrott (1989), conditions are often not conducive to doing rigorous sex offender treatment research. Rather than designing study samples and data collection procedures to meet the information needs of their studies, evaluators are often forced by short time frames and inadequate funding into using samples and data sources that are readily available.
Ethical issues arise when researchers must decide which offenders should be admitted into the treatment program. If treatment is withheld from some eligible offenders, they may be precluded from receiving the benefits of a potentially therapeutic intervention. If treatment is provided to all offenders, then the treatment’s efficacy cannot be well-tested empirically, and scarce resources may be expended on an ineffective program. Comparing alternative treatment conditions is one way to resolve the ethical dilemma. We collected, reviewed, and analyzed information from 22 research reviews on sex offender treatment issued between 1977 and 1996. These reviews were identified through a multistep process that included contacting known experts in the sex offense research field, conducting computerized searches of several online databases, and screening hundreds of studies on sex offender treatment. We sent the list of reviews to seven experts in the field to confirm its comprehensiveness. We used a data collection instrument to systematically collect information on treatment settings and types, offender types, recidivism measures, methodology issues, follow-up periods, and conclusions reached from these reviews. (See app. I for a more detailed description of our methodology.) We sent a draft of this report to three of the experts previously consulted to ensure that we had presented the information about the reviews fairly and accurately. Their comments were incorporated where appropriate. We did not send a draft to any other agency or organization because we did not obtain information from such organizations for use in this study. We did our work between October 1995 and March 1996 in accordance with generally accepted government auditing standards. The 22 research reviews covered about 550 studies on sex offenders. Of these studies, 176 were cited in 2 or more reviews, and 26 were cited in 5 or more reviews. Given the widely varying levels of detail provided in the research reviews, we could not always determine whether reference was being made to a study of sex offender treatment or to other types of studies on sex offenders (e.g., recidivism studies on untreated offenders and studies attempting to identify sex offender characteristics). Therefore, we could not precisely determine the total number of studies on sex offender treatment covered in these research reviews. We also did not determine how many studies covered in the 22 research reviews were duplicative in terms of researchers publishing multiple articles based on the same set of data. At least 10 reviews were authored or coauthored by individuals affiliated with a sex offender treatment program. The earliest study included in a research review was published in 1944, the most recent in 1996. Almost all of the research reviews provided narrative assessments of original research studies, with approximately one-half also providing a tabular summary of at least some of the studies covered. Only one review performed a meta-analysis, which is a statistical aggregation of the results from multiple studies to derive an overall quantitative estimate of the effectiveness of treatment. Most research reviews did not restrict their coverage to a single type of treatment, treatment setting, or offender type. Two focused primarily on a specific treatment setting—one on prison-based treatment programs and the other on hospital-based programs.
Nine focused primarily on cognitive-behavioral approaches, five on organic treatment, and one on psychotherapeutic treatment methods. Half of the reviews included studies on offenders who committed intrafamilial crimes, while the other reviews did not always make clear whether the offenses studied were intrafamilial or nonfamilial. In assessing recidivism results, most research reviews considered whether findings were based on official (e.g., parole violation, rearrest, reconviction) or unofficial (e.g., self-report, report from family members) indicators of outcome. When official data sources were described in the research reviews, conviction for a new sex crime was the single most frequently cited recidivism measure. In many cases, however, the review did not specify whether the original study used arrest and/or conviction for a sex or nonsex crime as the recidivism measure. As indicated earlier, sometimes this was because the original study itself was unclear about how recidivism was measured. Some of the research reviews concluded that treated offenders had lower recidivism rates than untreated offenders. Others felt that the studies undertaken were so flawed that no firm conclusions could be drawn. Many reviewers seemed to be somewhere in between. They tended to conclude that, while some recent treatment approaches appeared promising, more rigorous research was needed to firmly establish their effectiveness. These reviewers asserted that the more rigorous research should employ larger and more representative samples of treated and untreated offenders, with longer follow-up periods and with better indicators of recidivism. Eighteen of the 22 research reviews included some discussion of cognitive-behavioral programs, and 12 of the 18 concluded that such programs were at least somewhat effective. These types of programs typically involved satiation, aversion conditioning, covert sensitization, and relapse prevention techniques either used alone or, more often, in combination with one another. Reviewers who concluded that cognitive-behavioral programs were effective often emphasized different components as being the source of their efficacy and differed over which types of offenders such programs treated most effectively. One reviewer, for example, concluded that deviant sexual behavior could be reduced by techniques involving covert sensitization, aversion therapy, and a combination of the two. Another set of reviewers concluded that comprehensive cognitive/behavioral programs, particularly when administered to exhibitionists and molesters, held the greatest promise for effective sex offender treatment. The National Research Council reported in 1994 that anger management may be appropriate for dealing with violent individuals, but that “it has not been demonstrated that, in fact, such techniques can alter a long-term pattern of sexually aggressive behavior.” Seventeen of the 22 research reviews discussed organic treatments, and 6 of the 17 concluded that there was some evidence of effectiveness. However, there was no consensus even among these reviewers about a particular drug being most effective, nor about the duration of positive effects from such interventions. Fifteen of the 22 research reviews discussed psychotherapeutic approaches to treatment. None concluded that the various forms of counseling that characterize this approach were sufficient by themselves to substantially alter the behavior of sex offenders.
However, a number of reviewers indicated that psychotherapy was useful in diminishing recidivism when used in conjunction with other treatments. Only two reviews attempted to quantify the overall benefit of treatment programs. A 1990 report by the Canadian Solicitor General stated: “A reasonable conclusion . . . is that treatment can be effective in reducing recidivism from about 25% to 10-15%.” The only known and available meta-analysis, or statistical aggregation, of treatment studies to date concluded that “the net effect of the sexual offender treatment programs examined . . . is 8 fewer sexual offenders per 100” (Hall, 1995). Both of these reviews included a range of sex offender types, treatment settings, and programs. They did not identify any particular subgroup of sex offenders for whom treatment was more effective. Most reviewers, even those who were quite positive about the promise of sex offender treatment programs, felt that more work was needed before firm conclusions could be reached. They cited the methodological limitations of studies as the major obstacle to drawing firm conclusions about treatment effectiveness. Even those reviewers who appeared to be among the most positive and optimistic (at least regarding cognitive/behavioral programs) echoed the general sentiment that “there are no conclusive data available from completely methodologically sound research” (Marshall and Anderson, unpublished). The research reviews found that conclusions about the effectiveness of treatment programs were impeded by methodological weaknesses in the implementation and reporting of the studies. The problems identified may be grouped into three broad categories: (1) limitations in the methodological design of studies, (2) limitations in the recidivism measures used, and (3) limitations in how the studies were reported. Nearly all of the reviews identified weaknesses in the study design as a problem with sex offender treatment research. While numerous design problems were identified, two were most recurrent. Of the 22 reviews, 15 were critical of the absence of comparison or control groups, and 12 were critical of follow-up periods that were inadequate in duration. In addition, 5 were critical of the inconsistent duration of follow-up periods. To meaningfully interpret recidivism results, it is important for an effectiveness study to use a comparison group that is similar on key characteristics to the treatment group. Using a comparison group helps answer such questions as (1) what would recidivism rates have been without treatment and (2) what factors, other than the treatment program alone, may have affected recidivism? For example, such studies may find that treatment volunteers, those with significant community ties, and/or older offenders may have lower recidivism rates, even without treatment, than other types of offenders. Without a comparable no-treatment group of offenders against which to benchmark the results of the treatment group, it is difficult to know how much of an impact, if any, the treatment program had on recidivism. The reviews found that, in the absence of comparison groups, researchers sometimes compared the recidivism rates obtained in their study against those obtained in other studies. However, explanations other than treatment and study characteristics could have accounted for different recidivism rates in these studies.
These include differences in sex offense reporting rates, apprehension levels, and prosecutorial policies across different jurisdictions and study periods. Research has shown that sex crimes are underreported and that the longer the follow-up period, the more accurate the assessment of recidivism. One reviewer noted that “Recidivism rates are most meaningful if they cover at least a five-year period, postincarceration” (Becker, 1994), while another suggested that “studies that follow up offenders for periods of as short as 5 years or less may be producing substantial underestimates of true rates of recidivism” (Finkelhor, 1986). Although we cannot be precise about the average length of follow-up because the research reviews did not report it in a systematic fashion, it appears that many of the studies covered in the reviews involved follow-up periods of less than 5 years. Not only can follow-up periods be too short to accurately measure recidivism rates, but reviewers also found it difficult to compare the outcomes of different studies because the studies varied in the amount of time they tracked offenders after treatment and no statistical analyses were performed to account for the differences. Studies reported recidivism rates after 3 months, 1 year, 4 years, 15 years, etc. Follow-up periods even varied within a single study. In one study, offenders were reportedly at risk for periods ranging from 1 month to 20 years. While a short follow-up period may not invalidate comparisons between similar treatment and control groups, the recidivism rate obtained for both groups is likely to be an underestimate of the true recidivism rate, because offenders are more likely to be reported and apprehended for their sex crimes in the long run than in the short run. Many of the reviews identified other weaknesses in the research design of sex offender treatment studies. These weaknesses included selection bias (e.g., program participants were selected because they volunteered, so study results may not have been generalizable to nonvolunteers), the use of small study samples, and failure to consider attrition from treatment in determining how outcome data were analyzed. An ongoing study of institutionalized sex offenders in California was cited by several research reviews and experts in the field as employing a research design that attempts to control for many of the methodological problems besetting other studies. (The design and preliminary findings from this evaluation are described in app. II.) The validity of conclusions about treatment effectiveness is greatly affected by which data sources are used to measure outcome. Given that research has indicated that sex offenses are underreported, that a single data source is likely to be incomplete, and that some data sources are less reliable than others, the fewer and less reliable the data sources on which recidivism measures are based, the greater the likelihood that recidivism rates will be underestimated. Nearly three-fourths of the research reviews pointed out the problem of studies relying on too few data sources to measure recidivism. The reviews criticized studies that relied solely on either official records or offender self-reports to determine whether program participants had reoffended. They stated that both official records and self-reports are likely to contain measurement error. For example, both arrest and conviction records are likely to yield underestimates of recidivism if sex offenses are underreported.
Self-report recidivism information may be unreliable. Such limitations in data sources would not affect the scientific validity of comparing the recidivism rates of treated and untreated offenders since both groups would be affected equally. However, these limitations could affect the accuracy of the recidivism estimates. Consequently, it is advisable to use multiple data sources to overcome the weakness of each single data source. The operational definition of recidivism also has a significant bearing on the results obtained from outcome studies. In some cases, recidivism was defined as a rearrest or conviction for a sex offense; in others, it was defined as rearrest or conviction for any offense. In still other cases, recidivism was defined only as a rearrest, or only as a reconviction, with the nature of the crime unspecified. There seemed to be little consensus among reviewers about what an optimal indicator of recidivism would be. As a result, it was difficult to determine whether, and by how much, sex offender treatment reduced recidivism. Nearly half of the reviews indicated some type of limitation in how sex offender treatment studies were reported. The most frequently indicated limitations included inadequate descriptions of the treatment programs, failure to report the criteria used to select study participants, and inadequate descriptions of recidivism measures. In the absence of such information, it is exceedingly difficult to synthesize the state of knowledge of sex offender treatment research. For example, without knowing the contents of a program or how program participants were selected for it, the ability to replicate the study and determine whether results are generalizable is diminished. Similarly, not knowing precisely how recidivism was measured in a study renders comparisons between it and other studies meaningless. A substantial number of studies have been done on sex offender treatment effectiveness, many of which were assessed in the research reviews described and synthesized in this report. The most optimistic reviews concluded that some treatment programs showed promise for reducing deviant sexual behavior. However, nearly all reported that definitive conclusions could not be drawn because methodological weaknesses in the research made inferences about what works uncertain. There was consensus that, to demonstrate the effectiveness of sex offender treatment, more and better research would be required. Copies of this report will be made available to others upon request. The major contributors to this report are listed in appendix IV. Please call me at (202) 512-8777 if you have any questions.
Pursuant to a congressional request, GAO reviewed research results on the effectiveness of sex offender treatment programs in reducing recidivism. GAO noted that: (1) all of the research studies reviewed provided qualitative and quantitative summaries of sex offender treatment programs; (2) nearly all of the studies identified limitations in evaluating treatment effectiveness; (3) there was no consensus as to which treatment reduces recidivism; (4) the cognitive-behavioral treatment approach works well in treating child molesters and exhibitionists, but treatment effectiveness depends on the type of offender and treatment setting; (5) researchers often could not compare recidivism rates across studies because of the studies' inconsistent measurements; (6) the research reports lacked sufficient descriptive information on how program participants were selected and recidivism measured; and (7) definitive conclusions could not be drawn about treatment effectiveness because methodological weaknesses in the research made inferences about what works uncertain.
Once the most prevalent type of pension plan, defined benefit plans no longer predominate, but they still constitute a significant part of the nation’s retirement landscape. Defined benefit plans, one of two pension types, usually base retirement income on salary and years of service (for example, a benefit of 1.5 percent of an employee’s highest annual salary multiplied by the number of years of service). The other type of pension, called a defined contribution plan, bases benefits on contributions to, and investment returns on, individual investment accounts. Among workers covered by pensions in 1998, about 56 percent were covered only by defined contribution plans (including 401(k) plans), compared with about 14 percent who were covered only by defined benefit plans, and about 30 percent who were covered by both types of plans. Under a defined benefit plan, the employer is responsible for funding the benefit, investing and managing plan assets, and bearing the investment risk. To fund their defined benefit pension plans, companies set up dedicated trust funds from which they cannot remove assets without incurring significant tax penalties. To promote the security of participants’ benefits, the Employee Retirement Income Security Act of 1974 (ERISA), among other requirements, sets minimum pension funding standards. These funding standards establish the minimum amounts that defined benefit plan sponsors must contribute in each year to help ensure that their plans have sufficient assets to pay benefits when due. If plan asset values fall below the minimum funding targets, employers may have to make additional contributions. The financial stability of defined benefit pension plans is of interest not only to workers whose retirement incomes depend on the plan, but also to the cognizant federal agencies and to investors in the companies that sponsor the plans. Federal policy encourages employers to establish and maintain pension plans for their employees by providing preferential tax treatment under the Internal Revenue Code for plans that meet applicable requirements. ERISA established a federally chartered organization, the Pension Benefit Guaranty Corporation (PBGC), to insure private sector defined benefit pension plans, subject to certain limits, in the event that a plan sponsor cannot meet its pension obligations. As part of its role as an insurer, PBGC monitors the financial solvency of those plans and plan sponsors that may present a risk of loss to plan participants and the PBGC. We recently designated PBGC’s single-employer insurance program as high-risk because of its current financial weaknesses, as well as the serious, long-term risks to the program’s future viability. Investors’ interest in pension plans is prompted by the fact that a company’s pension plans represent a claim on its current and future resources—and therefore potentially on its ability to pay dividends or invest in production and business growth. Thus, all three groups—regulators, participants, and investors—need information about these plans. To meet the information needs of the federal agencies that administer federal pension laws, ERISA and the Internal Revenue Code require the filing of an annual report, which includes financial and actuarial information about each plan.
The PBGC, the Department of Labor, and the Internal Revenue Service (IRS) jointly develop the Form 5500, Annual Return/Report of Employee Benefit Plan, to be used by plan administrators to meet their annual reporting obligations under ERISA and the Internal Revenue Code. Plan administrators of private sector pension and welfare plans are generally required to file a Form 5500 each year. The filing includes a short document for identification purposes and general information, plus a series of separate statements and schedules (attachments) that are filed as they pertain to each type of benefit plan. This form and its statements and schedules are used to collect detailed plan information about assets, liabilities, insurance, and financial transactions, plus financial statements audited by an independent qualified public accountant, and for defined benefit plans, an actuarial statement. More than 1 million of the forms are filed annually, of which approximately 32,000 represent defined benefit pension plans insured by PBGC. The information on the form is made available to plan participants upon request and serves as the basis for a summary annual report provided to plan participants and their beneficiaries. One part of the Form 5500 filing, called Schedule B, includes information about a defined benefit pension plan’s assets, liabilities, actuarial assumptions, and employer contributions. The various measures of plan assets and liabilities are required by ERISA and the Internal Revenue Code to determine whether plans are funded according to the statutory requirements. Specifically, under Schedule B, IRS requires, among other things, the disclosure of assets and liabilities and an expected rate of return, which is called the valuation liability interest rate. IRS reviews this information to ensure compliance with the minimum funding requirements for pension plans. In addition, according to PBGC officials, PBGC may use Schedule B information to help them identify plans that may be in financial distress and thus represent a risk to the insurance program and plan participants. Some plan sponsors also use information in the Schedule B to calculate certain insurance premiums they pay to PBGC. In addition to the annual reporting requirement, PBGC has authority to require plans to provide the agency with detailed financial information. Specifically, if a company’s pension plans reach a certain level of underfunding in aggregate, ERISA requires the company to provide information to PBGC in what is called a 4010 filing. The 4010 filing includes proprietary information about the plan sponsor, its total pension assets, and its total benefit obligations were the company to terminate its pension plans immediately. However, under current law, PBGC is not permitted to disclose this information to the public. The Securities Exchange Act of 1934 requires publicly traded corporations to annually file a 10-K report, which is often referred to as the corporate financial statement, with shareholders and the Securities and Exchange Commission (SEC). The SEC uses 10-K reports to ensure that companies are meeting disclosure requirements so that investors can make informed investment decisions. The 10-K report describes the business, finances, and management of a corporation. 
For companies whose defined benefit pension plans are material to their financial statements, accounting standards require a footnote to the financial statements that details the cost, cash flows, assets, and liabilities associated with these plans. Footnote disclosures provide more detailed information about data presented in the company’s financial statements. Standards for reporting this information are set by the Financial Accounting Standards Board. Actuaries estimate the present value of pension liabilities using economic and demographic assumptions. These assumptions are needed to estimate the amount of money required now and in the future to meet a pension plan’s future benefit obligation. Economic assumptions include rates of inflation, returns on investments, and salary growth rates. Demographic assumptions include changes in the workforce from retirement, death, and other service terminations. Most actuarial assumptions for measuring pension plan funding are not specifically prescribed by law or subject to advance approval from the IRS or any other government agency. However, ERISA requires the plan actuary to select assumptions that are individually reasonable and represent the actuary’s “best estimate of anticipated experience under the plan.” The pension plan financial information reported in Form 5500 Schedule B serves a different purpose from the pension information disclosed in corporate financial statements. The information in each source is subject to different reporting requirements; therefore, measurements of pension funding are unlikely to be the same in the two reports. Government regulators and others use Form 5500 information for many purposes, including to determine whether plans are meeting minimum funding requirements and to determine the required contributions for each defined benefit plan that a company sponsors, while financial analysts and investors use pension information in corporate financial statements to determine how the company’s plans in aggregate affect its overall financial position, performance, and cash flows. Because of their different purposes and reporting requirements, these two sources use different measures and assumptions to generate information. For example, in providing information about the values of their pension assets and the present value of their future pension obligations, the Form 5500 and the corporate financial statements often base their valuations at different points in time and use different methods of calculation. Both of these reports also include an assumption about rates of return on the investment of pension assets. However, these rates may differ, and this assumption serves a different purpose in each report. As a result of such differences, information in the two reports generally differs, and because the two sources of information use similar terminology—for example, both refer to asset values and investment returns—the results can appear contradictory. One objective of the Form 5500 is to provide financial and other information about the operations of an individual employee benefit plan. For defined benefit pension plans, the Form 5500 Schedule B provides measures of plan assets and liabilities; actuarial information, such as economic assumptions and demographic assumptions about plan participants; and information about how much the plan sponsor is contributing to meet ERISA funding requirements. If a company sponsors more than one plan, it must file a Form 5500 for each plan.
While analysts and investors may use this information, it is primarily used by federal regulators to measure plan funding and ensure compliance with applicable laws and regulations. The pension information in a company’s financial statement, by contrast, primarily serves a different purpose. The financial statement is intended to provide financial and other information about a company’s consolidated operational performance as measured primarily by earnings. In this context, pension information is mostly provided in a footnote to give financial statement users information about the status of an employer’s pension plans and the plans’ effect on the employer’s financial position and profitability. For example, certain details about the company’s annual cost of providing pension benefits are presented in the pension footnote disclosure because this cost, or expense, affects the company’s profitability. The users of corporate financial statements are primarily financial analysts and investors who are trying to assess the company’s financial condition, profitability, and cash flows, and whose concern is not so much the financial condition of individual pension plans but the effect that the company’s pension obligations may have on its future cash flows and profitability. Even where the Form 5500 and corporate financial statements provide similar types of information, such as pension assets and liabilities, their values usually differ. Among the key reasons for this are different dates of measurement, different definitions of reporting entity, different methodologies for determining costs of benefits, and different methods of measuring assets and liabilities. Table 1 summarizes some of the differences. These differences in asset and liability measurements can result in significantly divergent results for the Form 5500 and the corporate financial statements. As an example, table 2 shows the different asset and liability values presented in a plan’s Form 5500 filing and in the plan sponsor’s corporate financial statements for a Fortune 500 company in 1999-2001 and the resulting effects on the reported pension funding ratios (pension assets divided by pension liabilities). One reason for the significant differences in measures of assets and liabilities between the Form 5500 and corporate financial statement filings in table 2 is that the company sponsors more than one pension plan. When companies sponsor multiple pension plans, the details of specific plans are generally aggregated in corporate financial statements to show their net effect on the plan sponsor and are not intended to provide details about the funding of each plan. Thus, the pension information of a sponsor with both underfunded and overfunded plans may show little or no funding deficiency, although the consequences to participants in the underfunded plans could be quite severe in the event of plan termination. One of the most confusing aspects of these two information sources is their differing treatment of the expected rate of return on pension assets. The expected rate of return is the anticipated long-term average investment return on pension assets. The Form 5500 Schedule B and the corporate financial statements both use an expected rate of return in calculating financial information about pension plans. In this regard, the expected rate of return is one of many assumptions, such as inflation and mortality rates, that affect a key pension reporting measure.
In theory, the expected rates of return reported in each source should be similar because the assumption is derived from similar, or even the very same, assets. However, the two sources differ in the rate’s purpose, selection, and method of application, and these differences may contribute to differences between the assumed rates of return they use. Key differences in expected rates of return between the reports are shown in table 3. In Form 5500 Schedule B, the expected rate of return is used to calculate pension funding—that is, the measurements of pension assets and liabilities, which determine whether, and in what amount, the company needs to contribute to its pension plan to meet the statutory minimum funding requirements. The expected rate of return is usually derived from the pension plan’s investment experience and assumptions about long-term rates of return on the different classes of assets held by the plan. Actuaries calculate a present value of plan liabilities using the expected rate of return, which is called the valuation liability interest rate on the Form 5500 Schedule B. If plan liabilities exceed assets, the resulting difference is used, in part, to determine the amount the company may have to contribute to the pension plan for that year. The amount of contributions required under the minimum funding rules of the Internal Revenue Code is generally the amount needed to fund benefits earned during that year plus that year’s portion of other liabilities that are amortized over a period of years. Amendments to ERISA in 1987 and 1994 made significant changes to the funding rules, including the establishment of a deficit reduction contribution requirement if plan funding is inadequate. The 1987 amendments to ERISA established the current liability measure, which is based on a mandated interest rate rather than a rate selected at the discretion of the plan actuary. For financial statements, the expected rate of return is used to calculate the annual expected investment return on pension assets, which factors into the measurement of pension expense. Pension expense represents the company’s cost of benefits for the year and generally includes (1) service cost—benefits earned by plan participants for a period of service; (2) interest cost—increases in the benefit obligation because of the passage of time; (3) expected returns on pension assets, which offset some or all of the net benefit costs; (4) amortization of prior service cost resulting from plan amendments; and (5) amortization of gains or losses, if any, that may result from changes in assumptions or actual experience that differs from assumptions. To calculate a dollar amount for the expected return, the expected rate of return is multiplied by the value of pension assets (a simple illustration appears below). This expected return is used instead of the actual return on pension assets in the calculation of pension expense, which has the effect of smoothing out the volatility of investment returns from year to year. Furthermore, if the expected return on plan assets is high enough, a company may report a negative pension expense—or pension income. Form 5500 reports and, until recently, corporate financial statements have not provided specific information about how expected rates of return are selected.
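To make the calculation just described concrete, the financial statement pension expense can be sketched as a simple formula. The notation (SC, IC, r_e, A, AM) and all dollar figures below are our own illustrative assumptions, not figures drawn from any company in our sample or prescribed by the accounting standards themselves.

% A sketch of the pension expense calculation (amounts in millions of dollars).
% SC = service cost, IC = interest cost, r_e = expected rate of return,
% A = value of pension assets, AM = net amortizations.
\[
\text{Pension expense} = SC + IC - \underbrace{(r_e \times A)}_{\text{expected return on assets}} + AM
\]
\[
SC = 40, \quad IC = 60, \quad r_e = 9\%, \quad A = 1{,}000, \quad AM = 0
\]
\[
\text{Pension expense} = 40 + 60 - (0.09 \times 1{,}000) + 0 = 10
\]

Under these hypothetical figures, the company would report $10 million of pension expense. If the same plan instead held $1,250 million in assets, the expected dollar return would be $112.5 million and the calculation would yield negative $12.5 million, that is, reported pension income, illustrating how a sufficiently large expected return can more than offset the service and interest costs.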
Actuaries told us that they estimate rates of return on the basis of several economic forecasting measures and also take into account how asset allocations may change in the future based on the demographics of plan participants. In contrast, financial analysts and actuaries told us that many companies select their expected rates of return on the basis of their pension asset returns in past years. However, in December 2002 a Securities and Exchange Commission staff member publicly stated that the SEC would likely review expected rates of return higher than 9 percent if the rate was not clearly justified in the company’s financial statement. The SEC determined the 9 percent rate on the basis of studies on the historical returns on large-company domestic stocks and corporate bonds between 1926 and the first three quarters of 2002. According to actuaries and financial analysts we spoke with, this statement by the SEC has been a primary factor in the selection of lower rates of return in 2002. Figure 1 shows the average expected rates of return reported for 1993 to 2002 by the companies and their pension plans in our sample. Both the Form 5500 Schedule B and corporate financial statements have limitations in the extent to which their required information meets certain needs of regulators, plan participants, and some investors. The Form 5500 takes considerable time for companies to prepare and for federal agencies to process, so it is not available to pension plan participants and others on a timely basis. The required asset and liability measures in the Form 5500 Schedule B are used by regulators to monitor compliance with statutory funding requirements. However, these funding measures are not intended to indicate whether plans have sufficient assets to cover all benefit obligations in the event of plan termination. In addition to using the Form 5500, regulators can also use corporate financial statements to try to determine whether a plan sponsor will be able to meet its obligations to its pension plans. However, some investors have concerns about whether corporate financial statements accurately reflect the effect of pensions on plan sponsors. According to financial analysts we spoke with, the pension information in corporate financial statements is also limited because it has not, until recently, included key disclosures, and the methodology used to calculate pension expense does not reflect the potential impact of actual investment returns on a company’s future cash flows and profitability. However, others argue that the current accounting for pension expense is appropriate for reflecting the long-term nature of pension obligations and their effect on the plan sponsor. Information in the Form 5500 is plan-specific and identifies the value of assets a plan must have to comply with ERISA funding requirements. However, this information is at least 1 to 2 years old by the time it is fully available, making it an unreliable tool for determining a plan’s current financial condition. The value of plan assets can significantly change over this period of time, and the value of plan liabilities may also change because of changes in interest rates, plan amendments, layoffs, early retirements, and other factors. For plans that experience a rapid deterioration in their financial condition, the funding measures required in Form 5500 may not reveal the true extent of a plan’s financial distress to plan participants and the cognizant federal agencies. 
The Form 5500 Schedule B information is not timely for three main reasons. First, the plan’s assets and liabilities can be measured at the beginning of the plan fiscal year instead of the end of the year, resulting in information that is over a year old when the Form 5500 is filed. In 2001, of the 61 companies in our sample with both Form 5500 and corporate financial statement data, at least 48 used the beginning of the plan sponsor’s fiscal year for the plan’s measurement date. Second, ERISA allows plan sponsors 210 days, plus a 2½-month extension, from the end of the plan fiscal year to file their Form 5500. According to PBGC officials and actuaries we spoke with, most plans file at the extension deadline, almost 10 months after the end of the fiscal year, and almost 2 years from the measurement date if it is the beginning of the fiscal year. Third, according to Department of Labor officials, it has taken an average of 6½ months to process Form 5500 filings, though actuaries and PBGC officials told us that recently, some processing has been completed within 1 to 2 months of receiving the forms. Even with this improvement in processing time, most large companies’ 2003 pension data in Form 5500 will be based on valuations as of January 1, 2003, and will not be available to the public until January 2005. There are several difficulties in making the filing of Form 5500 reports more timely. According to actuaries we spoke with, collecting and preparing the necessary information is time-consuming and resource-intensive for plan sponsors. Large companies’ human resource data are often not well organized for this purpose, according to two pension experts we spoke with. Common problems include merging information from different databases, dealing with retiree data that may not be computerized, and identifying vested participants who have left the company. The data collection and analysis become much more complicated when companies go through mergers, acquisitions, or divestitures. According to one senior pension actuary we spoke with, data preparation efforts can consume as much as 75 percent of the time involved in preparing the Form 5500 filing. Other issues include scheduling the work of auditors and actuaries who must review and work with the information once it has been assembled. Once the forms are completed and submitted to the Department of Labor, speeding up the processing also has complications. While the process is significantly faster now than it used to be, it depends on paper rather than electronic filing and is slowed because the Form 5500 is also used for defined contribution and welfare benefit plans. Only about 32,000 of the more than 1 million Form 5500 filings pertain to PBGC-insured defined benefit plans, and the filings for defined benefit plans are not readily identifiable in order to receive priority when the Department of Labor processes these forms. Additionally, if errors in the Form 5500 filing are identified, the filing is returned to the plan sponsor with a 30-day deadline for making corrections and refiling. If errors are not properly corrected in the first response, the administrator is notified and given an additional 30 days to correct the amended filing. A second limitation in Form 5500 is that it is not required to furnish information about the ability of a plan to meet its obligations to participants if it were to be terminated.
Compliance with ERISA funding rules, as reported in Form 5500, is often based on the plan’s current liability, which is the sum of all liabilities to employees and their beneficiaries under the plan. In theory, keeping a pension plan funded up to its current liability will ensure that the plan has assets to meet its benefit obligations to plan participants as long as the plan sponsor remains in business. However, if a plan is suddenly terminated because of its sponsor’s financial distress, the plan liabilities are likely to increase and plan assets are less likely to cover the cost of all benefit obligations. Therefore, Form 5500 information often does not accurately indicate the ability of the plan to meet its benefit obligations to plan participants in the event that the plan sponsor goes bankrupt. A different measure, called the termination liability, comes closer to expressing the pension plan’s cost of discharging the promised benefits to participants in a distress termination. The termination liability, which is usually higher than current liability, reflects the cost to a company of paying an insurer to assume its pension obligations were the plan to be terminated. PBGC has found no simple relationship between measures of current and termination liability, and therefore a fixed set of factors cannot be applied to the plan’s current liability funding level (or its components) to estimate termination liability. For plans whose vested benefits are underfunded by at least $50 million, PBGC receives a termination liability measure in a separate filing called a Section 4010 filing (named after the ERISA section that requires companies to submit such reports). However, this information is available only to PBGC and by law may not be publicly disclosed. The differences in the two types of liability measures are substantial enough that a plan can appear in reasonable condition under the current liability measure that serves as the basis for the minimum funding standard, but not have sufficient resources to settle plan termination liabilities. For example, Bethlehem Steel’s pension plan was 97 percent funded on a current liability basis in its 1999 Form 5500 filing. However, when the plan was terminated, in December 2002, it was funded at only 45 percent on a termination liability basis. Plan terminations often result from a plan’s sponsor entering bankruptcy, which, according to PBGC officials, cannot usually be predicted more than a few months in advance. Some of the reasons that a plan can have a reasonable ratio of assets to liabilities under the current liability measure but a less than adequate ratio under the termination liability measure include the following:

Different retirement ages. When companies shut down, many long-time employees retire and begin collecting pension benefits at an earlier age.

Different discount rates. Termination liability discount rates have usually been lower than those for current liability in recent years, making the present value of termination liability larger.

Different plan provisions. Terminations may coincide with factory shutdowns, which often trigger provisions that increase retirement benefits.

While the information about pensions in corporate financial statements does not serve the same purpose as the information in Form 5500, it can also be useful to the PBGC. This information is useful in two primary ways:

Its overall measures of the company’s financial condition provide indications about the company’s ability to meet its pension obligations.
According to PBGC officials, most large plans that were terminated by PBGC were sponsored by companies whose debt was rated below investment grade for a number of years prior to plan termination. Though plan asset-to-liability ratios are not dependent on the health of the plan sponsor, participants in underfunded pension plans at financially distressed companies face the risk that the plan sponsor will lack the cash resources to meet the ERISA funding requirements. In contrast, a company in a strong financial position is much more likely to be able to make up funding shortfalls.

It provides the timeliest public data about pension plans, which may be useful if the company sponsors only one pension plan. Within 60 to 90 days from the end of their fiscal years, publicly traded companies must file their financial statements, which provide data based on measurements on the last day of a company’s fiscal year.

Some primary users of corporate financial statements have expressed concerns about the extent to which these reports show how pension plans affect a company’s profitability, cash flow, and financial position. This information is particularly important for companies with large pension plans because the greater the value of a company’s pension assets relative to the company’s market value, the more sensitive its cash flows and profits will be to changes in pension asset values. According to analysis by Standard and Poor’s (S&P), a leading corporate debt rating agency, defined benefit pension plans significantly affect the earnings of about half of the companies in the S&P 500 index. As conveyed to us by financial analysts, investors’ concerns about financial reporting on pensions have been twofold: First, financial statements have heretofore lacked adequate disclosures about how pension plans affect the sponsoring companies’ cash flow and overall risk. Second, some investors believe that current standards for measuring pension expense do not adequately recognize the financial condition of pension plans and distort measures of company earnings. However, others argue that these standards provide a more appropriate accounting of a company’s annual pension costs over the long term. Disclosure concerns, to date, have been of two main types: First, according to financial analysts we spoke with, it has been difficult to reasonably estimate a pension plan’s claims on a company’s cash resources in the coming year and near future. Contributions are determined primarily by ERISA funding requirements, but the plan funding status reported in the Form 5500 is not current enough to be used by financial analysts and investors. Large required contributions to pension plans can reduce the cash available to companies to apply to shareholder dividends or invest in their business so that profits may continue to grow. For industries in which investors are concerned primarily with a company’s cash flow, estimates of such future contributions would be critically important, but have been unavailable to date. Second, to date it has been difficult to accurately evaluate the risk that pension investments pose to the plan sponsor. The allocation of pension assets can pose additional risk to the company’s cash flow and profitability, especially for companies with very large pension plans.
Investments in more volatile assets, such as equities, are likely to create a wider range of potential cash contributions for the company in the future, as companies may need to make contributions to meet statutory funding requirements following negative returns on pension assets. In addition to raising these disclosure concerns, some financial analysts and investors have also expressed opposition to current accounting standards for measuring pension expense, while others support these standards. Pension expense is included in the calculation of corporate earnings, which investors use to track a company’s performance, in comparison both with other firms and with a company’s own past performance. To reduce the potentially volatile effect of pension plans on their sponsors’ earnings, the accounting standards call for three main smoothing mechanisms to calculate pension expense: (1) expected return is used instead of actual return on pension assets, (2) the expected return may be based on an average value of pension assets rather than their current fair value, and (3) differences between actual experience and assumptions are recognized systematically and gradually over many years rather than immediately when they arise. Therefore, when the expected return on pension assets significantly differs from the actual return, this difference does not immediately affect a company’s reported pension cost or earnings. As actual experience differs from assumptions on such things as expected rates of return, inflation rates, and plan participant mortality rates, the differences are added to or subtracted from an account for unrecognized gains or losses. When unrecognized gains or losses exceed 10 percent of either the market-related value of pension assets or the projected benefit obligation, whichever is greater, the company must factor a fraction of the excess unrecognized gain or loss (that is, the difference between the total gain or loss and the 10 percent threshold) into its calculation of pension expense. For example, a company may experience 3 years of unusually high gains on its pension assets, and at the end of year 3, the cumulative difference between expected and actual returns on pension assets surpasses the minimum threshold for recognition of the difference. The company must record part of its unrecognized cumulative gain in its calculation of pension expense, thereby decreasing the pension expense for the year (a worked illustration of this corridor calculation appears at the end of this discussion). Although actual and expected rates of return may differ sharply in any given year, or even over 2 to 3 years, the variance between them should decrease over the longer term, provided that expected rates of return are reasonably accurate. Table 4 shows the results of our comparison of expected and actual rates of return for 52 companies from our sample of Fortune 500 companies that had data available over the 9 years from 1994 through 2002. During this period the average expected rate of return used in the financial statements was 9.29 percent, while the average actual rate of return was 7.56 percent and ranged from a low of –8.85 percent to a high of 22.36 percent in any given year. In comparison with the results of our analysis, a study by one investment bank revealed an average actual return on pension assets of greater than 12 percent between 1985 and 1998. Therefore, average actual rates of return will vary according to the time period being measured.
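To make the 10 percent corridor mechanism described above concrete, the following sketch works through one hypothetical year. All figures, including the 10-year average remaining service period used as the amortization divisor, are illustrative assumptions rather than data from any plan in our sample; dividing the excess over the corridor by the average remaining service period of active employees is one common amortization approach, which we assume here.

% Illustration of the 10 percent corridor test (amounts in millions of dollars).
% MRV = market-related value of pension assets, PBO = projected benefit obligation.
\[
\text{Corridor} = 0.10 \times \max(\text{MRV},\ \text{PBO}) = 0.10 \times \max(1{,}000,\ 900) = 100
\]
\[
\text{Excess unrecognized gain} = 160 - 100 = 60
\]
\[
\text{Minimum amortization} = 60 \div 10\ \text{years} = 6\ \text{per year}
\]

Under these assumptions, $6 million of the cumulative unrecognized gain would flow into the pension expense calculation, reducing reported expense for the year; an unrecognized loss of the same size would increase reported expense by the same amount.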
Among the companies in our sample, for instance, the average actual return on pension assets was 15.25 percent from 1997 through 1999 and –3.59 percent from 2000 through 2002. Opponents of current methods of accounting for pension expense argue that the smoothing mechanisms lack transparency because reported pension expense (1) does not reflect the current financial condition of pension plans and (2) distorts measures of corporate earnings. Under the current methodology for calculating pension expense, the cumulative net effect of pension asset gains or losses may not be reflected in reported pension expense for a few years, if at all. While alternating years of gains and losses may keep reported pension expense relatively smooth from year to year, consecutive years of gains or losses can eventually result in significant changes in reported pension expense. Many companies that reported pension income in 2001 and 2002, while their pension assets were in fact decreasing in value, benefited from the use of the market-related value of pension assets (the average asset values over not more than the previous 5 years) rather than the lower actual value of these assets. For example, of the 97 companies in our sample, 26 reported net pension income in 2002, but only one of these companies saw an increase in the value of its pension assets. Conversely, it is likely that many of these companies will report net pension expenses in the next few years, even if their pension asset values rise, because their market-related values of pension assets will reflect, in part, the decline in the stock market between 2000 and 2002. In contrast, employer contributions, which are only indirectly related to pension expense, may better reflect the current financial condition of pension plans. Employer contributions to pension plans are determined by a complex set of factors, including the tax deductibility of contributions, minimum funding requirements, the employer’s expected cash flows, and PBGC premiums. In 2002, when most large companies saw declines in their pension asset values, many were required to make contributions to their pension plans to meet the statutory funding requirements. The 93 Fortune 500 companies in our sample with available financial statement data reported aggregate contributions to pension plans of $10.1 billion in 2002, while their aggregate pension expense totaled $622.6 million. Financial analysts pay close attention to companies’ cash contributions to pension plans because large contributions to plans represent resources that companies will not have available to use for other purposes, such as expanding their businesses. Investors have also been concerned about the extent to which defined benefit pensions contribute to a company’s total profits. According to one investment bank study, 150 of the 356 Fortune 500 companies with defined benefit plans reported net pension income (negative expense) in their financial statements in 2001. However, the value of pension assets for 313 of these companies actually declined in 2001. To try to address these apparent inconsistencies in the financial reporting on pensions, many financial analysts and investors try to strip out the effects of pensions to determine a “true” measure of a company’s earnings that reflects its performance from ongoing operations.
Standard and Poor’s, for example, issued a proposal in 2002 to standardize measures of corporate earnings that would exclude several items from the earnings calculation, including investment returns on pension assets. Proponents of current pension accounting standards argue that the smoothing mechanisms are beneficial because (1) pension obligations are long-term liabilities that do not have to be funded all at once and (2) sponsoring pension plans and investing plan assets are not the primary business activities of plan sponsors. Pension obligations are normally paid out over a long period of time; therefore, pension assets have a similar time period to meet those obligations. The smoothing mechanisms allow plan sponsors to gradually and systematically attribute portions of the long-term cost of pension plans to each year. Without smoothing mechanisms, companies would potentially face year-to-year fluctuations in their reported pension expense that some investors may also consider misleading given that unexpected losses on pension assets in one time period may be offset by unexpected gains in another. The Financial Accounting Standards Board adopted the smoothing mechanisms in part to reduce the volatility of reported earnings caused by investment returns on pension assets. Because investing pension plan assets is not the primary business activity of plan sponsors, FASB determined that earnings volatility caused by immediately recognizing all changes in the value of plan assets and liabilities as they occur would be misleading to investors. Furthermore, such volatility could make comparisons of earnings more difficult when looking at different firms, some of which may not sponsor defined benefit plans. Both in this country and abroad, changes have been proposed—and in a few cases, implemented—to make information about defined benefit plans more transparent or complete. These changes relate to information associated with both Form 5500 and corporate financial statements. The administration has proposed augmenting current Form 5500 information by making available certain data that currently are not made public, such as measurements of termination liabilities. The Financial Accounting Standards Board has recently amended one of its accounting standards to require, among other things, that companies provide more information about the composition and market risk of their pension plan assets and their anticipated contributions to plans in the upcoming year. Outside the United States, proposals are being discussed that would move toward eliminating or reducing the use of smoothing mechanisms in calculating pension expense. In July 2003 the Department of the Treasury announced “The Administration Proposal to Improve the Accuracy and Transparency of Pension Information.” The proposal presented four areas of change, and one of them would broaden the public’s access to pension information currently available only to PBGC. The proposal would expand the public’s access to pension information in two main ways:

Reporting termination liability. Under the proposal, information about a plan’s termination liability would be included in ERISA-required summary annual reports to workers and retirees. The annual reports, which are based on data from the Form 5500 reports, now report the plan’s financial condition based on the plan’s current liability. Termination liability information is reported to PBGC by companies whose plans are collectively underfunded by more than $50 million.
Public disclosure of underfunding of at least $50 million. Section 4010 of ERISA requires companies with more than $50 million in aggregate plan underfunding to file annual financial and actuarial information with the PBGC. This information is reported separately from Form 5500 information and must generally be filed no later than 105 days after the end of the company's fiscal year. PBGC uses this information to monitor plans that may be at greater risk of failure, but under current law, PBGC cannot make the information public. According to the administration, the information is more timely and better in quality than publicly available data. Under the proposal, the market value of assets, termination liability, and termination funding ratios contained in these reports would all be publicly disclosed.

Since the announcement of the administration's proposal, no further action has been taken by either the administration or Congress to implement these proposals.

In regard to corporate financial statements, one change designed to address users' concerns about pension-related information has recently been enacted. In December 2003 FASB issued a revision to its accounting standard on pension disclosures. The revised standard incorporates all of the disclosures required by the prior standard and requires more informative pension disclosures. FASB added the new disclosures because users of financial statements, such as financial analysts, requested additional information that would assist them in evaluating, among other things, the composition and market risk of the pension plan's investment portfolio and the expected long-term rate of return used to determine net pension costs. As a result, some of the new disclosure requirements include listing the percentage of pension assets invested in major asset classes such as equity securities, debt securities, real estate, and other assets. Companies must also provide a narrative description of the basis used to determine the overall expected rate of return on assets assumption. FASB believed that this new information would allow users to better understand a company's exposure to investment risk from its pension plans and the expected rate of return assumption. Another required disclosure is the employer's estimated contribution to pension plans in the following year. However, the revised standard does not change the general approach used in the financial statements of aggregating this information across all pension plans.

Outside the United States, other standard-setting boards have been addressing issues related to the use of techniques designed to smooth out the volatility of reported pension expense. One of these boards is the International Accounting Standards Board (IASB), an independent accounting standard-setting organization. Many countries require publicly traded companies to prepare their financial statements in accordance with IASB's financial reporting standards. IASB's current pension accounting standard is very similar to the current standard promulgated by FASB, in that both allow a smoothing mechanism to reduce the volatility of pension expense. However, IASB is considering a revision to its current standard to allow companies to calculate pension expense using actual investment returns instead of expected returns, on a voluntary basis. According to the IASB manager in charge of the project, the exposure draft will be issued in 2004, and a final standard is expected in March 2005.
Separately from the IASB action, the United Kingdom's Accounting Standards Board has already issued its own accounting standard that would require companies to report the differences between actual and expected returns on pension assets in their financial statements. Full adoption of this standard has been delayed out of concern that the changes made by firms to comply with the Accounting Standards Board standard might need to be modified again in subsequent years to meet a potentially different IASB standard.

Form 5500 reports and corporate financial statements both provide key pension financial data, but they serve different purposes and, as a result, provide significantly different information. To date, neither report in isolation provides sufficient information for certain users to fully determine the current financial condition of an individual pension plan or how pension obligations could affect the financial health of the plan sponsor. While particular concerns have been raised about differences between expected and actual pension asset rates of return reported on corporate financial statements, expected rates of return do not have a significant effect on the actual financial condition of plans.

Continued concerns about the financial condition of plans and how this information is disclosed have been highlighted by the administration's proposal to provide information about funding in the event of plan termination to plan participants and regulators. We have previously reported that an essential element of pension disclosure is requiring plans to calculate liabilities on a termination basis and to disclose this information to all participants annually. Likewise, we recommended that Congress consider requiring that all participants receive information about plan investments and the minimum benefit amount that PBGC guarantees should their plan be terminated.

While such new requirements could help improve the transparency of pension plans' financial condition, there are other challenges to be addressed as well. For example, plan participants and regulators continue to need more timely information. However, there appear to be few opportunities to improve the timeliness of Form 5500 information under the current statutory reporting requirements. One challenge to improving the timeliness of this information on pensions will be to find a solution that does not impose undue burdens on plan sponsors. Resolving this challenge will prove crucial to providing policy makers, plan participants, and investors with more timely and transparent information on the financial condition of defined benefit plans.

We provided a draft of this report to the Department of the Treasury, the Department of Labor, the Pension Benefit Guaranty Corporation, the Securities and Exchange Commission, and the Financial Accounting Standards Board. We received technical comments from each agency that we incorporated as appropriate.

We are sending copies of this report to the Secretary of Labor, the Secretary of the Treasury, the Executive Director of the Pension Benefit Guaranty Corporation, the Chairman of the Securities and Exchange Commission, the Chairman of the Financial Accounting Standards Board, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov.
If you have any questions concerning this report, please contact me at (202) 512-7215 or George Scott at (202) 512-5932. Other contacts and acknowledgments are listed in appendix III.

To explain the two sources of pension financial information, we interviewed federal agency officials from the Pension Benefit Guaranty Corporation (PBGC) and the Department of Labor (DOL). These federal agencies use Form 5500 information in performing their oversight and monitoring responsibilities. In addition, we reviewed the Form 5500 instructions, form, and schedules to understand the information they provide. For the financial statements, we reviewed relevant accounting standards from the Financial Accounting Standards Board, which sets standards for financial statements, and spoke with board officials. We also reviewed many financial statements of large domestic companies.

Our work also included analyses of a sample of corporate financial statements of Fortune 500 companies and the corresponding Form 5500 filings for those companies with available data. We chose to sample from the universe of publicly owned Fortune 500 companies with defined benefit plans because (1) the pension plans sponsored by these firms represent a large percentage of the total private defined benefit pension plan participants, assets, and liabilities in the United States; (2) these firms tend to have the largest defined benefit plans, and if these plans fail they would create the largest burdens for PBGC and possibly the government; and (3) most of these firms are publicly traded, so their corporate financial statements are publicly available.

We drew a systematic random sample of 100 of the 2003 Fortune 500 companies with defined benefit plans, after excluding government-sponsored entities (the mechanics of such a draw are sketched below). The sampling process accounted for the companies' revenues in 2002 and the distribution of expected rates of return on pension assets. This distribution was available in a Compustat database for approximately 290 of the 329 companies in the population. From the initial sample of 100 companies, 3 companies were removed because one is not publicly traded, another is European and filed its financial statements in euros, and the third changed its end-of-fiscal-year date in 2001, which made it more difficult to compare with other firms.

For the 97 remaining companies, we obtained as many as 10 years of pension data from these companies' corporate financial statements using the Securities and Exchange Commission's Electronic Data Gathering, Analysis, and Retrieval System (EDGAR), depending on data availability. Data were available from 1993 through 2002 for 68 companies. However, others did not have 10 years of data available because, for example, they were formed, or only began to publicly trade their stock, at some time between 1993 and 2002. Nine companies in our sample reported more than one expected rate of return for their pension plans, and a weighted average could not be determined accurately. Most of these companies sponsor pension plans for employees outside the United States and provide separate assumptions for domestic and international plans. These companies did not report the weighted average expected rate of return that was used to calculate their expected return on pension assets. We also obtained the corresponding Form 5500 filings for the companies in our sample from PBGC for plan years 1993 through 2001.
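As referenced above, a minimal sketch of a systematic random draw follows. It is illustrative only: the frame, its ordering, and the random start are our assumptions, and the actual selection also accounted for 2002 revenues and the distribution of expected rates of return.

```python
# Minimal sketch of a systematic random sample of size n from an ordered frame.
# Illustrative only; not the exact procedure used for the report's sample.
import random

def systematic_sample(population, n):
    """Take elements at a fixed interval k = len(population)/n after a random start."""
    k = len(population) / n
    start = random.random() * k  # random start strictly less than k
    return [population[int(start + i * k)] for i in range(n)]

# Hypothetical sampling frame: 329 firms, here assumed sorted by revenue.
frame = [f"Company {i:03d}" for i in range(329)]
sample = systematic_sample(frame, 100)
print(len(sample), sample[:3])
```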
To identify the Form 5500 filings, we matched the sample of 97 Fortune 500 companies to their pension plans on the basis of their employer identification number (EIN). An EIN, known as a federal tax identification number, is a nine-digit number that the IRS assigns to organizations. We developed a list of EINs reported on the companies' financial statements and provided this list to the PBGC. PBGC matched the EINs to their Form 5500 database and provided information to us. However, in several cases, PBGC did not find matches to our list of EINs, either for all 10 years or for just some of the years. Based on the number of companies with data available in any of the 10 years, we decided on a threshold of 7 years' worth of data in order to achieve a sample size that would allow us to compare data over most of the 10-year period. In other words, to be included in this analysis, a company must have at least 7 years of Form 5500 data. One hundred fifty plans had at least 7 years of Form 5500 data. The years in which data were missing were spread sporadically across the 10-year period covered in this analysis.

Before deciding to use the Form 5500 data, we investigated its reliability. Prior to plan year 1999, the Internal Revenue Service was responsible for keypunching Form 5500 information into a database, and DOL officials explained that some of the data contained errors. DOL officials explained that since plan year 1999, Form 5500 data have been recorded with optical scanning devices and have been subject to edit and validity tests. In 1999, some Form 5500 filings were not captured because many plan administrators did not send forms on the correct paper and the scanner could not capture some information. However, DOL officials explained that this problem has not occurred since.

We obtained the Form 5500 data from PBGC's Corporate Policy and Research Department. PBGC officials explained that as errors surface in their use of the Form 5500 data, corrections are made to PBGC's database. In the past, hard copies of original Form 5500 filings were obtained for making corrections. Today, PBGC can view electronic images of the actual plan filings. As PBGC receives the data on Form 5500 Schedule B, it screens the data for errors, particularly in the asset and liability fields. Information we received from pension actuaries corroborates the data we used from Form 5500 filings, and the data on expected rates of return, as presented in figure 1, show consistency from year to year. Taking all these factors into consideration, we believe that the data we used were sufficiently reliable for the purpose of differentiating between expected rates of return reported in Form 5500 filings and corporate financial statements.

In obtaining financial information for the sample companies, we had to account for companies that had merged with another company during the 10-year period under review. In the event of a merger between a company with a defined benefit pension plan and a company without a defined benefit plan, we selected the company with the defined benefit plan. In the case of a merger between two companies that both had defined benefit plans prior to the merger, we selected the company indicated by the EDGAR database as the predecessor company. While our sample is designed to represent our population for calendar year 2002, it is not representative of any population in prior years.
The makeup of the Fortune 500 changes from year to year, and our method of tracking the same companies across several years precludes us from making specific statements about any larger population prior to 2002. Thus, while we believe that the trends identified in our sample could be indicative of trends in the population of large firms with defined benefit plans and that this supposition is supported by other studies, we do not claim these trends are representative of past populations of Fortune 500 companies.

To explain the usefulness and limitations of the information from the two information sources, we interviewed expert users of pension information in Form 5500 reports and corporate financial statements, including federal officials from DOL, PBGC, and SEC; pension actuaries; corporate debt rating agency officials; financial analysts; and Financial Accounting Standards Board officials. Some experts explained the uses of information available in the Form 5500 reports and limitations of these reports. Other experts described and shared documentation about how they analyze financial statements to understand the impact of pension plans on the plan sponsors' financial statements. Some experts explained the need for additional pension information in companies' financial statements. As part of our review of pension information in corporate financial statements, we used several research reports published by different investment banks. We reviewed the methods used in these studies and found them to be sufficiently reliable for the purpose of corroborating our own data analysis and illustrating trends in pension accounting.

To explain the recent and proposed changes to the current information sources, we interviewed officials from the International Accounting Standards Board and the Financial Accounting Standards Board. These boards set standards for financial statements for international companies and United States companies, respectively. We also reviewed the recently revised accounting standards issued by the Financial Accounting Standards Board. We also reviewed congressional testimony regarding the administration's proposal for more transparency of pension data. We conducted our work between January 2003 and January 2004 in accordance with generally accepted government auditing standards.

This appendix shows two things: (1) how a company may experience an actual loss on its pension assets while reporting income from its pension plans in the same year and (2) how a change in the expected rate of return affects other items in the financial statement. The information is based on numbers taken from the corporate financial statement of a real company from its 2002 fiscal year. Company X's pension assets lost $512 million in value during its fiscal year, yet Company X still reported pension income of $90 million. This apparent inconsistency is possible because the expected return on assets is used in place of actual returns to calculate net periodic pension cost.

The second column of table 5 shows the effect of changing the expected long-term rate of return on pension assets from 9.8 percent to 8.5 percent. The figures affected by this change are in bold text. All of the changes caused by the change in the expected rate of return are related to measurements of Company X's pension expense and measures of overall profitability. The change has no impact at all on measures of pension assets and liabilities. An explanation of the pension elements in table 5 follows in table 6.
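As a numeric cross-check of the element-by-element walkthrough that follows, the sketch below reproduces the headline effects of lowering Company X's expected rate of return. The implied market-related asset value and the 21 percent tax rate are inferred from the reported figures rather than disclosed values.

```python
# Reproduces the Company X illustration: effect of lowering the expected
# long-term rate of return on plan assets from 9.8% to 8.5%.
# The asset value and tax rate below are inferred, not disclosed figures.

plan_assets_mrv = 783 / 0.098        # implied market-related asset value, ~$7,990M
pension_income_old = 90              # reported net periodic pension income ($M)
net_profit_old = 802                 # reported net profit ($M)
shares = 344                         # common shares outstanding (millions)

expected_return_old = 0.098 * plan_assets_mrv   # $783M
expected_return_new = 0.085 * plan_assets_mrv   # about $679M
cost_increase = expected_return_old - expected_return_new  # about $104M

pension_cost_new = cost_increase - pension_income_old  # $90M income becomes ~$14M cost

tax_rate = 0.21  # assumed; chosen so the after-tax effect matches the reported $82M
net_profit_new = net_profit_old - cost_increase * (1 - tax_rate)  # about $720M
eps_drop = (net_profit_old - net_profit_new) / shares  # about $0.24 per share

print(f"Pension expense increase: ${cost_increase:.0f}M")
print(f"New net periodic pension cost: ${pension_cost_new:.0f}M")
print(f"New net profit: ${net_profit_new:.0f}M (EPS drop ~${eps_drop:.2f})")
```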
A company's total operating costs include its labor costs, which include the net periodic pension cost. Therefore the net periodic pension cost is factored into the calculation of total operating costs, which affects operating profit, consolidated profit before taxes, the calculation of tax to be paid, and net profit (Items Y through CC in table 5). Changing the expected rate of return on plan assets from 9.8 percent to 8.5 percent has the following effects:

In the components of net periodic cost of defined benefit plans, the expected return on plan assets (Item S) falls from $783 million to $679 million. This increases the total periodic pension cost by $104 million, which is enough to turn Company X's periodic pension income of $90 million into a cost of $14 million.

The increase in net periodic pension cost of $104 million increases the company's total operating costs (Item Y) and decreases the operating profit (Item Z) and consolidated profit before taxes (Item AA) by the same amount.

Because taxable income is reduced, Company X pays less in corporate income tax (Item BB).

Last, net profit (Item CC) declines from $802 million to $720 million, which for the holders of Company X's approximately 344 million shares of common stock would have meant a drop of about 24 cents in earnings per share (Item DD).

In addition to those named above, Joseph Applebaum, Kenneth Bombara, Richard Burkard, David Eisenstadt, Elizabeth Fan, Michael Maslowski, Scott McNulty, Stan Stenerson, Roger Thomas, and Shana Wallace made important contributions to this report.
Information about the financial condition of defined benefit pension plans is provided in two sources: regulatory reports to the government and corporate financial statements. The two sources can often appear to provide contradictory information. For example, when pension asset values declined for most large companies between 2000 and 2002, these companies all continued to report positive returns on pension assets in their financial statement calculations of pension expense. This apparent inconsistency, coupled with disclosures about corporate accounting scandals and news of failing pension plans, has raised questions about the accuracy and transparency of available information about pension plans. GAO was asked to explain and describe (1) key differences between the two publicly available sources of information; (2) the limitations of information about the financial condition of defined benefit plans from these two sources; and (3) recent or proposed changes to pension reporting, including selected approaches to pension reporting used in other countries.

Information about defined benefit pension plans in regulatory reports and pension information in corporate financial statements serve different purposes and provide different information. The regulatory report focuses, in part, on the funding needs of each pension plan. In contrast, corporate financial statements show the aggregate effect of all of a company's pension plans on its overall financial position and performance. The two sources may also differ in the rates assumed for investment returns on pension assets and in how these rates are used. As a result of these differences, the information available from the two sources can appear to be inconsistent or contradictory.

Both sources of information have limitations in the extent to which they meet certain needs of their users. Under current reporting requirements, regulatory reports are not timely and do not provide information about whether benefits would all be paid were the plan to be terminated. Financial statements can supplement regulatory report data because they are timelier and provide insights into the probability of a company meeting its future pension obligations. However, through December 2003, financial statements lacked two disclosures important to investors—allocation of pension assets and estimates of future contributions to plans. There is also debate about whether current methods for calculating pension expense accurately represent the effect of pension plans on a company's operations.

Several changes have been made or proposed to provide further information. In July 2003, the administration called for public disclosure of more information about the sufficiency of a plan's assets. However, no further steps have yet been taken. For financial statements, the Financial Accounting Standards Board issued a revised standard in December 2003 requiring enhanced pension disclosures, such as pension asset allocation and expected contributions to plans. Internationally, accounting standards boards have considered proposals to change the methodology for calculating pension expense. We have previously recommended changes to improve the transparency of plan financial information, but other challenges remain. Plan participants and regulators continue to need more timely information, including measures of plan funding in the event of plan termination.
In an international context, the Treasury Department is the United States’ counterpart to other nations’ ministries of finance. The department’s responsibilities, among other things, include safeguarding the U.S. financial system from abuse by money launderers, terrorists, and other criminals. Over the years, in carrying out this responsibility, the department has established relationships with finance ministries, central banks, and other financial institutions in nations around the world as well as with multilateral organizations such as FATF, the FATF-style regional bodies, the International Monetary Fund (IMF), and the World Bank. FATF is an intergovernmental entity whose purpose is to establish international standards and to develop and promote policies for combating money laundering and terrorist financing. At its formation in 1989 by the United States and other industrialized nations, FATF’s original focus was to establish anti-money-laundering standards and monitor the progress of nations in meeting the standards. In 1990, FATF issued its “Forty Recommendations on Money Laundering” to promote the adoption and implementation of anti-money-laundering measures. For instance, the recommendations encouraged nations to enact legislation criminalizing money laundering and requiring financial institutions to report suspicious transactions. Following the events of September 11, 2001, FATF expanded its role to combat terrorist financing. Specifically, in October 2001, FATF adopted “Eight Special Recommendations on Terrorist Financing.” Among other actions, these recommendations committed members to criminalize the financing of terrorism and to freeze and confiscate terrorist assets. In October 2004, FATF published a ninth special recommendation on terrorist financing to target cross-border movements of currency and monetary instruments (“cash couriers”). Collectively, FATF’s “40 plus 9” recommendations are widely recognized as the international standards for combating money laundering and terrorist financing (see app. II). In monitoring nations’ progress in implementing the recommendations, FATF collaborates with other multilateral organizations, particularly the FATF-style regional bodies that represent nations in seven geographic areas. These regional groups are to help nations in the region to implement the international standards developed by FATF. Also, these standards have been recognized and endorsed by the World Bank and IMF for use in conducting evaluations and assessments of nations’ progress in implementing measures to counter money laundering and terrorist financing. To be compliant with FATF recommendations, a nation must, among other measures, establish an effective FIU. The United States’ FIU is FinCEN, which was administratively established in 1990 as a Treasury Department component. FinCEN was 1 of the 14 charter members of the Egmont Group, which was formed in 1995 to enhance information sharing among FIUs (see app. III). In 2001, section 361 of the USA PATRIOT Act established FinCEN as a statutory bureau in the Treasury Department. Organizationally, FinCEN is part of Treasury’s Office of Terrorism and Financial Intelligence, which is the department’s policy and enforcement entity regarding terrorist financing, money laundering, financial crime, and sanctions issues. 
Treasury's budget request for fiscal year 2007 included $91.3 million (and 352 full-time-equivalent personnel) to support FinCEN's mission of safeguarding the financial system from abuses of money laundering, terrorist financing, and other financial crime. FinCEN carries out this broad mission by, among other means, administering the Bank Secrecy Act (BSA) and networking with domestic regulatory, law enforcement, and intelligence agencies as well as with foreign counterparts.

Section 330 of the USA PATRIOT Act expresses the sense of the Congress that the President should direct the Secretary of State, the Attorney General, or the Secretary of the Treasury to enter into negotiations with foreign jurisdictions to facilitate cooperative efforts to combat money laundering and terrorist financing. State Department, Justice Department, and Federal Reserve Board officials told us that the Treasury Department plays a lead role in addressing these efforts. According to Treasury Department officials, the U.S. interagency community has been acting to accomplish the goals articulated in section 330 through its interactions with FATF and the FATF-style regional bodies to ensure global compliance with international standards for combating money laundering and terrorist financing. Treasury officials also told us that enactment of section 330 provided a welcome congressional endorsement of long-standing U.S. government policy to actively engage and negotiate with foreign jurisdictions through the medium of FATF and the related FATF-style regional bodies. Further, in conjunction with foreign negotiations, Treasury considers another provision of the USA PATRIOT Act—section 311—to be a useful mechanism for helping to promote compliance with standards.

Specifically, section 330 states that

"the President should direct the Secretary of State, the Attorney General, or the Secretary of the Treasury, as appropriate, to seek to enter into and further cooperative efforts, voluntary exchanges, the use of letters rogatory, mutual legal assistance treaties, and international agreements to (1) ensure that foreign banks and other financial institutions maintain adequate records of transaction and account information relating to any foreign terrorist organization (as designated under section 219 of the Immigration and Nationality Act), any person who is a member or representative of any such organization, or any person engaged in money laundering or financial or other crimes; and (2) establish a mechanism whereby such records may be made available to United States law enforcement officials and domestic financial institution supervisors, when appropriate."

Section 330 does not constitute an express mandate—that is, section 330 does not impose an affirmative obligation on any agency or official to enter into negotiations. Nonetheless, the language of section 330 does suggest that efforts should be undertaken to engage in appropriate negotiations. State Department, Justice Department, and Federal Reserve Board officials told us that the lead role regarding the efforts encouraged by section 330 of the USA PATRIOT Act is held by the Treasury Department. According to the Treasury Department, the U.S. interagency community is fulfilling section 330 by actively engaging and negotiating with foreign jurisdictions through the medium of FATF and the related FATF-style regional bodies. The U.S.
delegation to FATF, which is headed by the Deputy Assistant Secretary of the Treasury's Office of Terrorist Finance and Financial Crime, includes representatives of the Departments of Homeland Security, Justice, and State; the federal financial regulators; and the National Security Council. Regarding efforts encouraged by section 330, Treasury's Office of Terrorist Finance and Financial Crime said that the United States—working through FATF, the FATF-style regional bodies, the International Monetary Fund, and the World Bank—has led efforts to develop a global system to ensure that all countries adopt and are assessed against international standards for protecting financial systems and jurisdictions from money laundering and terrorist financing. As mentioned previously, these international standards consist of the FATF "Forty Recommendations on Money Laundering" and "Nine Special Recommendations on Terrorist Financing" (see app. II).

Treasury testimony at a congressional hearing in July 2005 before the Senate Committee on Banking, Housing, and Urban Affairs also cited the benefits of international standard-setting bodies. Regarding U.S. efforts and participation in these bodies, the Treasury Under Secretary's prepared statement included the following points:

"The Financial Action Task Force (FATF) sets the global standards for anti-money laundering and counter terrorist financing, and it is also through this venue that we promote results. Treasury, along with our counterparts at State, Justice, and Homeland Security, has taken an active role in this 33-member body which articulates international standards in the form of recommendations, guidelines, and best practices to aid countries in developing their own specific anti-money laundering and counter-terrorist financing laws and regulations. … The success and force of FATF lie not only in the mutual evaluation process to which it holds its own members, but also in the emergence of FATF-style regional bodies … that agree to adopt FATF standards and model themselves accordingly on a regional level."

"Hawala, a relationship-based system of money remittances, plays a prominent role in the financial systems of the Middle East. … Internationally, Treasury leadership in the FATF has brought the issue of hawala to the forefront, resulting in implementation of FATF Special Recommendation VI, which requires all FATF countries to ensure that individuals and entities providing money transmission services must be licensed and registered, and subjected to the international standards set out by FATF."

"As governments apply stricter oversight and controls to banks, wire transmitters, and other traditional methods of moving money, we are witnessing terrorists and criminals resorting to bulk cash smuggling. FATF Special Recommendation IX was issued in late 2004 to address this problem and it calls upon countries to monitor cross-border transportation of currency and to make sanctions available against those who make false declarations or disclosures in this regard. This recommendation has already prompted changes in legislation abroad."

Further, on July 29, 2005, the United Nations Security Council unanimously adopted a U.S.-sponsored resolution (Resolution 1617) that, among other matters, "strongly urges" all member states to "implement the comprehensive, international standards" embodied in the FATF 40 plus 9 recommendations.
Subsequently, at its most recent plenary meeting (October 12 to 14, 2005), FATF noted that "formal endorsement of the FATF standards by the U.N. Security Council is a major step toward effective implementation of the Recommendations throughout the world."

Regarding the U.S. government's continuing efforts to actively engage and negotiate with foreign jurisdictions as encouraged by section 330, Treasury's Office of Terrorist Finance and Financial Crime said that outreach to the international community to enhance global best practices to combat money laundering and terrorist financing involves various challenges. These challenges include ensuring that the international standards are current in reference to emerging trends and technology and are balanced and flexible enough to be relevant and applicable to all countries and situations, as well as ensuring that evaluations or assessments of countries are conducted in a consistent manner.

According to Treasury's Office of Terrorist Finance and Financial Crime, interagency efforts to work through FATF and the FATF-style regional bodies to help ensure global compliance with international standards for combating money laundering and terrorist financing reflect a long-standing policy of the U.S. government—a policy that has had strong support from the White House. In further elaboration, Treasury officials said that because working through FATF and the FATF-style regional bodies is a long-standing policy, no specific guidance was needed from the President or the White House to implement section 330. That is, Treasury was already seeking to accomplish the goals articulated in section 330. The officials commented that passage of section 330 did not cause Treasury or the interagency community to alter the objectives of ongoing or planned negotiations. In sum, the Treasury officials stressed that enactment of section 330 provided a welcome congressional endorsement of long-standing U.S. government policy and also provided a stimulus for continued efforts in negotiating with foreign jurisdictions.

As an incentive or pressure mechanism that can be used in conjunction with foreign negotiations, Treasury considers section 311 of the USA PATRIOT Act to be particularly relevant for helping to ensure global compliance with international standards for combating money laundering and terrorist financing. Section 311 authorizes the Secretary of the Treasury—in consultation with the Secretary of State and the Attorney General and with consideration of multiple factors—to find that reasonable grounds exist for concluding that a foreign jurisdiction, a financial institution, a class of transactions, or a type of account is of "primary money laundering concern." Such a designation is a precursor or condition precedent for taking one or more special measures. For instance, following a designation and with additional consultation and consideration of specific factors, the Secretary of the Treasury may require U.S. financial institutions to take certain "special measures" with respect to applicable jurisdictions, institutions, accounts, or transactions. The special measures can range from enhanced recordkeeping or reporting obligations to a requirement to terminate and not open correspondent accounts involving the primary money laundering concern. Since the USA PATRIOT Act was signed into law in October 2001, section 311 designations have been announced for three foreign jurisdictions (Ukraine, Nauru, and Burma).
Treasury's first use of section 311 authority was in December 2002, with the designation of Ukraine and Nauru as being of primary money laundering concern. A third jurisdiction, Burma, was designated in November 2003. In addition to foreign jurisdiction designations, Treasury has also used section 311 authority to designate certain foreign financial institutions as being of primary money laundering concern. Examples include Myanmar Mayflower Bank and Asia Wealth Bank (November 2003), Commercial Bank of Syria (May 2004), First Merchant Bank of the "Turkish Republic of Northern Cyprus" and Infobank of Belarus (August 2004), and Multibanka and VEF Banka of Latvia (April 2005). More recently, in September 2005, Treasury designated Banco Delta Asia SARL, which is located in the Macau Special Administrative Region, China.

In discussing section 311 with us, Treasury's Office of Terrorist Finance and Financial Crime officials characterized designations—even without subsequent special measures being taken—as a very useful tool for bringing pressure on countries and institutions to meet international standards. For example, after being designated by Treasury in December 2002, Ukraine subsequently took steps to address deficiencies by amending its anti-money-laundering law, its banking and financial services laws, and its criminal code. Accordingly, Treasury revoked its designation in April 2003.

Since 1995, the number of FIUs recognized by the Egmont Group has increased more than sevenfold. Reasons for this growth include FATF-related efforts, as well as those of the federal interagency community. A particular focus of FinCEN—working with federal interagency partners—has been to provide training and technical assistance to help create and enhance the capabilities of FIUs. Given the significant growth in the number of FIUs recognized by the Egmont Group, which now totals 101, more attention is being focused on improving the capabilities of existing units, especially in reference to combating terrorist financing—an operational task now included in the Egmont Group's definition of an FIU.

Generally, FIUs are evaluated as part of an overall methodology designed to assess a country's compliance with the international standards contained in the FATF 40 plus 9 recommendations for combating money laundering and terrorist financing. According to FinCEN, its efforts to improve the capabilities of foreign FIUs must be achieved through cooperation, collaboration, and consensus—given that the Egmont Group is responsible for dealing with its members' shortcomings or noncompliance with standards.

Over the past decade, the number of FIUs recognized by the Egmont Group increased more than sevenfold, from 14 in 1995 to 101 as of July 2005 (see fig. 1). A goal of the Egmont Group is to provide a forum for FIUs to improve support to their respective national programs for combating money laundering and terrorist financing. Egmont Group membership is not automatic for new or nascent FIUs. Rather, the Egmont Group has a Legal Working Group responsible for assessing each FIU candidate to ensure that the prospective member meets admission criteria. For instance, the assessment criteria are used to determine whether the FIU candidate meets the Egmont definition of an FIU, has reached full operational status, and is legally capable and willing to cooperate on the basis of Egmont principles (see app. III).
Also, among other responsibilities, an Egmont Group member that sponsors or mentors the FIU candidate is expected to have first-hand experience (including an on-site visit) to confirm the operational status of the candidate FIU.

The significant growth in the number of FIUs is attributable to various reasons, including FATF-related efforts to establish international standards and promote policies for combating money laundering and terrorist financing. For example, FATF recommendation number 26 (see app. II) specifies that countries should establish an FIU that serves as a national center for receiving (and, as permitted, requesting), analyzing, and disseminating suspicious transaction reports and other information regarding potential money laundering or terrorist financing. Moreover, a contributing role has been played by the Egmont Group, which has an Outreach Working Group to identify candidate countries for membership and help them meet international standards. Further, the growth in the number of FIUs is attributable partly to federal interagency efforts, including training and technical support provided by FinCEN and Treasury's Office of Technical Assistance, as well as funding provided by the State Department.

As a member of the Egmont Group since 1995, FinCEN in particular has focused its global efforts on assisting jurisdictions in establishing new FIUs and on improving existing units. For instance, in helping to establish new FIUs, FinCEN's assistance has included a variety of activities, such as performing country assessments, advising or commenting on draft FIU legislation, providing seminars on the combating of money laundering, conducting training courses for FIU personnel, and furnishing technical advice on computer systems. According to FinCEN, much of its work now involves strengthening existing FIUs. In this regard, FinCEN's activities include conducting personnel exchanges (from foreign FIU to FinCEN and vice versa) and participating in operational workshops and other training initiatives. Also, FinCEN noted that much of its assistance involves regional or multilateral efforts, such as working closely with the Egmont Group of FIUs, the United Nations, and multilateral development banks.

As an example of a recent FIU-related activity, FinCEN reported that it sent a four-person team to Saudi Arabia in the first quarter of fiscal year 2006 to conduct an on-site assessment and provide various presentations (covering, for example, information exchange issues) to employees of the Saudi FIU. In addition, FinCEN's activities for fiscal year 2005 included providing training (either abroad or at FinCEN) to FIU representatives from various nations, such as Argentina, Brazil, China, Guatemala, South Korea, Paraguay, and Sri Lanka. For fiscal year 2004, FinCEN reported that it joined with the United Arab Emirates to host representatives from Afghanistan, Bangladesh, Maldives, Pakistan, and Sri Lanka on developing FIUs. Also, FinCEN's reported activities for fiscal year 2003 include conducting personnel exchanges with Egmont Group allies from several Baltic nations (i.e., Estonia, Latvia, and Lithuania), Bolivia, Turkey, South Korea, Ukraine, and Russia; co-hosting regional training workshops in Malaysia and Mauritius; and sponsoring Bahrain, Mauritius, and South Africa as new members into the Egmont Group—with the latter two becoming Africa's first representatives in the group.
Similarly, according to the State Department, recent activities of Treasury's Office of Technical Assistance include (1) providing training and technical assistance to FIUs in Paraguay and Peru, (2) helping the Senegal FIU achieve operational status, and (3) working with Ukraine to streamline its national FIU.

Generally, U.S. government assistance in creating and strengthening FIUs can be viewed as being one strategic element among several designed to enhance the capacity of global partners. For instance, the training and technical assistance that U.S. agencies provide to vulnerable countries are intended to help the countries develop five elements that, according to the State Department, are needed for an effective anti-money-laundering and counter-terrorism-financing regime—a legal framework, a financial regulatory system, law enforcement capabilities, judicial and prosecutorial processes, and an appropriate FIU. However, despite the formation of an interagency coordination entity—the Terrorist Financing Working Group—U.S. efforts to coordinate the delivery of training and technical assistance lack an integrated strategic plan, as we recently reported. Among other matters, our October 2005 report noted disagreements between the State and Treasury departments on procedures and practices for delivering training and technical assistance as well as disagreements regarding interagency leadership and coordination responsibilities. The report recommended that the Secretary of State and the Secretary of the Treasury develop an integrated strategic plan and enter into an agreement specifying the roles of each department, bureau, and office with respect to conducting needs assessments and delivering training and technical assistance.

In March 2006, the State Department provided the Congress a written statement (as required under 31 U.S.C. § 720) regarding action taken on the recommendation. State commented that several steps were being taken to enhance interagency coordination. The written statement noted, for example, that the National Security Council and the departments of State, Justice, the Treasury, and Homeland Security were reviewing the work of the Terrorist Financing Working Group in light of recent years' experience, with a view to making any appropriate updates and adjustments to enhance its effectiveness.

Also, during our review, State Department officials noted one area where they would like to augment U.S. assistance to nascent FIUs. This area involves ensuring that nascent FIUs have appropriate information technology (hardware and software). The officials emphasized that such technology is essential to appropriately functioning FIUs. In this regard, the officials said that the State Department and FinCEN are engaged in ongoing discussions on how to augment such assistance.

Further regarding future directions, FinCEN's Deputy Director (who also chairs the Egmont Committee) commented that there will be continuing efforts to establish new FIUs, particularly in priority regions (such as the Middle East and Central Asia) critical to combating money laundering and terrorist financing. Moreover, given the dynamic growth in the Egmont Group's membership, the Deputy Director noted that the Egmont Committee will be giving more attention to improving the capabilities or effectiveness of existing FIUs.
Generally, FIUs are evaluated as part of an overall methodology designed to assess a country's compliance with the international standards contained in the FATF 40 plus 9 recommendations for combating money laundering and terrorist financing (see app. II).

"A key element in the fight against money laundering and the financing of terrorism is the need for countries to be monitored and evaluated, with respect to these international standards. The mutual evaluations conducted by the FATF and the FATF-style regional bodies, as well as assessments conducted by the IMF and the World Bank, are a vital mechanism for ensuring that the FATF Recommendations are effectively implemented by all countries."

Our research and inquiries identified one published study that presented comparative or multicountry results based on mutual evaluations of nations' compliance with the FATF recommendations. The study—Twelve-Month Pilot Program of Anti-Money-Laundering and Combating the Financing of Terrorism (AML/CFT) Assessments–Joint Report on the Review of the Pilot Program, March 10, 2004—was prepared jointly by IMF and the World Bank. The study summarized the results of the mutual evaluations of 41 jurisdictions, conducted during the 12-month period that ended in October 2003. The assessments used a common methodology adopted by FATF and endorsed by the Executive Boards of IMF and the World Bank. Of the 41 assessments, 33 were conducted by IMF or the World Bank, and 8 were conducted by FATF and the FATF-style regional bodies.

In their March 2004 joint report, IMF and the World Bank presented assessment findings for the 41 jurisdictions in a summary format, rather than associating compliance levels or deficiencies with any individual country. For instance, the report made the following general observations:

"Overall compliance with the FATF 40+8 Recommendations is uneven across jurisdictions. Many jurisdictions show a high level of compliance with the original FATF 40 Recommendations. The most prevalent deficiency among all assessments is weaker compliance with the Eight Special Recommendations on terrorist financing."

"There is generally a higher level of compliance in high and middle income countries than in low income countries. Higher income countries typically have well developed AML/CFT regimes but with specific gaps, especially concerning the Eight Special Recommendations on terrorist financing."

The joint report did not separately present or discuss assessment findings related to the functioning or effectiveness of FIUs. However, two of the 13 main weaknesses identified are directly related to FIUs. These two weaknesses (see table 1) are topic 12 (no requirement to report promptly to the FIU if financial institutions suspect that funds stem from criminal activity) and topic 13 (poor international exchange of information relating to suspicious transactions and to persons or corporations involved). The joint report noted that assessments using the common methodology are increasingly used as a diagnostic tool to identify technical assistance needs, including assistance for creating and strengthening FIUs.

Although not published, an overview of more recent FIU-related assessment findings was presented on July 1, 2005, in Washington, D.C., at the annual plenary meeting of the Egmont Group. Specifically, an IMF representative presented summary information covering 29 countries, whose names were not disclosed.
The IMF representative noted that the information was derived from the results of mutual evaluations or assessments conducted during 2003 to 2005 using the common methodology endorsed by FATF, IMF, and the World Bank. According to the presentation, the findings of the assessments indicated that many of the FIUs had shortcomings, such as a shortage of staff (one-third of the total), a lack of political independence (one-fourth), and legal obstacles to international cooperation (one-third). Other cited shortcomings, whose prevalence was not quantified, were a lack of a clear legal framework, a lack of strategic analysis tools, a lack of access to appropriate information and databases, excessive transmission of information to law enforcement agencies, a lack of guidelines on the identification of suspicious behavior, a lack of feedback, a lack of powers to sanction failure to report, and legal obstacles to the transmission of suspicious transaction reports.

In its January 2006 response to our inquiry, the State Department said that the U.S. government and other major donors generally are well informed about the existence of FIUs (and their capabilities and deficiencies) in those jurisdictions in which the donors wish to participate. State commented that while mutual evaluations are but one source of information and can be outdated before being discussed at meetings of the FATF-style regional bodies, these evaluations are useful in identifying deficiencies and prompting corrective action by the respective jurisdiction.

According to FinCEN's Deputy Director (and chair of the Egmont Committee), the Egmont Group is responsible for dealing with its members' shortcomings or noncompliance with standards. That is, even though influential, FinCEN has only one vote within the 101-member Egmont Group. Therefore, FinCEN's efforts to improve the capabilities of foreign FIUs must be achieved through cooperation, collaboration, and consensus.

Regarding transparency of evaluation results, FATF has stated:

"A summary of each report will be published on the FATF website and FATF members have agreed in principle to make public the full mutual evaluation reports (with the ultimate decision being left to each FATF member for its own report). The FATF intends to provide comprehensive information on its members' actions in combating money laundering and terrorist financing."

The Deputy Director also commented that the most significant functional change for the Egmont Group in recent years was expansion of the definition of an FIU in response to the terrorist attacks of September 11. Shortly thereafter, at an October 2001 special meeting of the Egmont Group in Washington, D.C., the members expressed a sense that the group's operational functions should expand beyond money laundering to address terrorist financing. Later, at the Egmont Group's 12th plenary meeting—held during June 21 to 25, 2004, and hosted in Guernsey, Channel Islands—the definition of "financial intelligence unit" was amended to include a reference to terrorist financing. This new definition is reflected in the Statement of Purpose of the Egmont Group of Financial Intelligence Units, which resulted from the Guernsey plenary meeting. Thus, combating terrorist financing now is included in the definition of tasks an FIU is required to perform. According to the Deputy Director, existing FIUs have a grace period of at least 2 years to become compliant with the new definition. He noted that throughout the history of the Egmont Group, no member has ever been excluded from continuing participation.
The Deputy Director added, however, that plenary meetings in recent years have begun to address the issue of noncompliance. For example, according to documentation of Egmont Group meetings:

The 11th Egmont Group plenary, held July 21 to 25, 2003, in Sydney, Australia, marked the "first attempt to establish a procedure for dealing with members that may no longer meet Egmont standards."

At the 12th Egmont plenary session, held June 21 to 25, 2004, in Guernsey, Channel Islands, "a paper was drafted which outlines the procedures to address those Egmont members that may no longer meet the established definitions and standards of the Egmont Group, or that fail to exchange information."

The Deputy Director said that a paper on noncompliance was also presented at the most recent Egmont plenary meeting, held June 30 to July 1, 2005, in Washington, D.C. He added that the issue will be revisited at the 2006 plenary meeting in Cyprus. He explained that dealing with noncompliance will be a difficult issue and likely will reflect a go-slow approach. For instance, the Deputy Director opined that before administrative action (such as exclusion) is taken, the noncompliant member probably would be offered ameliorating assistance over an extended period of time.

Since the events of September 11, FinCEN's most important operational priority has been to provide counterterrorism support to the law enforcement and intelligence community. In January 2006, to enhance its support role, FinCEN assigned an analyst to the FBI's Terrorist Financing Operations Section. Also, among other actions to maximize performance as a global partner in combating money laundering and terrorist financing, FinCEN is modernizing the Egmont Secure Web—the Internet-based system developed and maintained by FinCEN and used by FIUs worldwide to exchange information. Further, FinCEN is allocating additional staff resources to facilitate responding to foreign requests for assistance and is developing a new case management system. However, FinCEN's most recent customer satisfaction survey of FIUs had limited coverage and a very low response rate, partly because there was no follow-up with nonrespondents. Future surveys would need to be more inclusive and incorporate better survey development and administration practices, such as follow-up efforts to achieve higher response rates, if the surveys are to serve as a useful management information tool for monitoring and enhancing performance.

FinCEN has recognized that it faces various challenges, such as redirecting its efforts to more complex cases, some of which inevitably have international linkages. Another important challenge is to support the nation's focus on detecting and preventing terrorist financing, which also can involve international linkages.

"the FinCEN analytical product we provide to our global counterparts when asked for information. Today, we are primarily providing the results of a data check. We think we owe our colleagues more. … [W]e will also be making more requests for information and analysis from our partners—particularly when the issue involves terrorist financing or money laundering."

"Like the rest of America, the Financial Crimes Enforcement Network is still adapting to changes triggered by the events of 9/11. These changes include … supporting the Department of the Treasury's new focus on detecting and preventing terrorist financing.
… While the Financial Crimes Enforcement Network has historically developed the information, analytical processes, and tools required to detect money laundering, we need to develop additional tools—and to gain access to additional data, including classified data—in order to better detect terrorist financing."

"FinCEN must upgrade the quality of its analysis related to terrorist financing and money laundering. FinCEN has begun a major initiative to enhance the ability of FinCEN analysts to consider all information sources, including, as appropriate, classified data, when analyzing money laundering and terrorist financing methods. To be successful, this will require an overall upgrade to the security environment, significant investments in training and building analytical skills relating to terrorist financing, upgrade of the analytical software related to text mining, enhanced availability of classified sources … and an increase in overall personnel security classifications to allow the integration of all information sources."

A primary data source used by FinCEN analysts is the government's database of BSA-related forms, including suspicious activity reports (SARs) filed by financial institutions. An integral part of FinCEN's counterterrorism strategy involves reviewing and referring all SARs related to terrorist financing to law enforcement and intelligence agencies. For instance, as of September 2005, FinCEN reported that it had proactively developed and referred a total of 526 potential terrorist financing leads to appropriate agencies, such as the FBI's Terrorist Financing Operations Section and Joint Terrorism Task Forces.

In 2004, the FBI contacted the Director of FinCEN and requested bulk access to BSA reports for ingestion into the FBI's system, the Investigative Data Warehouse (IDW). The FinCEN Director recognized the benefits of having the data available in this format and approved the request. According to the FBI:

"[The IDW] is a centralized, web-enabled, closed system repository for intelligence and investigative data. This system, maintained by the FBI, allows appropriately trained and authorized personnel throughout the country to query for information of relevance to investigative and intelligence matters. In addition to BSA data provided by FinCEN, IDW includes information contained in myriad other law enforcement and intelligence community databases. The benefits of IDW include the ability to efficiently and effectively access multiple databases in a single query. As a result of the development of this robust information technology, a review of data that might have previously taken days or months now takes only minutes or seconds."

The FBI noted that FinCEN provides the IDW with regular updates of the BSA data. Also, the FBI told us that it has not had any discussions with FinCEN regarding ways to enhance FinCEN's link-analysis capability. Generally, link analysis involves the use of data mining and other computerized techniques to identify relationships across organizations, people, places, and events (a minimal illustration follows below). Rather, the FBI noted that it requested FinCEN to assign an analyst to the FBI's Terrorist Financing Operations Section. Such an assignment, the FBI explained, would provide FinCEN access to additional data sources, which would be useful to FinCEN in performing its various roles.
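As a minimal illustration of link analysis in the sense just described, the sketch below connects report subjects that share an attribute. The records, field names, and matching rule are hypothetical; this is not FinCEN's or the FBI's actual schema or tooling.

```python
# Minimal link-analysis sketch: connect report subjects that share an
# attribute (address or phone). Hypothetical data and field names only.
from collections import defaultdict
from itertools import combinations

reports = [  # hypothetical suspicious-activity-style records
    {"subject": "Alpha Trading LLC", "address": "12 Pier Rd", "phone": "555-0101"},
    {"subject": "B. Smith",          "address": "12 Pier Rd", "phone": "555-0199"},
    {"subject": "C. Jones",          "address": "40 Oak Ave", "phone": "555-0101"},
    {"subject": "Delta Exports",     "address": "7 Main St",  "phone": "555-0142"},
]

# Index subjects by each shared attribute value.
index = defaultdict(set)
for r in reports:
    for field in ("address", "phone"):
        index[(field, r[field])].add(r["subject"])

# Emit an edge for every pair of subjects sharing an attribute value.
edges = set()
for (field, value), subjects in index.items():
    for a, b in combinations(sorted(subjects), 2):
        edges.add((a, b, field))

for a, b, field in sorted(edges):
    print(f"{a} <-> {b} (shared {field})")
```

In a real system, the resulting graph, rather than individual records, becomes the unit of analysis, so that indirect relationships surface in a single query.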
FinCEN told us that it accepted the FBI’s offer almost immediately, and, subsequently, in January 2006, the designated FinCEN analyst reported to the FBI to begin initial training (2 weeks) with Terrorist Financing Operations Section personnel. More recently, in March 2006, in providing us feedback on this arrangement and other interactions, the FBI commented that it highly values its strong partnership with FinCEN. In further reference to analyzing SARs and developing counterterrorism-related leads, we note that FinCEN developed and transmitted a total of four referrals to FIUs during the past 4 fiscal years, 2002 through 2005. Of these four referrals, according to FinCEN, two were sent to Spain’s FIU, one was sent to the United Kingdom’s FIU, and one was sent to both Canada’s FIU and the United Kingdom’s FIU. In addition to these proactive referrals, FinCEN emphasized that it regularly interacts with foreign FIUs to explore opportunities for working on issues of mutual interest. These efforts, according to FinCEN, essentially achieve the same goals and results as proactive referrals—and perhaps in a more tailored and effective manner. Facilitating the cross-border exchange of information is a core function of FIUs. FinCEN plays a key role in fostering the secure exchange of information among FIUs, given that FinCEN operates and maintains the Egmont Secure Web. An Internet-based system, the Egmont Secure Web is used by FIUs primarily for its encrypted e-mail capability to exchange sensitive case information. FinCEN initially launched the Egmont Secure Web in 1997, and its development was funded solely by the Treasury Forfeiture Fund. Operationally, according to FinCEN, the Egmont Secure Web is of paramount importance to FIUs. For instance, the Egmont Group’s guidelines—Best Practices for the Exchange of Information between Financial Intelligence Units—state that, where appropriate, FIUs should use the Egmont Secure Web, which permits secure online information sharing among members. According to FinCEN, the system has encouraged unprecedented cooperation among FIUs because of its security, ease of use, and quick response time. Also, FinCEN officials explained that the Egmont Secure Web provides online access to many reference materials, such as official Egmont procedural documents, FIU contact information, case examples, recently noted trends, and minutes from all Egmont meetings. A large majority (96) of the Egmont Group’s 101 members are connected to the Egmont Secure Web. As of February 2006, 67 of the 96 FIUs each had one Egmont Secure Web user account, and 29 other FIUs each had two or more user accounts (see table 2). With 49 user accounts, FinCEN’s total is nearly three times that of Belgium’s FIU, which has the second largest number of user accounts (18). For fiscal year 2004, FinCEN reported that it supported 844 law enforcement cases via information exchanges with foreign jurisdictions and that an estimated 98 percent of FinCEN’s responses to these jurisdictions went through the Egmont Secure Web. FinCEN is in the process of modernizing the system by acquiring upgraded hardware and software. FinCEN officials estimated that the upgrade will be completed by mid-2006 and cost approximately $631,000. Further, the officials noted the following information: The U.S. government is the owner of the system and all other users are stakeholders. In effect, FinCEN is providing a service to a group (i.e., the Egmont Group)—of which FinCEN itself is a member. 
The Egmont Secure Web meets or exceeds the requirements for information systems that handle sensitive but unclassified information. The issuance of a digital certificate gives some assurance that users have met security requirements, but the burden remains on each FIU to ensure that its users comply. As a further safeguard, the officials noted that the Egmont Secure Web does not give foreign FIUs access to FinCEN’s internal systems—for example, the FIUs have no direct access to BSA data. Important tools for monitoring and improving the performance of any organization include implementing an effective management information system and obtaining feedback from customers. Such tools are particularly relevant for FinCEN, a networking organization that has a significant role and responsibilities in combating international financial crime. When we asked what trends are reflected in data regarding the timeliness of FinCEN’s responses to foreign FIU requests for assistance, FinCEN officials said that their management information system does not lend itself easily to the identification of trends. The officials noted, however, that FinCEN was developing a new case management system to make statistical information more readily available. According to FinCEN officials, full implementation of the new system is scheduled for the last quarter of fiscal year 2008. The officials told us that as of March 2006, no decision had been reached on the new system’s hardware or software platform. However, the officials noted that in developing the new system, FinCEN is coordinating with Treasury’s Enterprise Architecture Office and also is complying with applicable guidance from the Office of Management and Budget. Available case-management statistics show that FinCEN receives more requests from foreign FIUs than it submits to these counterparts. As table 3 indicates, the number of incoming requests to FinCEN has been about twice the number of outgoing requests in recent fiscal years. In managing and processing incoming requests, FinCEN’s policy is to give priority to terrorism-related requests and other “expedite” requests, such as those involving imminent law enforcement action or other extenuating time-sensitive circumstances. FinCEN officials said that responses to such requests are prepared on an expedited basis. Otherwise, the officials said that requests from foreign FIUs are to be handled on a first-come, first-served basis. Generally, the officials noted that the timeliness of FinCEN in responding to requests from foreign FIUs can depend on a variety of factors, such as the volume of requests, the types and amount of information being requested, the number of subjects involved (e.g., persons and accounts), whether additional clarifications of the requests are needed, and even the extent of time zone differences between FinCEN and the foreign FIUs. According to FinCEN data, the average time for responding to foreign requests was 106 days in fiscal year 2002 and increased to 124 days in fiscal year 2004. The FinCEN officials attributed this increase to various reasons, including the growing number of FIUs and a loss of contract staff who handled the majority of the requests from foreign FIUs. More recently, FinCEN officials said that the average response time had decreased to 63 days for fiscal year 2005 (through July 25, 2005). 
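For illustration only, the following short Python sketch shows the kind of response-time computation that a case management system could make routine, grouping response times by fiscal year and reporting the average. The request log and its field layout are hypothetical, not FinCEN’s actual data model.

from datetime import date

# Hypothetical request log: (fiscal_year, date_received, date_responded).
requests = [
    (2004, date(2003, 11, 3), date(2004, 2, 14)),
    (2004, date(2004, 1, 20), date(2004, 5, 30)),
    (2005, date(2004, 12, 1), date(2005, 2, 1)),
]

# Group response times (in days) by fiscal year, then report averages.
by_year = {}
for fy, received, responded in requests:
    by_year.setdefault(fy, []).append((responded - received).days)

for fy in sorted(by_year):
    days = by_year[fy]
    print(f"FY{fy}: {sum(days) / len(days):.0f}-day average over {len(days)} requests")

With statistics kept in a structured form like this, year-over-year trends of the sort we asked about become a simple query rather than a manual reconstruction.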
To further improve response times, the officials indicated that FinCEN was (1) shifting additional employees to its Office of Global Support (within the Analysis and Liaison Division), which is responsible for processing requests from foreign FIUs, and (2) hiring contract staff to specifically handle FIU requests for information. For fiscal year 2005 (through July 25, 2005), a total of 561 requests were made to FinCEN by 75 foreign customers, primarily FIUs. Sixteen FIUs accounted for two-thirds of the total requests, as table 4 shows. One management priority of FinCEN is to periodically conduct customer satisfaction surveys. The purpose of such surveys is to “identify strengths and opportunities to improve services to external clients.” FinCEN contracted with an independent research organization to conduct the most recent survey, which was designed and implemented from August to October 2005. To obtain feedback on FinCEN’s support for investigative cases, one survey instrument was used for both domestic law enforcement customers (federal, state, and local) and international customers (FIUs). The survey instrument was designed to obtain feedback on various aspects of FinCEN’s services provided during fiscal year 2005, such as the ease of making requests, the timeliness of responses, and the value or usefulness of information provided. To facilitate distribution of the survey instrument, FinCEN provided the contractor with a list of 325 customers, consisting of both domestic and international customers. According to FinCEN, this total represented all customers who had requested assistance from FinCEN in fiscal year 2005 and for whom FinCEN had valid e-mail addresses. All 325 customers were invited via e-mail to participate in the Web-based survey. Of the 325 customers, 41 were FIUs. In answering our inquiry, FinCEN officials were unable to explain why all FIUs that requested assistance from FinCEN in fiscal year 2005 were not included in the survey. Of the 325 domestic and international customers invited to participate in the survey, 78 responded, giving an overall response rate of 24 percent. Although not broken out separately in the contractor’s final report, the FIU-related response rate was much lower, with only 2 of the 41 FIUs responding. Because of this low response rate, the survey yielded insufficient information to help FinCEN identify strengths and opportunities to improve services to external clients. FinCEN did receive feedback on the level of satisfaction of two FIUs, which is helpful; however, the experiences of these two FIUs cannot be interpreted as representing the experiences of other FIUs. Generally, in conducting a survey, various efforts to promote the highest possible response rate can be considered during both survey development and survey administration. During survey development, consideration can be given to individual and organization characteristics that may affect the prospective respondents’ level of cooperation in completing the survey. For instance, the prospective respondents may not want to be critical of the survey’s sponsor. Another factor that affects cooperation is the burden that completing the survey instrument imposes on prospective respondents in terms of their time and the level of effort required to understand the questions and formulate responses. 
Pretesting the survey instrument is a way to help evaluate whether the potentially adverse effects of these types of factors have been minimized. Further, during survey administration, follow-up efforts with prospective respondents can help to promote the highest possible response rate. Such follow-up efforts can include e-mail messages, letters, or telephone calls. FinCEN officials told us they did not know why the response rate from FIUs was low or whether the survey included any follow-up efforts to obtain responses from additional FIUs. Periodic customer surveys are not the only method FinCEN uses to obtain performance feedback. For instance, in responding to each request for assistance from FIUs, the practice of FinCEN is to include an accompanying form that solicits feedback regarding the timeliness of FinCEN’s response and the usefulness of the specific information provided. According to FinCEN officials, many of the feedback forms either are not returned or are returned with annotations indicating, for example, that the usefulness of the information provided by FinCEN may not be known until some future date. However, even if request-specific feedback is obtained, FinCEN officials recognize the benefits of conducting more comprehensive efforts, such as the periodic customer satisfaction surveys. This recognition, as mentioned previously, is reflected in FinCEN’s Strategic Plan. FinCEN plays a critically important role in international efforts to combat money laundering and terrorist financing. It has been a leader in adopting and implementing international money laundering countermeasures and in supporting and advancing the Egmont Group’s principles and activities. A key part of FinCEN’s international role has been its efforts to respond to requests for information related to possible international financial crime. Yet FinCEN’s method for obtaining performance feedback data from global partners is flawed. Relevant feedback data include whether FIUs find the information provided by FinCEN to be substantive, timely, and useful—or how information-sharing efforts could be improved. Without such data, FinCEN is not in the best position to help the international community combat financial crime. In its Strategic Plan, FinCEN recognizes the importance of periodically surveying its customers to “identify strengths and opportunities to improve services.” However, FinCEN’s most recent customer satisfaction survey of its global partners had limited coverage—with less than one-half of all FIUs invited to participate. Also, the response rate was very low, with no follow-up efforts directed specifically at nonresponding FIUs. In the future, FinCEN’s customer satisfaction surveys of FIUs need to be more inclusive and reflect higher response rates if the surveys are to serve as a useful management information tool for monitoring and enhancing performance. The importance of monitoring and improving performance by obtaining feedback from customers is highlighted by the new operational role of FIUs in combating terrorist financing—a role in which the sharing or exchanging of information can be especially time critical. We recommend that the Director of FinCEN take appropriate steps in developing and administering future customer satisfaction surveys to help ensure more comprehensive coverage of and higher response rates from FIUs. For example, such steps could include pretesting the survey instrument and following up with nonresponding FIUs. 
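To make the survey-administration point concrete, here is a minimal Python sketch, using an entirely hypothetical roster, of the two practices the recommendation names: computing response rates by customer segment (so that an FIU shortfall is visible rather than hidden in an overall figure) and queuing nonrespondents for follow-up.

# Hypothetical survey roster: (customer_id, segment, responded).
roster = [
    ("FIU-001", "FIU", False),
    ("FIU-002", "FIU", True),
    ("LE-101", "domestic", True),
    ("LE-102", "domestic", False),
]

def response_rate(rows):
    return sum(responded for _, _, responded in rows) / len(rows)

print(f"overall response rate: {response_rate(roster):.0%}")
for segment in ("FIU", "domestic"):
    segment_rows = [row for row in roster if row[1] == segment]
    print(f"{segment} response rate: {response_rate(segment_rows):.0%}")

# Nonrespondents are queued for follow-up e-mails, letters, or calls.
follow_up = [cid for cid, _, responded in roster if not responded]
print("follow up with:", follow_up)

Tracked this way, a 2-of-41 FIU response rate surfaces immediately, and the follow-up queue gives the survey administrator a concrete work list.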
We provided a draft of this report for comment to the departments of the Treasury, State, Homeland Security, and Justice, and the Federal Reserve Board. We received written responses from each agency. The Department of the Treasury responded that it supports our recommendation that the Director of FinCEN take appropriate steps in developing and administering future customer satisfaction surveys to help ensure more comprehensive coverage of and higher response rates from FIUs. The Department of the Treasury commented that it is committed to ensuring that customer surveys provide reliable performance feedback. The Department of State commented that our October 2005 report—Terrorist Financing: Better Strategic Planning Needed to Coordinate U.S. Efforts to Deliver Counter-Terrorism Financing Training and Technical Assistance Abroad (GAO-06-19)—was not relevant for discussion in this report. In our view, however, the October 2005 report provides relevant perspectives on interagency coordination and strategic planning, so we retained a brief discussion of it in this report. The Department of State also provided a technical comment regarding section 311 of the USA PATRIOT Act, which we incorporated where appropriate. The Department of Homeland Security and the Federal Reserve Board responded that they had no comments on this report. The Department of Justice provided technical comments only, which we incorporated in this report where appropriate. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to interested congressional committees and subcommittees. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report or wish to discuss the matter further, please contact me at (202) 512-8777 or stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other key contributors to this report were Danny Burton, Frederick Lyles, Natasha Ewing, Thomas Lombardi, and Evan Gilman. In response to a request from the Chairman, House Committee on the Judiciary, we reviewed the global or international-related efforts of the Department of the Treasury and the Financial Crimes Enforcement Network (FinCEN) to combat money laundering and terrorist financing. Section 330 of the USA PATRIOT Act expresses the sense of the Congress that the President should direct the Secretary of State, the Attorney General, or the Secretary of the Treasury, in consultation with the Board of Governors of the Federal Reserve, to seek to enter into negotiations with foreign jurisdictions that may be utilized by a foreign terrorist organization in order to further cooperative efforts to ensure that foreign banks and other financial institutions maintain adequate records of transactions and account information relating to any foreign terrorist organization or member thereof. The negotiators should also seek to establish a mechanism whereby those records would be made available to U.S. law enforcement officials and domestic financial institution supervisors, when appropriate. 
Section 361 of the USA PATRIOT Act established FinCEN as a statutory bureau in the Treasury Department and listed FinCEN’s various duties and powers, which include coordinating with foreign counterparts—that is, financial intelligence units (FIUs) in other countries. These units are specialized governmental agencies created to combat money laundering, terrorist financing, and other financial crimes. Each FIU is the respective nation’s central agency responsible for obtaining information (e.g., suspicious transaction reports) from financial institutions, processing or analyzing the information, and then disseminating it to appropriate authorities. Specifically, our review focused on the following questions regarding efforts under sections 330 and 361 of the USA PATRIOT Act to combat money laundering and terrorist financing: Under section 330 of the USA PATRIOT Act, how has the Department of the Treasury interacted or negotiated with foreign jurisdictions to promote cooperative efforts to combat money laundering and terrorist financing? Under section 361, how has FinCEN contributed to establishing FIUs in foreign countries and enhancing the capabilities of these units to combat money laundering and terrorist financing? What actions is FinCEN taking to maximize its performance as a global partner in combating money laundering and terrorist financing? Initially, in addressing the principal questions, we reviewed sections 330 and 361 of the USA PATRIOT Act and relevant legislative histories. Also, we reviewed information available on the Web sites of federal entities, including the departments of the Treasury (and FinCEN), Justice, State, and Homeland Security. Similarly, we reviewed information available on the Web sites of relevant multilateral or international bodies, such as (1) the Financial Action Task Force on Money Laundering (FATF), an intergovernmental entity whose purpose is to establish international standards and to develop and promote policies for combating money laundering and terrorist financing; (2) the various FATF-style regional bodies; (3) the International Monetary Fund; (4) the World Bank; and (5) the Egmont Group of FIUs. To obtain additional background and overview perspectives, we conducted a literature search to identify relevant reports, studies, articles, and other documents—including congressional hearing testimony—regarding U.S. and multilateral efforts to combat money laundering and terrorist financing. Regarding section 330 of the USA PATRIOT Act, to determine how the Department of the Treasury has interacted or negotiated with foreign jurisdictions to promote cooperative efforts to combat money laundering and terrorist financing, we interviewed responsible officials at and reviewed relevant documentation obtained from the departments of the Treasury, Justice, and State and the Federal Reserve Board. Also, because our preliminary inquiries indicated that efforts to accomplish the goals articulated under section 330 largely involve interactions with multilateral organizations—particularly FATF—we focused especially on the efforts of Treasury’s Office of Terrorist Finance and Financial Crime, which leads the U.S. delegation to FATF and is the department’s policy and enforcement entity regarding money laundering and terrorist financing. 
Further, because section 330 does not specify any consequences or penalties for noncooperative parties or countries, we determined the availability of incentive or pressure mechanisms that could be used in conjunction with negotiations. In this regard, on the basis of Treasury’s response to our inquiry, we identified federal actions taken under USA PATRIOT Act section 311, which authorizes the Secretary of the Treasury—in consultation with the Secretary of State and the Attorney General—to find that reasonable grounds exist for concluding that a foreign jurisdiction, a financial institution, a class of transactions, or a type of account is of “primary money laundering concern.” If such a finding is made, U.S. financial institutions could be required to take certain “special measures” against the applicable jurisdictions, institutions, accounts, or transactions. The special measures can range from enhanced record keeping or reporting obligations to a requirement to terminate correspondent banking relationships with the designated entity. In addressing this topic, we first obtained data on the annual growth in the number of FIUs over the past decade—from 1995, when the Egmont Group of FIUs was formed, to the present. Also, we obtained overview information on the history, purposes, and functioning of FIUs. For instance, the overview information—which was available on the Egmont Group’s Web site (www.egmontgroup.org) or was otherwise published—included the following: Statement of Purpose of the Egmont Group of Financial Intelligence Units; Principles for Information Exchange Between Financial Intelligence Units for Money Laundering and Terrorism Financing Cases; Best Practices for the Exchange of Information between Financial Intelligence Units; and International Monetary Fund and World Bank, Financial Intelligence Units—An Overview, 2004. In further reference to establishing FIUs and enhancing their capabilities, we obtained information on the efforts (e.g., training and technical support) of FinCEN and other federal contributors, such as Treasury’s Office of Technical Assistance and the State Department. In so doing, we interviewed responsible officials at and reviewed relevant documentation obtained from FinCEN, Treasury, and State. The federal officials we contacted included FinCEN’s Deputy Director, who chairs the Egmont Committee, which functions as the consultation and coordination mechanism for FIU heads and the Egmont Group’s five working groups (information technology, legal, operational, training, and outreach). The documentation we reviewed included FinCEN’s annual reports and strategic plans as well as the international narcotics control strategy reports released annually by the State Department’s Bureau for International Narcotics and Law Enforcement Affairs—reports that present information on FinCEN’s and other federal agencies’ efforts to create and improve FIUs. In identifying these federal efforts, we did not attempt to disaggregate or separately quantify contributions attributable to the respective federal agency. Rather, we made inquiries regarding any potential issues involving interagency coordination of federal efforts. Further regarding the capability of FIUs, we identified and reviewed available studies or reports. In particular, we reviewed a report prepared by the International Monetary Fund (IMF) and the World Bank that presented comparative or multicountry results based on mutual evaluations of nations’ compliance with the FATF recommendations. 
The study—Twelve-Month Pilot Program of Anti-Money-Laundering and Combating the Financing of Terrorism (AML/CFT) Assessments–Joint Report on the Review of the Pilot Program, March 10, 2004—summarized the results of the mutual evaluations of 41 jurisdictions, conducted during the 12-month period that ended in October 2003. The assessments used a common methodology adopted by FATF and endorsed by the Executive Boards of IMF and the World Bank. To obtain more current transnational perspectives on the capability of FIUs, we attended (as an observer) the most recent annual plenary meeting (June 30 to July 1, 2005) of the Egmont Group. At the plenary meeting, held in Washington, D.C., a summary of FIU-related assessment findings was presented. The information was derived from the results of mutual evaluations or assessments (of 29 countries) conducted from 2003 to 2005 using the common methodology endorsed by FATF, IMF, and the World Bank. We inquired about FinCEN’s efforts to update or modernize the Egmont Secure Web, which is the Internet-based communications system developed and maintained by FinCEN and used by FIUs worldwide to share or exchange information. Generally, the Egmont Secure Web is considered to be of paramount importance to the operations of FinCEN and foreign FIUs. For instance, the Egmont Group’s guidelines—Best Practices for the Exchange of Information between Financial Intelligence Units—state that, where appropriate, FIUs should use the Egmont Secure Web, which permits secure online information sharing among members. FinCEN is in the process of modernizing the system’s 1997 architecture by acquiring upgraded hardware and software. A large majority (96) of the Egmont Group’s 101 members are connected to the Egmont Secure Web. Also, we reviewed annual statistical information on international-related requests for assistance in developing or investigating cases. Specifically, for fiscal years 2002 to 2005, we obtained statistics on requests for assistance submitted by foreign FIUs to FinCEN. To the extent permitted by available data, we analyzed the statistical information on incoming requests in reference to the subject matter of the request, the country of submission, and the timeliness of FinCEN’s response to the submitting FIU. We did not analyze the quality of FinCEN’s responses to the incoming requests for assistance. However, we reviewed the results of the most recent customer feedback survey conducted by FinCEN. Also, we inquired about FinCEN’s efforts to better monitor or improve timeliness performance by developing a new case management system and assigning additional employees to the Office of Global Support, which is responsible for processing requests from foreign FIUs. Further, we inquired about FinCEN’s efforts to enhance its analytical capabilities to handle more complex cases and support the nation’s focus on detecting and preventing terrorist financing. For example, we contacted the Federal Bureau of Investigation’s (FBI) Terrorist Financing Operations Section and the Foreign Terrorist Tracking Task Force. We conducted our work from June 2005 to March 2006 in accordance with generally accepted government auditing standards. Regarding the statistical information we obtained from FinCEN—i.e., information concerning requests for assistance submitted by foreign FIUs to FinCEN—we discussed the sources of the data with FinCEN officials and worked with them to resolve discrepancies we identified with the data they provided. 
As resolved and presented in this report, we determined that these data were sufficiently reliable for the purposes of this review. This appendix presents summary information regarding the purposes and functioning of the Financial Action Task Force on Money Laundering and the various FATF-style regional bodies—international entities whose mission focuses on combating money laundering and terrorist financing. The summary information is derived largely from FATF’s Web site (www.fatf-gafi.org), which provides links to the regional bodies. Also, we discussed the information with Treasury Department officials. Initially, FATF was created in 1989 by the G7 nations in response to growing concerns about money laundering. However, after the events of September 11, FATF’s mission was expanded to combat the financing of terrorism. The mission of FATF consists of three principal activities—(1) setting standards for combating money laundering and terrorist financing, (2) evaluating the progress of nations in implementing measures to meet the standards, and (3) identifying and studying methods and trends regarding money laundering and terrorist financing. In fulfilling this mission, FATF is assisted by various FATF-style regional bodies that have been established since 1992. As table 5 indicates, FATF and the related regional bodies encompass member jurisdictions around the globe. FATF recommendations are designed to ensure that each nation has in place a set of countermeasures against money laundering and terrorist financing. In 1990, FATF issued its “Forty Recommendations on Money Laundering.” In October 2001, the month following the terrorist attacks in the United States, FATF issued “Eight Special Recommendations on Terrorist Financing.” More recently, in October 2004, FATF published a ninth special recommendation on terrorist financing to target cross-border movements of currency and monetary instruments. Table 6 summarizes the “Forty Recommendations on Money Laundering” and the “Nine Special Recommendations on Terrorist Financing.” Collectively, these “40 plus 9” recommendations issued by FATF are recognized as the international standards for combating money laundering and terrorist financing. Although the FATF recommendations do not constitute a binding international convention, many countries—e.g., member nations of FATF and the FATF-style regional bodies—have made a political commitment to combat money laundering and terrorist financing by implementing the recommendations. Moreover, the international community has recognized the need for monitoring to ensure that countries effectively implement the FATF recommendations. One of the means for monitoring compliance with FATF recommendations is a mutual evaluation process whereby a team of experts conducts on-site visits to assess the progress of member countries. To guide the assessment of a country’s compliance with international standards, a widely adopted methodology is used—Methodology for Assessing Compliance with the FATF 40 Recommendations and the FATF 9 Special Recommendations (updated as of February 2005). In addition to its use by FATF mutual evaluation teams, the Methodology has also been approved or endorsed by the FATF-style regional bodies and the Executive Boards of the International Monetary Fund and the World Bank. The Methodology reflects the principles and follows the structure of the FATF recommendations. 
For each of the recommendations, the Methodology enumerates elements (“essential criteria”) that should be present for full compliance. For instance, table 7 shows the essential criteria used for assessing implementation of FATF Recommendation 26, which calls for each nation to establish and empower a financial intelligence unit. This appendix presents summary information regarding the growth of the Egmont Group, which is an informal global association of governmental operating units created to support their respective nation’s or territory’s efforts to combat money laundering and terrorism financing. More detailed information about the purposes and functioning of the Egmont Group and its members is available at the entity’s Web site (www.egmontgroup.org). On June 9, 1995, representatives of various nations (including the United States) and international organizations met at the Egmont-Arenberg palace in Brussels, Belgium, to discuss ways to enhance mutual cooperation in combating the global problem of money laundering. A result was creation of the Egmont Group, whose members are the specialized anti-money-laundering organizations known as financial intelligence units. In attendance at the 1995 meeting were representatives of 14 of these governmental units (“disclosure-receiving agencies”) that became the first Egmont Group members. In the decade since 1995, the group’s membership has increased significantly, reaching a total of 101 jurisdictions as of July 2005 (see table 8). The Egmont Group defines an FIU as: “A central, national agency responsible for receiving, (and as permitted, requesting), analyzing and disseminating to the competent authorities, disclosures of financial information: I. concerning suspected proceeds of crime and potential financing of terrorism, or II. required by national legislation or regulation, in order to combat money laundering and terrorism financing.” This definition, which was adopted in June 2004 at the Egmont Group’s plenary meeting in Guernsey, reflects an expansion of the role of FIUs to include combating terrorist financing. The Egmont Group’s principles for information exchange include the following: “FIUs should be able to exchange information freely with other FIUs on the basis of reciprocity or mutual agreement and consistent with procedures understood by the requested and requesting party. Such exchange, either upon request or spontaneously, should provide any available information that may be relevant to an analysis or investigation of financial transactions and other relevant information and the persons or companies involved.” “An FIU requesting information should disclose, to the FIU that will process the request, at a minimum the reason for the request, the purpose for which the information will be used and enough information to enable the receiving FIU to determine whether the request complies with its domestic law.” “Information exchanged between FIUs may be used only for the specific purpose for which the information was sought or provided.” “The requesting FIU may not transfer information shared by a disclosing FIU to a third party, nor make use of the information in an administrative, investigative, prosecutorial, or judicial purpose without the prior consent of the FIU that disclosed the information.” “If necessary the requesting FIU should indicate the time by which it needs to receive an answer. Where a request is marked ‘urgent’ or a deadline is indicated, the reasons for the urgency or deadline should be explained.” “FIUs should give priority to urgent requests. 
If the receiving FIU has concerns about the classification of a request as urgent, it should contact the requesting FIU immediately in order to resolve the issue. Moreover, each request, whether or not marked as ‘urgent,’ should be processed in the same timely manner as domestic requests for information.” “As a general principle, the requested FIU should strive to reply to a request for information, including an interim response, within 1 week from receipt in the following cases: if it can provide a positive/negative answer to a request regarding information it has, or if it is unable to provide an answer due to legal impediments.” “Whenever the requested FIU needs to have external databases searched or query third parties (such as financial institutions), an answer should be provided within 1 month after receipt of the request.” “If the results of the enquiries are still not all available after 1 month, the requested FIU should provide the information it already has in its possession or at least give an indication of when it will be in a position to provide a complete answer. This may be done orally.” “FIUs should consider establishing mechanisms in order to monitor request-related information, enabling them to detect new information they receive regarding transactions, STRs, etc., that are involved in previously received requests. Such a monitoring system would enable FIUs to inform former requesters of new and relevant material related to their prior request.”
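As an illustration of the timeliness guidelines quoted above, the following Python sketch computes the reply-by dates they imply, using the 1-week and 1-month periods (a month is taken here as 30 days). The request log and its fields are hypothetical, not an actual FIU system.

from datetime import date, timedelta

# Reply-by dates implied by the guidelines: 1 week when the FIU can
# answer from its own holdings (or must report a legal impediment),
# 1 month when external databases or third parties must be queried.
def reply_due(date_received, needs_external_query):
    return date_received + timedelta(days=30 if needs_external_query else 7)

pending = [
    ("REQ-17", date(2005, 7, 1), False),
    ("REQ-18", date(2005, 7, 1), True),
]
today = date(2005, 7, 9)
for request_id, received, external in pending:
    due = reply_due(received, external)
    status = "OVERDUE" if today > due else "on time"
    print(f"{request_id}: reply due {due} ({status})")

A monitoring mechanism of the kind the last guideline describes would extend this log so that new incoming material could be matched against previously received requests.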
Money laundering and terrorist financing can severely affect the nation's economy and also result in the loss of lives. To combat these transnational crimes, the Treasury Department (Treasury) and its component bureau, the Financial Crimes Enforcement Network (FinCEN), have key roles. Section 330 of the USA PATRIOT Act encourages the federal government to engage foreign jurisdictions in negotiations to ensure that foreign banks and financial institutions maintain adequate records to combat international financial crime. Treasury plays a lead role in facilitating such efforts. In accordance with its various responsibilities codified by section 361, FinCEN is to coordinate with its foreign counterparts--financial intelligence units (FIU). This report describes (1) Treasury's approach for negotiating with foreign jurisdictions, (2) how FinCEN has contributed to establishing FIUs in foreign countries and enhancing the capabilities of these units, and (3) what actions FinCEN is taking to maximize its performance as a global partner. With Treasury's leadership, the U.S. interagency community has been acting to accomplish the goals articulated in section 330 of the USA PATRIOT Act. In particular, according to Treasury, negotiations with foreign jurisdictions are being accomplished through U.S. interactions with the Financial Action Task Force on Money Laundering (FATF), an intergovernmental entity that has developed international standards for combating money laundering and terrorist financing. Treasury emphasized that enactment of section 330 provided a welcomed congressional endorsement of long-standing U.S. policy to combat international financial crime by negotiating with foreign jurisdictions through multilateral organizations, such as FATF. Since 1995, FinCEN has helped foreign jurisdictions establish new FIUs and improve the capabilities of existing units. The number of FIUs has jumped from 14 in 1995 to 101 currently, partly because of training and technical support provided by FinCEN and Treasury's Office of Technical Assistance and funding provided by the Department of State. Given the growth in the number of FIUs, future efforts likely will involve giving more attention to improving the capabilities of existing units, especially in reference to combating terrorist financing--an operational task now included in the formal definition of an FIU. To maximize performance as a global partner, FinCEN is taking various actions, such as assigning an analyst to the Federal Bureau of Investigation's Terrorist Financing Operations Section. Also, FinCEN is modernizing the Egmont Secure Web, which is used by FIUs worldwide to exchange sensitive case information. To enhance its responsiveness to FIUs that request case assistance, FinCEN is allocating additional staff to its Office of Global Support and also is developing a new case management system. However, in the most recent customer satisfaction survey, FinCEN invited less than one-half of FIUs to participate and received only two responses. Future surveys would need to be more inclusive and incorporate better survey development and administration practices, such as follow-up efforts to achieve higher response rates, if the surveys are to serve as a useful management information tool for monitoring and enhancing performance.
The National Guard of the United States, which performs both federal and state missions, represents about 52 percent of the armed services’ selected reserve and consists of approximately 457,000 members: about 350,000 in the Army National Guard and about 107,000 in the Air National Guard. Overall, the Army National Guard makes up more than one-half of the Army’s ground combat forces and one-third of its support forces (e.g., military police or transportation units) and has units in more than 3,000 armories and bases in all 50 states and 4 U.S. territories. Air National Guard personnel make up 20 percent of the total Air Force, with 88 flying units and 579 mission support units at more than 170 installations throughout the United States. The majority of Guard members are employed on a part-time basis, typically training 1 weekend per month and 2 weeks per year. The Guard also employs some full-time personnel who assist unit commanders in administrative, training, and maintenance tasks. The National Guard Bureau is the federal entity responsible for the administration of the National Guard. National Guard personnel may be ordered to perform duty under three different authorities: Title 10 or Title 32 of the United States Code or pursuant to state law in a state active duty status. Personnel in a Title 10 status are federally funded and under federal command and control. Personnel may enter Title 10 status by being ordered to active duty in their status as federal Reserves, either voluntarily or under appropriate circumstances involuntarily (i.e., mobilization). Personnel in Title 32 status are federally funded but under state control. Title 32 is the status in which National Guard personnel typically perform training for their federal mission. Personnel performing state active duty are state-funded and under state command and control. Under state law, the governor may order National Guard soldiers to perform state active duty to respond to emergencies, disasters, civil disturbances, and for other reasons authorized by state law. The Guard is organized, trained, and equipped for its federal missions, which take priority over state missions. As we reported in our April 2004 testimony, the National Guard’s involvement in federal operations has increased substantially since the September 11 terrorist attacks. Three days after the attacks, the President, under Title 10, authorized reservists to be activated for up to 2 years. This authority was subsequently used to activate reservists for overseas warfighting and stabilization missions in Operations Iraqi Freedom and Enduring Freedom in Afghanistan as well as for domestic missions, such as flying patrols and supporting federal civilian agencies in guarding the nation’s borders. As figure 1 illustrates, as of May 2004, about 102,800 Army and Air National Guard members—the vast majority of whom were Army National Guard members—were on active duty. Although both Army and Air National Guard activations increased in the aftermath of September 11, the Air National Guard activations had declined to pre-September 11 levels by October 2003, while Army National Guard activations continued to rise. When activated under Title 10, the National Guard is subject to the Posse Comitatus Act, which prohibits the military from engaging in law enforcement activities unless expressly authorized by the Constitution or law. 
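The three duty statuses described above reduce to two attributes: who funds the duty and who commands it. The following lookup table is purely illustrative, in Python, and is not an official DOD schema.

# The three duty statuses, keyed to funding source and command level.
DUTY_STATUSES = {
    "Title 10": {"funding": "federal", "command": "federal"},
    "Title 32": {"funding": "federal", "command": "state"},
    "State active duty": {"funding": "state", "command": "state"},
}

for status, attributes in DUTY_STATUSES.items():
    print(f"{status}: {attributes['funding']} funding, "
          f"{attributes['command']} command and control")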
The Army and the Air Force have different strategies for structuring and providing resources for their Guard components that reflect each service’s planned use and available resources. While the Army National Guard’s structure requires 375,000 personnel to be fully manned, in fiscal year 2004, the Army National Guard was authorized 350,000 soldiers, resulting in many units being manned below wartime requirements. Using DOD planning guidance, Army National Guard units are provided varying levels of resources according to the priority assigned to their warfighting missions. Because much of the Army National Guard was expected to be used as a follow-on force in the event of an extended conflict, many of its units were structured with fewer personnel and lesser amounts of equipment than they would need to deploy, with the assumption that there would be time to supply additional personnel, equipment, and training before these units would be needed. For example, Army National Guard divisions, which include over 117,000 soldiers and provide the majority of the combat capability in the Army National Guard, are supplied with 65 to 74 percent of their required personnel and 65 to 79 percent of their required equipment, and are less ready for their missions. This approach to managing limited resources is referred to as “tiered readiness.” In contrast, the Air National Guard was integrated into the Air Force’s operational force and maintained at readiness levels comparable to its active component counterparts. This approach enables the Air National Guard to be ready to deploy on short notice. Since September 11, Guard members have also been activated for missions under the authority of state governors in both Title 32 and state active duty statuses. Title 32 status is generally used to train National Guard units and personnel to perform their federal mission. National Guard personnel also may perform operational (nontraining) missions in Title 32 status when authorized by federal statute. According to DOD, after September 11, the Guard performed other operational (nontraining) duties such as providing airport security in Title 32 status in response to presidential direction. National Guard personnel in Title 32 status have also provided support for events such as the G-8 Summit and the Democratic and Republican National Conventions. National Guard personnel have also served in a state active duty status, in which the Guard performs state missions under the command and control of the governor, with costs for these missions borne by the state. Missions typically performed in this status include providing assistance in response to natural disasters, such as fires and storms, that have not been declared federal disasters. Since September 11, governors have increasingly used this authority to activate Guard members to protect key assets in the states. Both at home and overseas, the Army and the Air National Guard have been adapting in several ways to meet the demands of current warfighting requirements, but some of the measures taken may challenge the Army National Guard’s efforts to provide ready forces for future operations. 
While the Army National Guard has met new warfighting requirements by retraining some units to acquire in-demand skills, tailoring others to provide particular capabilities, changing unit missions in some cases, and transferring personnel and equipment to meet combatant commander needs, these adaptations have reduced the readiness of its nondeployed units, in turn challenging the Army National Guard to prepare for future operations. The Army recognizes the need to restructure its active, Reserve, and Guard forces to respond more effectively to the new global security environment and is in the process of developing plans to make its forces more modular. However, its plans for restructuring Army National Guard forces are not finalized and do not provide detailed information on time frames for restructuring all the Guard’s units, whether the Guard’s equipment will be compatible with that of active units, or the costs of implementing these plans. The Air National Guard has also adapted to meet new warfighting requirements, but its readiness has not been as negatively affected because it has not experienced the continued high usage that the Army National Guard has and because its units are more fully equipped and manned for war. The Army National Guard has been adapting to the demands of current warfighting requirements but faces challenges in providing ready forces for future operations. The recent increased and expanded use of the National Guard illustrates the shift from the post-cold war military planning strategy, in which much of the Guard represented a force to follow the active military in the event of extended conflict, toward using the Guard as an operational force similar to the Air National Guard. Under the post-cold war strategy, the Army generally maintained most Army National Guard units at lower readiness levels under the assumption that additional personnel and equipment would be provided prior to deployment. While the Army National Guard’s adaptations since September 11 were intended to make deploying units more useful for current operations, these adaptations have caused the overall readiness of nondeployed Guard units to decline, which may hamper the Guard’s ability to meet the requirements of future warfighting operations overseas, particularly in Iraq. To meet the high demand for Army National Guard personnel for recent operations, the Army has alerted or mobilized over one-half of the Army National Guard’s personnel since September 11. In June 2004, Army National Guard activations peaked with almost 81,000 Army National Guard members—more than one-quarter of the Army National Guard’s force—activated for overseas military operations such as in Afghanistan and Iraq. Personnel with certain skills have been in particularly high demand. For example, as of June 2004, 95 percent of military police units had deployed, with 23 percent having deployed more than once, and at least 50 percent of units with specialties such as transportation, aviation, medical, and special operations had been activated. To alleviate the stress on these forces, the Army has retrained personnel in units with less-needed skills, such as field artillery, to provide skills in higher demand. For example, the Army recently changed the mission of 27 artillery units and retrained over 7,000 personnel to meet the need for additional military police and security forces. Some of these soldiers have already deployed to Iraq to perform missions such as convoy security. 
The Army has also adapted Guard units to meet the specific requirements of current overseas missions by tailoring units for particular purposes. In some cases, the Army took personnel with key capabilities from existing units and created new, smaller units whose personnel had skills specifically tailored to provide the capabilities required by the combatant commander. For example, the Army extracted 55 soldiers with military police skills from an armored battalion of about 600 soldiers to perform a security mission at Guantanamo Bay, Cuba. More than 35,000 Army National Guard soldiers—almost one-fifth of all soldiers utilized—deployed in these newly created, tailored units to support recent military operations. Over one-half of these tailored units (about 57 percent) were small, containing 10 or fewer soldiers. In addition to extracting key capabilities, tailored units have also been used to address personnel shortages in deploying units. The Army has also changed the mission, organization, and tactics of some deploying units, issuing them new or different equipment and adding personnel to meet combatant commander requirements. For example, the 30th Infantry (Mechanized), an enhanced separate brigade that deployed to Iraq in the spring of 2004, was directed to deploy as a motorized brigade combat team with humvees instead of with all of its assigned heavy-tracked equipment such as Bradley fighting vehicles and tanks. To accomplish this change, the unit required an infusion of personnel because “light” units require more personnel than “heavy” units. In addition, the unit underwent additional training on operating and maintaining the newly issued equipment. This unit was operating in Iraq in its new, lighter configuration at the time of this report. To ready deploying units, the Army National Guard had to transfer personnel from nondeploying units, but in doing so, it degraded the readiness of those nondeploying units. This, in turn, challenges the Guard’s efforts to provide ready forces for future operations. To be ready to deploy, units need to have a sufficient number of soldiers who are qualified to deploy. Under the tiered-readiness policy, many National Guard units do not have all the qualified soldiers they need to be ready for their missions. However, in recent operations, the Army’s deployment goal for Guard combat units has been to be fully manned and for unit personnel to be fully qualified for their positions. To meet the requirements for units fully manned with qualified personnel, the Guard transferred qualified soldiers from nondeployed units. By July 2004, the National Guard had initiated over 74,000 personnel transfers to meet the combatant commander’s needs. There are a number of reasons that Army National Guard units may not have all of the personnel they need to deploy for their warfighting missions. First, the Army National Guard is not funded to fully man all its units to deployment standards. Second, some soldiers assigned to a unit may not have completed required training. As of May 2004, over 71,000 Army National Guard soldiers were not fully trained for their positions. Finally, soldiers may be unable to deploy overseas for personal reasons, such as medical or dental problems, family issues, or legal difficulties. As of June 2004, there were over 9,000 soldiers in the Army National Guard who were identified as nondeployable. 
When two of the Army National Guard’s enhanced separate brigades, some of its most ready units, were activated for rotation to Iraq in 2003, only 74 percent of their required personnel were qualified for their assigned positions and deployable, leaving a shortfall of over 2,100 soldiers that had to be filled from other units. To minimize transfers of qualified soldiers from other units, between April and June 2004 the Army Guard ordered 700 untrained soldiers to report for training so they could become fully qualified in their positions before their units were activated for overseas operations. However, the Guard has not been able to address all of its shortfalls in this manner. For example, the Army National Guard is preparing a combat division headquarters and a number of its support units for deployment to Iraq in 2005. When the 42nd Infantry Division was alerted, it lacked 783 qualified personnel—about 18 percent of the total personnel required—to meet deployment requirements. As of June 2004, the National Guard was only able to fill 415 of these positions through transfers of personnel from other units, leaving 368 positions unfilled. Army National Guard officials expect that the active Army will have to find personnel to address these shortfalls. According to National Guard officials, additional soldiers with medical, dental, legal, or family issues may be identified as nondeployable after they are mobilized, so the number of personnel needed may rise. As overseas operations continue, it is becoming increasingly challenging for the Army National Guard to ready units because the number of soldiers who have not been deployed and are available for future deployments has decreased and the practice of transferring qualified personnel to deploying units has degraded readiness of nondeployed units. Our analysis of the decline in Army National Guard readiness between September 2001 and April 2004 showed that the most frequently cited reasons for the decline in personnel readiness of nondeployed units were that personnel were already deployed or not available for deployment. Of the almost 162,000 soldiers who are available for future deployments, almost 36,000 are in nondeployable units that provide maintenance, medical, and legal support to the Army National Guard. Approximately 9,000 additional soldiers have medical or other conditions that prevent deployment, and about 28,000 soldiers will need required training before they will be available for deployment. This leaves approximately 89,000 soldiers who are currently available to deploy for overseas operations. Because DOD expects the high pace of operations to continue for the next 3 to 5 years and estimates that operations will require 100,000 to 150,000 National Guard and reserve personnel each year, the Army National Guard will likely have to alert and mobilize personnel who have been previously deployed. Because the combatant commander has required Army National Guard units to have modern, capable, and compatible equipment for recent operations, the Army National Guard adapted its units and transferred equipment to deploying units from nondeploying units. However, this adaptation has made equipping units for future operations more challenging. The Army equips units according to when it expects them to be needed in combat; thus, the “first to fight” units are given the priority for modern equipment. 
Based on post-cold war plans, it was assumed that most Army National Guard units would follow active units and that there would be sufficient time to provide them with the equipment they need for their missions before they deployed. However, when National Guard units were alerted for recent operations, they generally did not have sufficient amounts of equipment or equipment that was modern enough to be compatible with active units and to meet combatant commander requirements. For recent operations, the Army National Guard has had to fill the shortages of equipment among deploying units by transferring equipment from nondeploying units. National Guard data showed that in order to ready units deploying to support operations in Iraq between September 2002 and May 2004, the National Guard transferred over 18,000 night vision goggles, 1,700 chemical monitors, 900 wheeled vehicles, 700 radios, and 500 machine guns, among other items, from nondeploying units. As a result, by June 2004, the Army National Guard had transferred more than 35,000 pieces of equipment and had critical shortages of about 480 different types of items, including machine guns and heavy trucks. In total, the Army National Guard’s nondeployed force lacks 33 percent of its essential items and, as of June 2004, its stocks had been depleted to the point where it had to request that the Army provide about 13,000 pieces of equipment for its deploying units. Equipment shortages were worsened when the combatant commander and the National Guard Bureau barred Army National Guard units from deploying with items that were incompatible with active Army equipment or that could not be supported with spare parts in the area of operations. For example, Army National Guard units equipped with 20- to 30-year-old radios were barred from taking them to the Iraqi area of operations because they cannot communicate with the Single Channel Ground Air Radio System (SINCGARS) used by other Army units. Likewise, some of the older rifles the Guard uses for training have been barred because they use different ammunition than those of the active Army units. Moreover, Guard units alerted for the earlier deployments were not equipped with the most modern body armor and night vision goggles that the combatant commander subsequently required for deploying units. After units were identified for mobilization and deployment, the Army took some steps to augment existing Guard equipment using supplemental wartime funding. Our analysis of DOD data showed that the equipment readiness of nondeployed units has continued to decline and, as overseas operations continue, it has become increasingly challenging for the National Guard to ready deploying units to meet warfighting requirements. As reported by the National Guard, 87 percent of the 1,527 reporting units in fiscal year 2001 met their peacetime equipment readiness goals, which are often lower than wartime requirements. By fiscal year 2002, only 71 percent of the nondeployed reporting units met their peacetime equipment goals. The report attributed this decrease in readiness posture to equipment shortages and transfers among nondeployed units to fill shortages in other units. Initially, the Guard managed these transfers so that nondeploying units shared the burden of providing resources to deploying units and could remain at their planned readiness levels. 
However, this became increasingly difficult as the number of activations mounted, and, in November 2003, the Director of the Army National Guard issued a memorandum directing the states to transfer equipment to deploying units regardless of the impact on the readiness of remaining units. The Army and the National Guard have recognized that the post-September 11 security environment requires changes to the Guard's structure and an improvement in its readiness posture. However, in the near term, the Army National Guard will have difficulty improving its readiness for projected operations over the next 3 to 5 years under current plans, which assume the Guard will be funded at peacetime readiness levels. Over the longer term, DOD, the Army, and the National Guard have initiated, but not completed, several restructuring efforts, including moving some positions with high-demand skills out of the Guard and into the active force, creating new standardized modular units that can respond flexibly to combatant commander needs, and establishing predictable deployment schedules for units. To improve readiness, the Army National Guard seeks to increase the amount of full-time support and the number of qualified personnel in its units. However, these measures will require additional funding. At this time, it is not clear whether these planned actions will fully address the difficulties the Army National Guard has experienced in supplying the numbers and types of fully ready forces needed for the global war on terrorism. The Guard may be challenged in the near term to deploy units and sustain the high pace of operations required by the global war on terrorism with its current resources. While the costs of activated Army National Guard units in wartime are borne by the active Army with funds provided through supplemental appropriations, for recent operations the Guard has had to ready its forces for mobilization using its existing resources. The Army National Guard received $175 million in supplemental funding in fiscal year 2003 for personnel and operation and maintenance, but it did not receive additional fiscal year 2004 funding to ready nondeployed units so they could train and gain proficiency before they were mobilized. In fiscal year 2004, $111 million was reprogrammed from the Army National Guard personnel appropriation account to the Army National Guard operation and maintenance appropriation account to support units' requirements before they were mobilized. These funds were available because mobilized Army National Guard personnel are paid from the active Army military personnel appropriation. The President's 2005 budget submission and long-term funding plan are still based on the tiered-readiness approach. Because the Army is in the process of developing a new budget and long-term funding plan, it is not clear at this time whether future budget submissions will include funding to support increased readiness levels. For the long term, DOD and the Army are changing some units' missions to increase the availability of certain high-demand Army National Guard units, such as military police and transportation units. They have also taken steps to rebalance skills among the active and reserve forces to decrease the burden of repeated deployments on reserve personnel whose skills are in great demand.
To make more efficient use of its forces, DOD is also planning to move military personnel out of positions involving duties that can be performed by civilians or contractors and into high-demand specialties, as well as taking advantage of technological advances to reduce personnel needs. However, these initiatives are in the early stages of implementation, and the extent to which they will alleviate the strain on Army National Guard forces from the continuing high pace of operations is uncertain. In April 2004, the Army published The Army Campaign Plan, which sets out specific objectives and assigns responsibilities for actions to be taken to plan and execute ongoing operations and transform forces for the future. A key element of the Army's plan to transform its forces, including National Guard units, is to restructure into "modular" units that can be tailored to the specific needs of combatant commanders in future operations. After restructuring, the Army National Guard expects to have 34 smaller, lighter brigades instead of its current 38 brigades. Current plans call for converting Army National Guard units, as they return from overseas operations, into brigades that share a common basic organization with their active counterparts by 2010. Further, the Army has a goal of restructuring its forces so that units will be authorized the qualified personnel they require. However, the Army's current plans do not completely address how the Guard's equipment will be modernized to make it compatible with active Army equipment, nor do they include a detailed schedule and funding needs for restructuring all Guard units, including support units. In addition, one of the Army National Guard's initiatives to improve readiness by increasing the number of full-time support personnel within its units is still based on its tiered-readiness model, which resources some Guard units well below requirements. Under this initiative, the Army National Guard plans to increase the percentage of full-time personnel gradually, to about 71 percent of the personnel it needs by 2012. Full-time Guard members enhance unit readiness by performing tasks such as monitoring soldiers' readiness, recruiting and training personnel, and maintaining aircraft, supplies, and equipment. However, for fiscal year 2003, the Army National Guard was funded for only 59 percent of the full-time personnel it needs to be fully manned, as compared to the Air National Guard, which is staffed at 100 percent of its required full-time support personnel. Without sufficient full-time personnel, these tasks, which are critical to unit readiness, suffer. The Army National Guard also plans to increase the number of qualified personnel in each unit by spreading its soldiers over fewer, and in some cases smaller, units. According to Army National Guard officials, this strategy could increase the number of qualified personnel to an estimated 85 percent of unit requirements. However, Army deployment goals for combat units call for 100 percent of deploying soldiers to be qualified in their positions. Therefore, the Guard will likely still need to transfer personnel when units are called to deploy. To avoid overtaxing the force and to improve deployment predictability, the Army has developed a proposal to establish a rotational deployment cycle for its Army National Guard units that would meet the Secretary of Defense's goal of no more than one deployment every 6 years.
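To give a rough sense of what such a cycle implies for availability, the sketch below spreads the Guard's planned 34 brigades evenly across a 6-year rotation. The even spread is an assumption made here for illustration, not an Army planning figure.

    # Illustrative sketch: units spread evenly across a 6-year rotational
    # cycle mean roughly one-sixth of the force is in its deployment window
    # in any given year. The even spread is an assumption for illustration.
    planned_brigades = 34    # Army National Guard brigades after restructuring
    cycle_years = 6          # goal: no more than one deployment every 6 years

    available_per_year = planned_brigades / cycle_years
    print(f"Brigades in the deployment window each year: {available_per_year:.1f}")  # ~5.7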
In conjunction with this proposal, preliminary Army plans call for equipping Guard units that are 4 to 5 years away from an expected deployment at levels well below wartime readiness standards. However, this model may be difficult to achieve while the high pace of operations continues. The Air National Guard, like the Army National Guard, has also adapted to meet new warfighting requirements since September 11. It made several adjustments to accommodate the higher pace of operations, including extending tours of duty for some Guard personnel, calling up others earlier than expected, and recently extending its rotational cycle to lengthen the amount of time personnel are available for deployment. However, the demands of ongoing operations have not been as detrimental to the Air National Guard for two reasons. First, along with the Air Force Reserve, the Air National Guard is funded to maintain readiness levels similar to those of the active Air Force and is expected to be able to deploy within 72 hours. Second, the Air National Guard has not been required to sustain the same high level of activations as the Army National Guard. Air National Guard activations returned to the pre-September 11 level of about 10,000 by October 2003 and have since fallen to about 6,000, while the Army National Guard's activations have continued to rise. Between 2001 and 2003, Air National Guard unit readiness declined as a result of high utilization of personnel and equipment, but Congress provided additional funding to stabilize Air National Guard readiness. To meet increased personnel requirements during the initial phases of current operations, Air National Guard officials activated and deployed personnel earlier than planned under their standard rotational deployment cycle. In January 2003, Air Force officials said that over 320 personnel, including some Air National Guard members, deployed about 45 days earlier than usual. The Air Force also disrupted the normal rotation cycle by extending tour lengths to meet increased requirements, extending the duty tours of selected Air National Guard personnel from the usual 90 days to up to 179 days. For example, during the preparation phase for Operation Iraqi Freedom, the Air Force extended the tours of almost 2,400 personnel, including some Air National Guard personnel. To accommodate ongoing operational requirements, the Air Force announced in June 2004 that most Air National Guard personnel scheduled to deploy in future cycles would spend 120 days in the deployment phase of their cycle. To accommodate the increased tour lengths, the new rotational cycle will be 20 months long: Guard personnel will train for 16 months and be eligible for deployment for 4 months. Overall, Air National Guard unit readiness has declined since September 2001 due to the increased demands on people and usage of equipment. Our analysis of DOD data showed that commanders attributed this decline in readiness primarily to personnel and equipment shortages, damaged or inoperative equipment, and incomplete training. In addition, Air National Guard officials in states we visited told us that meeting current operational demands has resulted in fewer aircraft being available for training at home and increased maintenance requirements on aircraft being used in current operations.
However, Air National Guard officials told us that equipment readiness rates remained steady during fiscal year 2004, and they attributed this stabilization to supplemental funding of $20 million in fiscal year 2003 and $214 million in fiscal year 2004 for operation and maintenance activities. While Army and Air National Guard forces have, thus far, supported the nation's homeland security needs, the Guard's preparedness to perform the homeland defense and civil support missions that may be needed in the future cannot be measured because its role in these missions is not defined, requirements have not been identified, and standards against which to measure preparedness have not been developed. Since September 11, the Guard has performed a number of missions, including flying patrols over U.S. cities and guarding critical infrastructure. However, state and National Guard officials voiced concerns about the preparedness and availability of Guard forces as overseas deployments continue at a high pace. Even though plans and requirements for the homeland missions the Guard will support are not fully developed, DOD and the National Guard Bureau have taken some actions to address potential needs. Since September 11, Army and Air National Guard forces have supported a range of homeland security missions, primarily with the equipment DOD has provided for their federal missions. For example, Army National Guard units helped guard the nation's borders and airports in the aftermath of September 11, and they continue to guard key assets such as nuclear power plants. Also, the Army National Guard is currently providing security at U.S. military installations; as of June 2004, about 5,500 Army National Guard soldiers were guarding Air Force bases in the United States. Similarly, Air National Guard units continue to fly patrol missions over the United States. We performed case studies in four states to examine how the Guard has supported new homeland security missions. In all four states we visited (New Jersey, Oregon, Georgia, and Texas), Guard officials reported that their units supported homeland tasks for both state governors and federal authorities. The following are examples of how the Army National Guard has supported homeland missions since September 11: The New Jersey Army National Guard provided security for bridges, tunnels, and nuclear power plants for the state governor during 2003 and continues to provide security at two nuclear power plants. The Oregon Army National Guard provided security at federal installations, such as the Umatilla Chemical Depot and Fort Lewis, Washington, in 2002 and 2003. The Texas Army National Guard performed border security, assisting U.S. Customs agents from October 2001 to November 2002, and provided security at Air Force installations and state nuclear power plants from October 2001 to October 2002. The Georgia Army National Guard provided airport security almost immediately after September 11 and was still guarding Army bases and Air Force facilities at the time of our visit in December 2003. The Air National Guard has also been called on to perform new missions, such as flying air patrols and providing radar coverage for the continental United States. Air National Guard units in the states we visited played key roles in homeland defense missions.
For example: The 177th Fighter Wing in New Jersey, which is strategically located near major cities such as New York, Philadelphia, Boston, Baltimore, and Washington, D.C., took on the additional mission of flying patrols over these cities. Through early November 2003, the 177th had flown 1,458 air patrol missions. The 147th Fighter Wing in Texas flew a total of 284 patrol missions over New York City and Washington, D.C., between December 2001 and March 2002. Since September 11, the unit has also flown patrols over Houston and the Gulf Coast and in support of special events such as the Super Bowl and the Winter Olympics. Despite the Guard's response to homeland needs, officials in all of the states we visited expressed concerns about their Guards' preparedness for homeland security missions, especially given the high level of National Guard deployments to operations outside of the United States. As figure 2 illustrates, at the beginning of June 2004, one-half of the 50 states and 4 territories had more than 40 percent of their Army National Guard forces alerted, mobilized, or deployed for federal missions. Montana and Idaho both had high proportions of soldiers alerted, mobilized, or deployed, at 80 percent and 96 percent, respectively.

[Figure 2 (map): percentage of each state's and territory's Army National Guard alerted, mobilized, or deployed as of early June 2004; state-by-state values omitted.]

Figure 3 illustrates the percentage of Air National Guard personnel who volunteered or were mobilized or deployed as of the end of May 2004. In contrast to the Army National Guard, only two states, New Hampshire and Nevada, had more than 20 percent of their Air National Guard mobilized or deployed, while 43 of the 54 states and territories had less than 10 percent of their Air National Guard activated.

[Figure 3 (map): percentage of each state's and territory's Air National Guard volunteered, mobilized, or deployed as of the end of May 2004; state-by-state values omitted.]

Some Guard officials also expressed concerns that their states' Guards had not received additional federal funding to support homeland security missions, even as these missions continue and the homeland security advisory system threat level has risen. While the states have funded some homeland security activities, such as guarding critical infrastructure, and purchased some equipment, such as decontamination equipment, officials said that homeland security requirements must compete with other needs in limited state budgets. Furthermore, state officials said that the Guard is generally not eligible for funding from the Department of Homeland Security because its grants are limited to "first responders," such as police or firefighters. Officials in all four states we visited raised concerns about their Guards' readiness for homeland security and other state missions. For example: New Jersey Guard units that responded to a terrorist threat alert in December 2003 reported that they lacked some essential equipment, such as humvees, night vision equipment, cold weather gear, chemical protective suits, and nerve agent antidote. The state paid for some essential equipment for its Guard forces during this time on an emergency basis. At the time of our visit, New Jersey was preparing to deploy large numbers of its state Guard personnel overseas and was determining how it would respond to another terrorist threat with almost 60 percent of its forces unavailable.
Georgia officials told us that hosting the 2004 International Economic Summit of Eight Industrialized Nations, known as the G-8 Summit, in June 2004 increased Georgia's security missions, such as aerial reconnaissance and surveillance, at a time when its Army National Guard aviation units were deployed overseas. National Guard units from 12 other states participated. The state also received federal funds for the G-8 Summit, which reimbursed the state for the costs of activating Guard personnel. In addition, recognizing the Guard's unique role in homeland security, active component forces were commanded by a National Guard general for this operation—a new arrangement designed to provide unity of command for homeland missions that defense officials stated might serve as a model for the future. In 2002, the state of Oregon called up more than 1,400 Army National Guard soldiers to respond to one of the worst forest fire seasons in a century. Oregon officials said that because many of the state's Guard forces and much of its equipment were deployed, leaving only limited engineering capability in the state, Oregon would not be able to provide the same level of support to civilian authorities if similar circumstances were to recur. All of the Texas Guard's aviation assets that would be needed to fight fires and all of the state's military police were deployed at the time of our visit. However, Texas officials said that the state had been able to meet its homeland security needs, even at the height of its Guard's overseas deployments, because its largest Army National Guard unit had not been fully deployed and, as a large state, Texas had ample state emergency response capability. States are developing plans and examining the resources currently available to them to address homeland security needs. For example, each state is developing a plan for protecting its infrastructure sites. Additionally, most states have entered into mutual assistance agreements that may provide them access to another state's National Guard forces in times of need. These agreements, known as Emergency Management Assistance Compacts, are typically used to facilitate access to additional forces for natural disaster response. However, it is not clear whether these arrangements will always meet the states' needs for homeland security forces or capabilities because, under the compacts, states can withhold their forces if they are needed at home. This situation occurred in one of our case study states. According to state officials, New Jersey has faced an elevated terrorist threat due to specific threats against the state as well as its proximity to New York City. The officials said they requested access to another state's Weapons of Mass Destruction Civil Support Team on three occasions prior to 2004. On two occasions, the request was not granted because officials in the team's home state determined that the team was needed at home. When New Jersey made a third request, in response to a specific and credible terrorist threat, access was approved. DOD's Office of the Assistant Secretary of Defense for Homeland Defense and the Northern Command are charged with leading DOD's efforts in homeland defense, and while they have taken some actions, they have not completed developing requirements or preparedness standards and measures for the homeland missions in which the National Guard is expected to participate. DOD plans to publish a comprehensive strategy for homeland defense.
Until the strategy is finalized, the Northern Command will not be able to complete its planning to identify the full range of forces and resources needed for the homeland missions it may lead or for the civil support missions in which active or reserve forces should be prepared to assist federal or state civilian authorities. Without this information, policy makers are not in the best position to manage risks to the nation's homeland security by targeting investments to the highest priority needs and ensuring that the investments are having the desired effect. While the Guard has traditionally undertaken a wide variety of missions for the states, it is organized, trained, and equipped to perform a warfighting mission. DOD measures the readiness of its forces for combat missions by identifying the personnel and equipment required to successfully undertake a mission and assessing the extent to which units have the resources they need. Typically, Guard forces are expected to perform civil support missions with either the resources supplied for their warfighting missions or equipment supplied by the state. Guard officials said that units have supported state missions with capabilities such as aviation, military police, and medical support as needs have arisen. However, in the post-September 11 environment, Guard forces may be expected to perform missions that differ greatly from their warfighting or traditional state missions and that may require different equipment, training, and specialized capabilities than they currently possess. Homeland missions, such as providing large-scale critical infrastructure protection or responding to weapons of mass destruction events in the United States, could differ substantially from the conditions expected on the battlefield or from more traditional state missions, such as responding to natural disasters or civil disturbances. For example, New Jersey units that responded to a terrorist threat alert in December 2003 reported that they lacked some essential equipment, such as humvees, night vision equipment, cold weather gear, chemical protective suits, and nerve agent antidote. In addition, state officials said that other items, such as pepper spray, which are not routinely supplied to all types of units for their warfighting mission, might be useful for potential homeland missions involving crowd control. New Jersey subsequently paid for some essential equipment for its forces on an emergency basis. Until the requirements for personnel and equipment are better defined, DOD cannot measure how prepared Guard forces are for the missions they may be called to undertake. To finalize its plans, the Northern Command will have to coordinate with federal agencies, such as the Department of Homeland Security, and with state emergency management offices to ascertain their needs for Guard support. Furthermore, it will have to balance the needs for National Guard forces at home and overseas. Since 1999, DOD has maintained full-time Guard forces in Weapons of Mass Destruction Civil Support Teams that are dedicated to homeland security missions. These teams are composed of 22 full-time personnel, are maintained at the highest readiness levels, and can respond rapidly to support civil authorities in an event involving a weapon of mass destruction. Their role is to assist local officials in determining the nature of an attack, provide medical and technical advice, and help identify follow-on federal and state assets that might be needed.
Congress has authorized at least one team for each state and territory. Currently, 32 teams are fully operational, and the remaining 23 are expected to be operational by 2007. These teams are federally funded and trained but perform their mission under the command and control of the state governor. The National Guard Bureau has proposed additional initiatives, in varied stages of implementation, that are intended to further prepare states to meet homeland security needs. For example, the National Guard Bureau has: Set up a pilot program in April 2004 in 6 states (California, Colorado, Georgia, Minnesota, New York, and West Virginia) to jointly assess with state officials critical infrastructure protection policy, tactics, procedures, and implementation. Established a regional task force to provide 12 states with the capability to respond to a weapon of mass destruction event. These Guard forces are designed to locate and extract victims from a contaminated environment, perform mass casualty and patient decontamination, and provide medical triage and treatment in response to such an event. The 12 participating states are New York, Massachusetts, Pennsylvania, West Virginia, Illinois, Missouri, Florida, Texas, Colorado, California, Washington, and Hawaii. Proposed an initiative to distribute Guard personnel with key capabilities, including aviation, military police, engineering, transportation, medical, chemical, and ordnance, to each state and territory. When stationing personnel with these capabilities in a state or territory is not possible, the National Guard Bureau will try to maintain all capabilities within the geographical region. Developed a proposal for rotational deployment of Guard forces that would enable each state to retain 50 percent of its Guard in the state to respond to homeland security missions and support civil authorities, while 25 percent of the state's forces deploy and 25 percent prepare for future deployments. While these initiatives would provide enhanced homeland security capability in the National Guard, they will require coordination with the Army and the Air Force as well as with the states, and they might face implementation challenges. For example, the Chief of the National Guard Bureau has developed a proposal to station a mix of forces with skills useful for state missions within each state and has presented the proposal to state governors. However, the Army, the Air Force, Congress, and others are also involved in making such decisions. Similarly, the National Guard's proposal to retain 50 percent of a state's Guard at home for homeland security and civil support missions has not been implemented and could be difficult to achieve during periods of high military operations. Officials from the U.S. Army Forces Command, the Army command that selects Army Guard personnel for federal activation, said that while they try to minimize the impact of federal mobilizations on the states, this becomes increasingly difficult as the level of federal activations rises. The September 11 terrorist attacks and the global war on terrorism have placed new demands for ready forces on the National Guard—especially the Army National Guard—for overseas, homeland security, and homeland defense operations. At the same time, it is apparent that the Army National Guard's structure as a follow-on force to the active Army is not consistent with its current use as an operational force.
The current demands for large numbers of fully manned and equipped forces to support overseas operations have forced the Guard to transfer personnel and equipment from nondeploying units to deploying units, degrading the readiness of the nondeployed units. This continued decline in the readiness of nondeployed units hinders the Army National Guard's ability to continue to provide, in the short term, the ready forces that DOD estimates will be needed to meet operational needs over the next 3 to 5 years. However, DOD's current budget continues to fund the Guard at peacetime levels, and it is not clear whether future budgets will include funding to improve readiness. In the longer term, while DOD is reevaluating its strategy for the new security environment, it is important for it to decide what the role of the National Guard will be in the 21st century. This decision is important because it will determine the missions for which the Guard will have to prepare, the number and types of units it will need, and how much personnel, equipment, and training it should be provided. Furthermore, until DOD establishes the Guard's role in the post-September 11 environment and develops a strategy to prepare its forces to meet new demands, it cannot be sure that it is best managing risks by investing its resources in the highest priority needs, and Congress, in turn, will not have detailed information on which to base funding and policy decisions. Continuing to structure and fund the Guard under current policy will result in continued personnel transfers and readiness declines for its units, which may hamper its ability to sustain much-needed Guard involvement in the global war on terrorism over the long term. At the same time that the Guard's overseas missions have increased, reducing the personnel and equipment available for state missions, homeland security needs have also increased. However, DOD has not fully defined what role the National Guard will have in the homeland missions DOD will lead or support, or how it will balance this role with the Guard's increased participation in overseas operations. Absent a clearly defined role for all its homeland missions, the Guard cannot identify the requirements for successfully executing this role or the standards and measures it will use to assess its preparedness. Until it has these standards and measures, DOD does not have the means to determine whether the Guard is prepared to meet homeland security needs with its current structure and assets. As such, policy makers are not in the best position to manage the risks to the nation's homeland security by targeting investments to the highest priority needs and ensuring that they are having the desired effect. We recommend that the Secretary of Defense direct the Secretary of the Army to develop and submit to Congress a strategy that addresses the Army National Guard's needs for the global war on terrorism, including the Army National Guard's anticipated role, missions, and requirements for personnel and equipment in both the near and long term. The near-term portion of the strategy should address the current decline in readiness for overseas missions and the Army National Guard's plans to provide the ready forces needed for the global war on terrorism over the next 3 to 5 years.
Specifically, it should include an analysis of how support for current operations will affect the readiness of nondeployed Army National Guard forces for future overseas and domestic missions, and a plan to manage the risk associated with the declining readiness of nondeployed Army National Guard forces, including identifying funding for any personnel and equipment required to mitigate unacceptable levels of risk. The long-term portion of the strategy should detail how the Army plans to restructure the Guard and provide it the resources—personnel, equipment, and training—consistent with its 21st century role, including: how the Army National Guard will be restructured to support future missions and ensure operational compatibility with active forces; the time frames for implementing restructuring actions; and the resources needed to achieve compatibility with active forces and the appropriate level of readiness for their missions. As DOD completes its homeland defense strategy and the Northern Command refines its concept and operational plans for homeland defense and support to civil authorities and defines requirements, we recommend that the Secretary of Defense direct the Under Secretaries of Defense for Policy and for Personnel and Readiness, in consultation with the Chairman of the Joint Chiefs of Staff, the Commander of the U.S. Northern Command, the Commander of the U.S. Pacific Command, the Chiefs of Staff of the Army and the Air Force, the Chief of the National Guard Bureau, and appropriate officials in the Department of Homeland Security, to take the following four actions: (1) establish the full range of the National Guard's homeland missions, including those led by DOD and those conducted in support of civilian authorities; (2) identify the National Guard's capabilities to perform these missions and any shortfalls in the personnel, equipment, and training needed to perform them successfully; (3) develop a plan that addresses any shortfalls of personnel, equipment, and training, assigns responsibility for actions, establishes time frames for implementation, and identifies required funding; and (4) establish readiness standards and measures for the Guard's homeland security missions so that readiness for these missions can be systematically measured and accurately reported. The Assistant Secretary of Defense for Reserve Affairs provided written comments on a draft of this report. The department generally agreed with our recommendations and cited actions it is taking to implement them. DOD's comments are reprinted in their entirety in appendix II. DOD partially agreed with our recommendation that DOD develop and submit to Congress a strategy that addresses the Army National Guard's short- and long-term needs for the global war on terrorism, including the Army National Guard's role, missions, and requirements for personnel and equipment, and its plans to manage the risk associated with the declining readiness of nondeployed Army National Guard forces. In its comments, DOD said that the Army has conducted the recommended analysis, developed a plan as outlined in the Army Campaign Plan, and communicated its plan to numerous members of Congress. We agree that the Army Campaign Plan is a significant step toward addressing National Guard readiness problems because it identifies goals and objectives and assigns responsibility for actions to transform the Army's forces.
However, we believe the Army Campaign Plan does not fully meet the intent of our recommendation because it lacks specificity about how the Army will address the readiness of nondeployed Army National Guard forces in the near term, how all Guard units will be converted to the modular design, and how the Guard's equipment will be modernized to make it compatible with active Army equipment. Furthermore, DOD has not identified the funding needed for restructuring all Guard units, including support units. Therefore, we believe the Army should develop more detailed plans to fully implement our recommendation. In its comments, DOD said that the Army agrees that it should continue its analysis to identify and minimize readiness impacts on the current force. DOD concurred with our recommendation to establish the full range of the National Guard's homeland missions, to identify the capabilities needed to perform those missions and develop a plan to address any shortfalls, and to establish readiness standards and measures for the Guard's homeland security missions. However, in its comments, DOD said it would take a different approach to accomplishing these tasks than we recommended. Rather than having the Assistant Secretary of Defense for Homeland Defense take the lead in all four areas, as we had originally recommended, DOD said that the Under Secretary of Defense for Policy and the Under Secretary of Defense for Personnel and Readiness, working in close coordination, should take the lead in implementing the actions we recommended. We believe the approach DOD proposes meets the intent of our recommendation, and we have modified the wording of our recommendation to reflect the proposed change in organizational responsibilities. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 7 days from the date of this letter. We will then send copies to the Secretary of Defense; the Secretaries of the Army and the Air Force; the Chief, National Guard Bureau; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4402. Major contributors to this report are listed in appendix III. We interviewed officials in the Army National Guard, the Air National Guard, the National Guard Bureau, and the Department of the Army and Department of the Air Force headquarters. We supplemented this information with visits to several Department of Defense (DOD) offices, including the Office of the Assistant Secretary of Defense for Reserve Affairs; the Office of the Chairman, Joint Chiefs of Staff; and Joint Force Headquarters, Homeland Security. We also developed case studies of recent federal and state National Guard operations in four states: Georgia, New Jersey, Oregon, and Texas. The states were chosen to represent a mix of geographic areas, Air and Army National Guard units with different specialties, and units that had been activated or expected to be activated for state or federal missions. In each state, we visited the Adjutant General and offices within the joint National Guard headquarters. We also interviewed leaders from a field artillery battalion, an armor battalion, two enhanced brigades, an air control wing, an airlift wing, an air-refueling wing, and three fighter wings.
To examine the National Guard's warfighting requirements in the post-September 11, 2001, security environment, we obtained and analyzed data on state and federal activations of the Army and the Air National Guard before and after September 11, 2001. We supplemented this with interviews, briefings, and documentation from officials in the four case study states and from the National Guard Bureau, the U.S. Army Forces Command, the First Air Force, and the U.S. Air Force Air Combat Command and Air and Space Expeditionary Force Center. To examine the ways in which the National Guard has adapted to its new missions, we interviewed officials in the four case study states and officials at the Army mobilization stations at Fort Hood, Texas; Fort Benning, Georgia; and Fort Dix, New Jersey; and at the First and Fifth Continental United States Armies. To identify Guard usage trends and stressed capabilities, we analyzed DOD's personnel tempo database, Army National Guard and Air National Guard data on the types of units mobilized, and information from the Army National Guard on the transformation of field artillery and other support units into military police and security force units. We obtained information on personnel and equipment transfers from the National Guard Bureau and information on equipment shortages from DOD publications and reports. We reviewed equipment data, interviewed data sources, and obtained information on data collection methods and the internal control measures applied to the data. We determined the equipment data were sufficiently reliable for our objectives. We also reviewed documents on planned changes to the Army Guard's force structure, such as the Army Campaign Plan and the Army Transformation Roadmap, and discussed personnel, training, and equipment issues with unit, state, Guard Bureau, and mobilization station officials and force providers. To assess the National Guard's emerging homeland security needs, in each of the four case study states we interviewed Guard homeland security officials and leaders of Army and Air National Guard units with recent homeland security experience. We also met with officials from the National Guard Bureau (Homeland Defense), the Department of the Army, three Weapons of Mass Destruction Civil Support Teams, the Air Combat Command and Air and Space Expeditionary Force Center, the Army Forces Command, the Office of the Deputy Assistant Secretary of Defense for Reserve Affairs (Military Assistance to Civilian Authorities) (now part of the Office of the Assistant Secretary of Defense for Homeland Defense), the Joint Director of Military Support, and the Joint Task Force, Civil Support. We also obtained information from the U.S. Joint Forces Command and reviewed unclassified, publicly available documents from the U.S. Northern Command. In addition, we reviewed the National Guard's role in rotation plans for future operations. We identified the challenges facing DOD, the states, and Congress in organizing and equipping the Guard for both overseas and homeland security missions based upon our analysis of the Guard's current status and discussions with National Guard officials. We conducted our review between April 2003 and September 2004 in accordance with generally accepted government auditing standards and determined that the data were sufficiently reliable to answer our objectives.
For example, we interviewed data sources about how they ensured their own data accuracy and reviewed their data collection methods, standard operating procedures, and other internal control measures. We reviewed available data for inconsistencies and, when applicable, performed computer testing to assess data validity and reliability. In addition to the persons named above, Suzanne Wren, Barbara Gannon, James Lewis, Tina Morgan, Jacquelyn Randolph, V. Malvern Saavedra, Alissa Czyz, Kenneth Patton, Jennifer Popovic, and Jay Smale made major contributions to this report.
The September 11, 2001, terrorist attacks and the global war on terrorism have triggered the largest activation of National Guard forces since World War II. As of June 2004, over one-half of the National Guard's 457,000 personnel had been activated for overseas warfighting or domestic homeland security missions in federal and state active duty roles. In addition to increased usage, the Guard has also experienced long deployments and high demand for personnel with specific skills, such as military police. The high pace of operations and the Guard's expanded role since September 11 have raised concerns about whether the Guard is capable of successfully performing its multiple missions within existing and expected resource levels, especially given the challenges it faces in meeting future requirements. GAO was asked to assess the extent to which the Guard is: (1) adapting to meet warfighting requirements in the post-September 11 security environment and (2) supporting immediate and emerging homeland security needs. The Army and the Air National Guard have begun adapting their forces to meet new warfighting requirements since the September 11 attacks, but some measures taken to meet short-term requirements have degraded the readiness of nondeployed units, particularly in the Army National Guard. To deploy ready units for overseas missions, the Army National Guard has had to transfer equipment and personnel from nondeploying units. Between September 11, 2001, and July 2004, the Army National Guard had performed over 74,000 personnel transfers. Similarly, as of May 2004, the Army National Guard had transferred over 35,000 equipment items to prepare deploying units, leaving nondeployed Army National Guard units short one-third of the critical equipment they need for war. The Army has developed plans, such as the Army Campaign Plan, to restructure its forces to better prepare them for future missions. However, it has not finalized detailed plans identifying equipment needs and costs for restructuring Guard units. Moreover, the Army is still structured and funded according to a resourcing plan that does not provide Guard units all the personnel and equipment they need to deploy in wartime, so the Army National Guard will be challenged to continue to provide ready units for operations expected in the next 3 to 5 years. The Air National Guard is also adapting to meet new warfighting requirements, but it has not been as negatively affected as the Army National Guard because it has not been required to sustain the same high level of operations. In addition, the Air National Guard generally maintains fully manned and equipped units. While the Army and the Air National Guard have, thus far, also supported the nation's homeland security needs, the Guard's preparedness to perform homeland security missions that may be needed in the future is unknown because requirements and readiness standards and measures have not been defined. Without this information, policy makers are not in the best position to manage the risks to the nation's homeland security by targeting investments to the highest priority needs and ensuring that the investments are having the desired effect. Since September 11, the Guard has been performing several unanticipated homeland missions, such as flying patrols over U.S. cities and guarding critical infrastructure. 
However, states have concerns about the preparedness and availability of Guard forces for domestic needs and natural disasters while overseas deployments continue at a high pace. The Department of Defense (DOD) plans to publish a comprehensive strategy for homeland security missions that DOD will lead. However, DOD has not reached agreement with multiple federal and state authorities on the Guard's role in such missions. Also, the National Guard Bureau has proposed initiatives to strengthen the Guard's homeland security capabilities. However, many of these initiatives are at an early stage and will require coordination and approval from other stakeholders, such as DOD and the states. In the absence of clear homeland security requirements, the Guard's preparedness to perform missions at home cannot be measured to determine whether it needs additional assets or training.
Since 2006, the Army has relied on the practice known as reset to restore equipment readiness through a combination of repair, recapitalization, and replacement activities. The Army defines reset as: "Actions taken to restore equipment to a desired level of combat capability commensurate with a unit's future mission. It encompasses maintenance and supply activities that restore and enhance combat capability to unit and pre-positioned equipment that was destroyed, damaged, stressed, or worn out beyond economic repair due to combat operations by repairing, rebuilding, or procuring replacement equipment." Figure 1 provides the appropriations typically used to fund various kinds of reset and definitions of the four categories that make up the Army's reset activities. In 2007, the Army established the Reset Task Force to monitor and track reset requirements and expenditures to ensure that reset dollars are properly managed and reported, and to monitor the status of reset, including repair, replacement, and recapitalization. This task force is chaired by the Office of the Deputy Chief of Staff, G-8 (Programs) Force Development Directorate, which has overall responsibility for preparing the monthly congressional reset reports and for reporting on the status of the Army reset program to Congress and the Department of the Army. In December 2009, DOD issued Resource Management Decision 700 to (among other things) manage the funding of the military services' readiness accounts and move some overseas contingency operations funding into the base defense budget to support the transition of depot maintenance requirements from overseas contingency operations funding to the base defense budget. To facilitate the implementation of this guidance within the department, Resource Management Decision 700 outlines several actions for organizations to take, including providing annual reset updates to the Office of the Secretary of Defense, Cost Analysis and Program Evaluation that incorporate an assessment of the multiyear reset liability based on plans for equipment retrograde. Retrograde is a process that includes the movement of equipment and materiel from one theater of operations to a repair facility for reset, or to another theater of operations to replenish unit stocks or satisfy stock requirements. Equipment is redistributed in accordance with theater priorities to meet mission requirements within areas of responsibility and DOD requirements worldwide. For example, in response to the February 27, 2009, drawdown order for Iraq and the surge of forces in Afghanistan in August 2010, the Army began retrograding some equipment out of Iraq to the U.S. for reset and transferring other equipment to support units deploying to Afghanistan. The initial phase of the retrograde process begins when units coordinate, through their normal chain of command in the theater of operations, to obtain disposition instructions for all theater-provided equipment that is no longer needed by the current unit or follow-on units. For example, in Iraq, units coordinated with Multi-National Forces in Iraq, the Coalition Forces Land Component Command, and U.S. Army Central Command. U.S. Army Central Command managers then conducted a vetting process to determine whether the equipment could fill other theater requirements, such as prepositioned stocks or unit requirements in Afghanistan. If the equipment did not meet these requirements, U.S. Army Central Command sent it to Kuwait for processing as theater-excess equipment expected to return to the U.S. for reset.
Also, some equipment is included on the Army's Automatic Reset Induction (ARI) list, which comprises unit equipment that automatically returns to the U.S. for depot-level reset. U.S. Army Forces Command and Army Materiel Command place equipment on the ARI list because of the extensive wear and tear expected in theater, which requires refurbishment or rebuilding, and not to address equipping requirements. Army officials said that the Reset Task Force inspects non-ARI equipment to determine the level of reset it will require. Once the inspection is complete, the equipment is shipped back to the U.S. with disposition instructions for reset or for automatic reset induction. Figure 2 illustrates the retrograde process for equipment leaving Southwest Asia and returning to the United States for reset repairs; a simplified sketch of the decision flow appears below. In 2010, the Army transferred over 43,000 pieces of equipment—such as tactical wheeled vehicles, communications equipment, and other items—from Kuwait to Afghanistan to support OEF. From 2010 through 2011, the Army retrograded over 29,000 pieces of rolling stock (vehicles) from Southwest Asia to the U.S. for reset. Since our last review, the Army has taken steps intended to better integrate and prioritize its retrograde, reset, and redistribution efforts. In our 2007 report, we noted that the Army's reset implementation strategy did not specifically target shortages of equipment on hand among units preparing for deployment to Iraq and Afghanistan in order to mitigate operational risk. At that time, the Army Force Generation implementation strategy and reset implementation guidance provided that the primary goal of reset was to prepare units for deployment and to improve next-to-deploy units' equipment-on-hand levels. We noted, however, that the Army's reset planning process was based on resetting the equipment it expected to return to the United States in a given fiscal year, not on the aggregate equipment requirements needed to improve the equipment-on-hand levels of deploying units. Therefore, we concluded the Army could not be assured that its reset programs would provide sufficient equipment to train and equip deploying units for ongoing and future requirements. We recommended that the Secretary of Defense direct the Secretary of the Army to assess the Army's approaches to equipment reset to ensure that its priorities address equipment shortages in the near term to minimize operational risk and to ensure that the needs of deploying units can be met. However, DOD did not agree with our recommendations at the time, stating that it believed the Army's overall equipping strategy was sufficient to equip units that were deployed or deploying. Although DOD disagreed with our recommendations in 2007, in the years since our review the Army has taken steps to target equipment shortages in its reset efforts. For example, in April 2008, the Army issued its Depot Maintenance Enterprise Strategic Plan, noting that filling materiel shortages within warfighting units is a key challenge facing the depot maintenance enterprise and calling for changes in processes, programs, and policies to ensure the timely repair of equipment to address these shortages.
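For illustration only, the retrograde vetting and disposition flow described above can be sketched as a small decision function. The ordering of the checks, the parameter names, and the return labels are assumptions made for this sketch, not Army data elements or business rules.

    # Illustrative sketch of the retrograde disposition flow described above.
    # Check ordering and labels are assumptions for illustration only.

    def disposition(on_ari_list: bool, fills_theater_requirement: bool) -> str:
        """Return a notional next step for a piece of theater-provided equipment."""
        if on_ari_list:
            # ARI-list equipment automatically returns to the U.S. for
            # depot-level reset because of expected extensive wear and tear.
            return "return to U.S. for depot-level reset (ARI)"
        if fills_theater_requirement:
            # Vetting: equipment that can fill prepositioned stocks or unit
            # requirements in Afghanistan is redistributed in theater.
            return "redistribute to fill theater requirement"
        # Everything else goes to Kuwait as theater-excess equipment, is
        # inspected, and is shipped back with disposition instructions.
        return "process in Kuwait, inspect, ship to U.S. with instructions"

    print(disposition(on_ari_list=False, fills_theater_requirement=True))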
The Depot Maintenance Enterprise Strategic Plan also noted the challenge of linking the equipment needs of the Army, expressed through the Army Force Generation model, with current depot maintenance production capabilities. Specifically, it called for updates to the policies and regulations governing depot maintenance priorities, including revisions to Army Regulation (AR) 750-1, the Army Materiel Maintenance Policy, and for the establishment of processes resulting in depot production that supports high-priority unit equipment needs. At the time of our review, the Army's revisions to AR 750-1, intended to enable the depot maintenance program to support the Army Force Generation readiness model, were in final review. In 2010, the Army, recognizing that retrograde operations are essential to facilitating depot-level reset and the redistribution of equipment, developed the retrograde, reset, and redistribution (R3) initiative to synchronize retrograde, national depot-level reset, and redistribution efforts. The R3 initiative was developed by the Office of the Deputy Chief of Staff, Programs, Directorate of Force Development and several other key Army commands to facilitate the rapid return of equipment from theater and to increase equipment on hand for units. In March 2011, an initial R3 equipment priority list was issued, based primarily on shortages identified by U.S. Army Forces Command. According to Army officials, this initial list was revised and reissued at the end of fiscal year 2011 to include critical equipment shortages identified and fully endorsed by all Army commands. According to officials, the Army is now using the R3 list to prioritize the retrograde and reset of about 19,000 items of rolling stock that were in Kuwait as of February 2012. Officials indicated that the Army plans to return about half of these items to the U.S. by the end of March 2012 to begin the reset process. Officials with the Army's Office of the Deputy Chief of Staff, Programs, Directorate of Force Development said that the R3 equipment list represents a consensus among Army organizations on rank-ordered priority needs and provides Army leadership with timely and accurate information for making strategic resourcing decisions to equip units for future missions. They believe the R3 equipment list will benefit the Army in making key decisions to address equipping and resourcing issues for deploying and training units as part of the Army's reset planning process. The Army plans to monitor the effectiveness of the R3 initiative to better link reset funding and execution to the Army's equipping priorities. Because it did not begin to fully implement the initiative until this year, the Army does not expect to have sufficient data to gauge the effectiveness of the R3 initiative until the fourth quarter of fiscal year 2012. As the Army continues to encounter equipment shortages and faces the prospect of future fiscal constraints and limited budgets, as well as uncertainty about the amount of equipment expected to return from theater in the near term, the need to manage and prioritize reset depot workload consistent with unit equipment needs remains critical. The Army has previously noted that the challenge with reset is linking depot maintenance capabilities with its retrograde and redistribution efforts to meet the needs of the operational Army as it moves through the Army Force Generation process. We believe full implementation of the R3 initiative would be a step in the right direction.
However, it is too early to tell whether this initiative will provide a consistent and transparent process for addressing the Army's current equipping needs or future needs that may continue beyond the end of current operations. The Army has, under its own initiative, reported its reset execution quantities to Congress since 2007, but this reporting does not capture important elements of the Army's reset efforts, including its estimated future reset costs and how much of the equipment planned for reset each year is actually reset. Specifically, the monthly reports identify the Army's cumulative progress in terms of the number of items reset in the current fiscal year to date, the number of brigades that have undergone reset, and the number of new items procured as replacements for battle-loss or damaged items. However, none of these measures indicates the status of the Army's future reset liability, which is the total repair cost being incurred through ongoing and expected deployments. Nor do the reports capture differences between the equipment the Army resets during the year and the equipment it had initially planned to reset. As a result, Congress does not have visibility over the Army's progress in addressing reset and expected total reset costs. We have reported that agencies and decision makers need visibility into the accuracy of program execution in order to ensure basic accountability and to anticipate future costs and claims on the budget. In addition, programs should institute internal controls that facilitate effective financial reporting for internal and external users. Various congressional committees have expressed concern about improving accountability and oversight of reset funding, the lack of information to support accurate planning for reset, and whether the Army is managing reset in a manner commensurate with its equipment needs and budgetary requirements. The Army has generally reported that its reset requirements may continue for two to three years beyond the end of conflict, but has not included estimated future reset costs in its reports to Congress. The Office of the Secretary of Defense, Cost Assessment and Program Evaluation has developed and tracks for each of the services a cost factor—the multiyear reset liability—that estimates the military services' future reset costs. The multiyear reset liability is the amount of money that a service would need to restore all equipment used in theater to its original, pre-conflict state over several fiscal years. This includes the cost to reset all equipment currently in theater, as well as all equipment that has returned from theater and has not yet been reset. In 2010, the Office of Cost Assessment and Program Evaluation estimated the Army's multiyear reset liability at $24 billion, and it plans to revise this figure in the summer of 2012. As the Army successfully completes certain reset actions, its overall reset liability can decrease. Conversely, other actions, such as additional deployments, buildups of equipment in theater, or an increased pace of operations, can increase it.
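The arithmetic behind such an estimate is simple to state. What follows is a minimal sketch in Python of how a multiyear reset liability figure behaves; the item names, quantities, and unit reset costs are hypothetical illustrations, not Army data or the actual Cost Assessment and Program Evaluation methodology.

# Multiyear reset liability: the cost to reset all equipment still in
# theater plus all equipment returned from theater but not yet reset.
# All item names, quantities, and unit costs here are hypothetical.
fleet = {
    # item: [in_theater, returned_not_yet_reset, unit_reset_cost_dollars]
    "armored_security_vehicle": [300, 120, 500_000],
    "light_utility_vehicle": [2_000, 900, 154_000],
}

def multiyear_reset_liability(fleet):
    return sum((in_theater + returned) * unit_cost
               for in_theater, returned, unit_cost in fleet.values())

print(multiyear_reset_liability(fleet))   # 656600000 ($656.6 million)

# Completing resets reduces the liability...
fleet["light_utility_vehicle"][1] -= 100    # 100 resets completed
# ...while new deployments or buildups in theater increase it.
fleet["armored_security_vehicle"][0] += 50  # 50 more vehicles deployed

print(multiyear_reset_liability(fleet))   # 666200000 ($666.2 million)

As the sketch shows, the estimate moves over time: it falls as reset work is completed and rises as operations place more equipment in theater.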
We believe the multiyear reset liability is a useful estimate because it provides a cost benchmark against which progress can be measured. However, the Army's monthly reset execution reports currently do not provide future reset liability cost estimates to Congress. Rather, as discussed below, the reports describe the cumulative progress being made against that fiscal year's requirement according to the number of items that the Army has reset in a given month. The Army's monthly congressional reports on reset do not provide visibility over the impact of changes in reset execution on the multiyear reset liability because they do not distinguish between planned and unplanned reset and provide only aggregate totals for broad equipment categories. Specifically, the Army's monthly reports to Congress currently provide information on reset activity, such as the number of items scheduled to be reset in the current fiscal year, the number of items scheduled for reset in the prior fiscal year that were not executed ("carry-in"), and the number of items still undergoing reset ("work in progress"). The monthly reports also include the number of items completed and the percent complete, that is, the number completed compared to the total requirement. Table 1 provides an example of what the Army reports to Congress each month, based on a report provided in fiscal year 2012. As table 1 shows, the Army reports aggregate information on reset activity in broad categories, such as Tactical Wheeled Vehicles or Aviation Support Equipment. However, for two reasons, the data do not show the true picture of the Army's progress in executing its reset plan. First, the data do not distinguish between the planned items for reset, that is, items whose funding was programmed by the Army and included in the Army's budget justification materials to Congress, and the unplanned items repaired through reset. Rather, the figures shown as "completed" include both planned and unplanned items. To illustrate this point, our analysis of Army data shows that 4,144 tactical wheeled vehicles were planned for reset in fiscal year 2010 and a total of 3,563 vehicles were executed (see table 2). According to the Army's current reporting method, this would result in a reported total completion rate of 86 percent. However, our analysis showed that only 1,647 of the items executed, or approximately 40 percent of the planned program, were items that had actually been planned and programmed for reset. More than half of the tactical wheeled vehicles reset—1,916—were items that had not been planned for reset.
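To make the difference between these two measures concrete, the following is a minimal sketch in Python using the fiscal year 2010 tactical wheeled vehicle figures above; the variable names and the planned-execution measure are ours, offered as an illustration rather than as the Army's reporting method.

# Fiscal year 2010 tactical wheeled vehicle reset (GAO analysis of Army data).
planned_items = 4_144        # items programmed and budgeted for reset
executed_items = 3_563       # items actually reset during the year
executed_as_planned = 1_647  # executed items that were also planned
unplanned_items = executed_items - executed_as_planned  # 1,916

# The Army's current reporting method counts every executed item against
# the plan, whether or not it was one of the items planned.
reported_completion = executed_items / planned_items  # ~0.86

# An alternative measure counts toward completion only planned items
# that were actually reset.
plan_executed_as_planned = executed_as_planned / planned_items  # ~0.40

print(f"{reported_completion:.0%}")        # 86%
print(f"{plan_executed_as_planned:.0%}")   # 40%

Both measures start from the same underlying counts, but the first masks the substitution of unplanned work for planned work that the second exposes.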
According to Army documents, the reset of unplanned items is due primarily to changes in, among other things, the mix and condition of equipment returning to home stations and unforeseen changes to troop commitments in theater. For example, DOD documents show that in fiscal year 2010, reset requirements were affected by the expansion of forces in Afghanistan. This force expansion also required additional equipment, which the Army supplied in part by shipping equipment that had been planned for retrograde from Iraq—and eventual reset in the United States—to Afghanistan instead. While we acknowledge such challenges, the Army's current reporting of reset execution does not permit Congress to see when deviations between planning and execution occur. Second, by reporting in broad aggregate equipment categories, the Army's reports do not give Congress visibility over reset activity for individual types of equipment. In some cases, our analysis shows that, while the overall completion percentage may be high, the picture can be significantly different when looking at individual items. For example, as discussed above, the total number of items executed during fiscal year 2010 was 86 percent of the total planned reset for the aggregate category of tactical wheeled vehicles. However, this number alone can obscure important information on the pace of reset for individual types of vehicles within the aggregate category. Table 3 offers a breakdown of the items reset in the Tactical Wheeled Vehicle category for fiscal year 2010. As table 3 shows, the actual reset activity for items labeled as "other automotive" was significantly more than planned (1,641 compared to 667), whereas the reset activity for high mobility multipurpose wheeled vehicles was significantly less than planned (895 compared to 1,966). Therefore, reporting the overall completion percentage for the category without information on the status of vehicle types does not provide transparency into the Army's progress on its total reset efforts. This information is important because it has cost implications. Specifically, while items may fall into the same category, the cost to reset can vary broadly depending on the vehicle type. For example, both the M1200 Knight (an armored security vehicle) and the M1151 HMMWV are categorized as Tactical Wheeled Vehicles in the Army's monthly reports to Congress. For planning purposes, in 2010 the Army requested over $500,000 for the repair of each M1200, while requesting about $154,000 for the repair of each M1151. However, in 2010 more M1200s were repaired than planned, thus accounting for a larger share of the budgeted reset funds. At the same time, with fewer funds remaining, some equipment planned and budgeted for repair was not reset, pushing that workload to future fiscal years. Conversely, if fewer M1200s had been reset than were planned, the roughly $500,000 estimated reset cost for each M1200 would be incurred in a future fiscal year, as the vehicles would still require reset eventually. In either case, the Army would record the actions taken within the numbers shown for the reset of Tactical Wheeled Vehicles, but the cost impact of the two scenarios would be different given the difference in estimated costs for the two items. Therefore, understanding how many items of each vehicle type have been reset is important to understanding the implications of changes in reset execution for the Army's multiyear reset liability.
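A rough sketch of this cost arithmetic, again in Python, uses the two per-item repair estimates cited above; the planned and executed quantities are hypothetical and chosen only to show how a shift in item mix moves costs even when the category-level count looks on track.

# Two vehicle types that the monthly reports fold into the single
# "Tactical Wheeled Vehicles" category. Unit repair estimates are the
# Army's fiscal year 2010 planning figures; quantities are hypothetical.
UNIT_COST = {"M1200": 500_000, "M1151": 154_000}

planned = {"M1200": 100, "M1151": 400}    # 500 vehicles planned
executed = {"M1200": 180, "M1151": 320}   # 500 vehicles executed

def reset_cost(counts):
    return sum(UNIT_COST[item] * quantity for item, quantity in counts.items())

# Category-level reporting would show 500 of 500 items: 100 percent.
print(reset_cost(planned))    # 111600000 budgeted ($111.6 million)
print(reset_cost(executed))   # 139280000 consumed ($139.3 million)

# The extra M1200s absorb funds budgeted for other planned items, so some
# planned equipment goes unreset and its cost shifts to a future fiscal year.

Without item-level detail of this kind, a cost shift like the one in the sketch is invisible in the monthly reports.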
Without information on the multiyear reset liability and additional details within current reports, Congress may not have a complete picture of both the Army's progress in meeting its reset plan and the long-term cost implications of reset. The Army needs to balance multiple factors that make reset planning and execution a complicated and challenging process. Efficient reset planning must identify the equipment that needs to be retrograded from theater, prioritized through the depots, and redistributed to units based on immediate equipment needs. Since our 2007 review, the Army has taken steps to incorporate deploying units' equipment needs into its reset planning, including the implementation of the R3 equipment list, but it is too early to tell whether this initiative will provide a consistent and transparent process. Further, decision makers in the Army and Congress could benefit from greater visibility into reset program execution in order to ensure accountability, improve planning, and anticipate future costs and claims on the budget. The Army has taken positive steps towards providing this visibility by issuing reports on its reset execution to Congress on a monthly basis. However, these monthly reports currently lack key information that could illustrate the Army's overall effectiveness at managing reset over the long term, including information by vehicle type. With more complete information on the Army's total reset efforts, Congress will be able to exercise oversight, determine whether the funding appropriated for equipment reset is being used for the planned equipment in the short term, and monitor the Army's progress in addressing its multiyear reset liability. To improve accountability and oversight, Congress should consider directing the Secretary of the Army to include, in the Army's monthly reports to Congress, status information on the percentage of equipment reset according to the initial reset plan, by vehicle type. To ensure that the Army provides information to Congress that is useful for assessing its short- and long-term reset progress, we recommend that the Secretary of the Army direct the Office of the Chief of Staff of the Army, Logistics, to take the following two actions:
Revise the monthly congressional reset reports to include the Army's multiyear reset liability, which should include the anticipated cost to reset all equipment in theater as well as all equipment returned to the United States that has not yet been reset; and
Revise the monthly congressional reset reports to include information on the percentage of equipment reset according to the initial reset plan, by vehicle type.
In written comments on a draft of this report, DOD did not concur with our two recommendations. Although DOD disagreed with our recommendation to revise the monthly congressional reset reports to include the Army's multiyear reset liability, it cited actions it plans to take that would meet the intent of our recommendation. DOD also disagreed with our recommendation to include reset information by vehicle type in its monthly reset reports to Congress. We continue to believe that this information is important to provide adequate visibility to Congress over reset and thus are including a matter for congressional consideration. DOD's comments appear in their entirety in appendix II. DOD also provided technical comments, which we incorporated as appropriate. In disagreeing with our first recommendation for the Army to include its multiyear reset liability in the monthly congressional reset reports, DOD stated that the Army's monthly reset report was intended to show the status of equipment reset activities in the year of execution. According to DOD, the Army does not plan to include the estimate of future reset liability projections in every monthly report because developing those estimates involves projecting future deployed force levels as well as major force redeployment timelines, factors that do not significantly change on a month-to-month basis. However, DOD stated that the Army plans to include its estimate of future equipment reset liability in its summary report to Congress for the fiscal year. We believe the Army's plan to report future equipment reset liabilities in its summary report for each fiscal year would meet the intent of our recommendation. DOD also disagreed with our second recommendation that the Army include in its monthly congressional reset reports status information on the percentage of equipment reset by vehicle type.
DOD stated that the Army intends to provide more detailed information on reset program adjustments in those reports, but noted that the Army does not recommend doing so by vehicle type. Specifically, DOD stated that actual monthly equipment reset production rates are extremely dynamic and that adjustments in the depots are made daily based on a number of factors. Further, DOD stated that adjustments are common across all of the nearly 800 systems that proceed through the depots for reset each year and are best summarized by the most major changes among large categories. The department further stated that the current vehicle categories in the monthly reports are adequate for this purpose, but indicated that additional explanation of major variances between planned, newly planned, and executed equipment reset would be included in future reports. However, as we reported, the broad categories do not fully capture deviations between planned and executed reset by vehicle type, and the Army did not explain what information it will include in these additional explanations. Therefore, we remain concerned that the changes in reset reporting suggested by the Army would not provide adequate visibility to Congress over planned and executed equipment reset. Consequently, we have added a matter for congressional consideration suggesting that Congress consider directing the Army to include status information on the percentage of equipment reset according to the initial reset plan, by vehicle type, in its monthly reports to Congress. We are sending copies of this report to interested congressional committees, the Secretary of Defense, and the Secretary of the Army. This report will be available at no charge on GAO's website, http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (404) 679-1808 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix III. To examine the steps the Army has taken to improve its equipment reset strategy and target equipment shortages since our 2007 report, we reviewed the Department of Defense's (DOD) comments in that report. We also reviewed Army guidance explaining the definition of reset and how it is employed to restore equipment for units to pre-deployment levels. We reviewed the Army Force Generation regulation, which establishes Army policy for institutionalizing the Army Force Generation model (which supports strategic planning, equipment prioritization, and other resourcing to generate trained and ready forces), and examined the role of reset in supporting the model by repairing equipment for units to meet future missions. We obtained and reviewed the Army Reset Execution Order, which provides guidance to the Army on reset operations. We obtained written responses to our inquiries from Army officials and conducted interviews to discuss the execution order and officials' interpretation of the roles, responsibilities, and activities required to execute the reset of equipment returning from overseas to the United States. We reviewed and analyzed reset documents associated with the execution order, which contained information on the Army's annual sustainment-level reset workload requirements estimates.
We obtained written responses to our inquiries from Army officials and conducted interviews to discuss and understand the methodology used to develop those estimates and the equipment mix and quantities expected to return from Southwest Asia to the United States for reset in the current fiscal year. We reviewed and analyzed the Army's equipment retrograde priority lists identifying equipment needing to be returned to the United States for reset, and we reviewed guidance on the retrograde of equipment to understand the methodology used to develop the lists. We analyzed the relationship between the sustainment-level reset workload requirements estimates worksheet and the retrograde priority list to determine the similarities and differences in the type and mix of equipment identified for depot-level reset. We discussed these similarities and differences with Army officials to understand how they affect the Army's ability to identify and reset the right equipment to support both deploying and training units. We held several discussions with Army officials to learn about the retrograde, reset, and redistribution (R3) initiative and how they expect this initiative might improve equipment-reset processes to better align reset efforts with unit equipment needs. We interviewed officials in the Office of the Secretary of Defense for Logistics and Materiel Readiness to obtain information about DOD's guidance on reset. To determine the extent to which the Army's monthly reset reports to Congress provide visibility over reset costs and execution, we obtained data published in the Reset Execution Order on the Army's annual sustainment-level reset workload requirements estimates for fiscal years 2007 through 2012 to determine the quantities of equipment planned for reset. We obtained reset execution data generated by the Army Materiel Command and the Army Logistics Management Program system for fiscal years 2007 through 2010 to determine the actual amount of equipment reset in support of contingency operations. We provided questions, received written responses, and interviewed Army officials to understand the reset planning and execution process and the requirements for reporting to Congress on both planned and actual reset data and budgets. We focused our analysis on the reset of Army rolling stock, which was heavily rotated in and out of Southwest Asia to support Operation Iraqi Freedom, because rolling stock accounts for the majority of the Army's depot reset funding. We compared the reset workload requirements estimates to the reset execution data, using the National Stock Number, to determine whether the data were accurate, comparable, and consistent for our review purposes. In addition, we collected and reviewed documents and data on historical and current budget execution for reset to determine the consistency between annual reset requirements and budget requests. We performed a data reliability assessment of the information systems containing the execution data and determined that the data were sufficiently reliable for the purposes of this engagement. We provided questions, received written responses, and interviewed Army officials to clarify how budget data were used and to ensure that we had a good understanding of how to interpret the data for our purposes. We also discussed with Army officials the process for tracking and reconciling reset expenditures with quantities of equipment based on planned equipment requirements.
Further, we obtained and reviewed historical and current monthly Supplemental Cost of War Execution Reports on Army reset expenditures and funding requests submitted to Congress, as well as the Army's monthly congressional reports on the quantity of equipment repaired through reset, to determine the type of information reported on reset costs and the equipment quantities repaired at the depots. We have previously reported on problems relating to the reliability of data generated from the Army's Logistics Management Program, but have not specifically reviewed the reliability of the reset depot execution data. To address each of our objectives, we also spoke with officials, and obtained documentation when applicable, at the following locations:
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics; Assistant Secretary of Defense for Logistics and Materiel Readiness; Deputy Assistant Secretary of Defense for Maintenance, Policy, and Programs
Office of the Secretary of Defense for Cost Assessment and Program Evaluation
Office of the Under Secretary of Defense (Comptroller)
Headquarters, Department of the Army: Office of the Deputy Chief of Staff, G-4 (Logistics); Office of the Deputy Chief of Staff, G-8 (Programs), Directorate of Force Development; Office of the Deputy Chief of Staff, G-3/5/7 (Strategy, Plans, and Policy); and the Army Budget Office
U.S. Army Central Command
U.S. Army Materiel Command
U.S. Army Forces Command
U.S. Army Sustainment Command
TACOM Life Cycle Management Command
We conducted this performance audit between January 2010 and May 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Cary B. Russell, (404) 679-1808 or russellc@gao.gov. In addition to the contact named above, William M. Solis, Director (Retired); Larry Junek, Assistant Director; James Lackey; Latrealle Lee; Oscar Mardis; Cynthia Saunders; John Van Schaik; Amie Steele; Michael Willems; Monique Williams; Erik Wilkins-McKee; and Gregory Pugnetti made key contributions to this report.
From 2007 to 2012, the Army received about $42 billion to fund the reset of equipment—including more than $21 billion for depot maintenance—in support of continuing overseas contingency operations in Southwest Asia. Reset is intended to mitigate the effects of combat stress on equipment by repairing, rebuilding, upgrading, or procuring replacement equipment. Reset equipment is used to supply non-deployed units and units preparing for deployment while meeting ongoing operational requirements. In 2007, GAO reported that the Army's reset strategy did not target equipment shortages for units deploying to theater. For this report, GAO (1) examined steps the Army has taken to improve its equipment reset strategy since 2007, and (2) determined the extent to which the Army's reset reports to Congress provide visibility over reset costs and execution. To conduct this review, GAO reviewed and analyzed DOD and Army documentation on equipment reset strategies and monthly Army reports to Congress, and interviewed DOD and Army officials. Since GAO's 2007 review, the Army has taken steps to improve its use of reset in targeting equipment shortages. In 2007, GAO noted that the Army's reset implementation strategy did not specifically target shortages of equipment on hand among units preparing for deployment to Iraq and Afghanistan in order to mitigate operational risk. GAO recommended that the Army act to ensure that its reset priorities address equipment shortages in the near term so that the needs of deploying units could be met. The Department of Defense (DOD) did not concur, stating that there was no need to reassess its approaches to equipment reset. However, in 2008, the Army issued its Depot Maintenance Enterprise Strategic Plan, which noted that filling materiel shortages within warfighting units is a key challenge facing the depot maintenance enterprise and called for changes in programs and policies to address those shortages. Further, recognizing that retrograde operations—the return of equipment from theater to the United States—are essential to facilitating depot-level reset and redistribution of equipment, the Army in 2010 developed the retrograde, reset, and redistribution (R3) initiative to synchronize retrograde, national depot-level reset, and redistribution efforts. In March 2011, the Army issued an R3 equipment priority list, and it revised and reissued an updated list at the end of fiscal year 2011 with full endorsement from all Army commands. The R3 initiative has only begun to be fully implemented this year, and thus it is too early to tell whether it will provide a consistent and transparent process for addressing the Army's current or future equipping needs. GAO found that the Army's monthly reports to Congress do not include expected future reset costs or distinguish between planned and unplanned reset of equipment. GAO has reported that agencies and decision makers need visibility into the accuracy of program execution in order to ensure basic accountability and to anticipate future costs. However, the Army does not include in its reports to Congress its future reset liability, which DOD most recently estimated in 2010 at $24 billion. Also, the Army's reports to Congress state the number of items repaired in a given month using broad categories, such as Tactical Wheeled Vehicles, which may obscure progress on equipment planned for reset.
For example, GAO’s analysis of Army data showed that 4,144 tactical wheeled vehicles were planned for reset in fiscal year 2010, while 3,563 vehicles were executed. According to the Army’s current reporting method, this would result in a reported completion rate of 86 percent, but GAO’s analysis showed that only approximately 40 percent of the equipment that was reset had been planned and programmed. This reporting method may also restrict visibility over the Army’s multiyear reset liability. For example, both the M1200 Knight and the M1151 HMMWV are categorized as Tactical Wheeled Vehicles, but anticipated reset costs for the M1200 are significantly higher. In 2010 more M1200s were repaired than planned, thus accounting for a larger share of the budgeted reset funds. With fewer funds remaining, some equipment planned and budgeted for repair was not reset, pushing that workload to future fiscal years. These differences are not captured in the Army’s monthly reports, and thus Congress may not have a complete picture of the Army’s short- and long-term progress in addressing reset. GAO recommends that the Army revise its monthly congressional reset reports to include its future reset liability and status information on equipment reset according to the initial reset plan by vehicle type. DOD did not concur. DOD stated that the Army would report its reset liability annually instead of monthly. Because DOD did not agree to report its reset status by vehicle type, GAO included a matter for congressional consideration to direct the Army to report this information.
Information technology should enable government to better serve the American people. However, despite spending hundreds of billions of dollars on IT since 2000, the federal government has achieved few of the productivity improvements that private industry has realized from IT. Too often, federal IT projects run over budget, fall behind schedule, or fail to deliver results. Proper oversight is critical to combating this problem. Both OMB and federal agencies have key roles and responsibilities for overseeing IT investment management, and OMB is responsible for working with agencies to ensure investments are appropriately planned and justified. However, as we have described in numerous reports, although a variety of best practices exist to guide their successful acquisition, federal IT projects too frequently incur cost overruns and schedule slippages while contributing little to mission-related outcomes. Agencies have reported that poor-performing projects have often used a "big bang" approach—that is, projects that are broadly scoped and aim to deliver capability several years after initiation. For example, in 2009 the Defense Science Board reported that the Department of Defense's (Defense) acquisition process for IT systems was too long and ineffective and did not accommodate the rapid evolution of IT. The board reported that the average time to deliver an initial program capability for a major IT system acquisition at Defense was over 7 years. Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT projects and how these funds are to be allocated. As reported to OMB, federal agencies plan to spend more than $82 billion on IT investments in fiscal year 2014, a total that covers not only the acquisition of such investments but also the funding to operate and maintain them. Of the reported amount, 27 federal agencies plan to spend about $75 billion: $17 billion on development and acquisition and $58 billion on operations and maintenance (O&M). Figure 1 shows how this $75 billion in planned spending for 2014 is divided between development and O&M. However, this $75 billion does not reflect the spending of the entire federal government. We have previously reported that OMB's figure understates the total amount spent on IT investments. Specifically, it does not include IT investments by 58 independent executive branch agencies, including the Central Intelligence Agency, or by the legislative or judicial branches. Further, agencies differed on what they considered an IT investment; for example, some have considered research and development systems to be IT investments, while others have not. As a result, not all IT investments are included in the federal government's estimate of annual IT spending. OMB provided guidance to agencies on how to report on their IT investments, but this guidance did not ensure complete reporting or facilitate the identification of duplicative investments. Consequently, we recommended, among other things, that OMB improve its guidance to agencies on identifying and categorizing IT investments. Further, over the past several years, we have reported that overlap and fragmentation among government programs or activities can be harbingers of unnecessary duplication. Thus, the reduction or elimination of duplication, overlap, or fragmentation could potentially save billions of tax dollars annually and help agencies provide more efficient and effective services.
OMB has implemented a series of initiatives to improve the oversight of underperforming investments, more effectively manage IT, and address duplicative investments. These efforts include the following:
IT Dashboard. Given the importance of transparency, oversight, and management of the government's IT investments, in June 2009 OMB established a public website, referred to as the IT Dashboard, that provides detailed information on 760 major IT investments at 27 federal agencies, including ratings of their performance against cost and schedule targets. The public dissemination of this information is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold agencies accountable for results and performance. Among other things, agencies are to submit Chief Information Officer (CIO) ratings, which, according to OMB's instructions, should reflect the level of risk facing an investment, on a scale from 1 (high risk) to 5 (low risk), relative to that investment's ability to accomplish its goals. Ultimately, CIO ratings are assigned colors for presentation on the Dashboard, according to the five-point rating scale, as illustrated in table 1. As of April 2014, according to the IT Dashboard, 201 of the federal government's 760 major IT investments—totaling $12.4 billion—were in need of management attention (rated "yellow" to indicate the need for attention or "red" to indicate significant concerns). (See fig. 2.)
TechStat reviews. In January 2010, the Federal CIO began leading TechStat sessions—face-to-face meetings to terminate or turn around IT investments that are failing or are not producing results. These meetings involve OMB and agency leadership and are intended to increase accountability and transparency and improve performance. Subsequently, OMB empowered agency CIOs to hold their own TechStat sessions within their respective agencies. According to the former Federal CIO, the efforts of OMB and federal agencies to improve management and oversight of IT investments have resulted in almost $4 billion in savings.
Federal Data Center Consolidation Initiative. Concerned about the growing number of federal data centers, in February 2010 the Federal CIO established the Federal Data Center Consolidation Initiative. This initiative's four high-level goals are to promote the use of "green IT" by reducing the overall energy and real estate footprint of government data centers; reduce the cost of data center hardware, software, and operations; increase the overall IT security posture of the government; and shift IT investments to more efficient computing platforms and technologies. OMB believes that this initiative has the potential to provide about $3 billion in savings by the end of 2015.
PortfolioStat. In order to eliminate duplication, move to shared services, and improve portfolio management processes, in March 2012 OMB launched the PortfolioStat initiative. Specifically, PortfolioStat requires agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with the agency's mission and business functions. PortfolioStat is designed to assist agencies in assessing the current maturity of their IT investment management process, making decisions on eliminating duplicative investments, and moving to shared solutions in order to maximize the return on IT investments across the portfolio.
OMB believes that the PortfolioStat effort has the potential to save the government $2.5 billion over the next 3 years by, for example, consolidating duplicative systems. Given the magnitude of the federal government's annual IT budget, which is expected to be more than $82 billion in fiscal year 2014, it is important that agencies leverage all available opportunities to ensure that their IT investments are acquired in the most effective manner possible. To do so, agencies can rely on IT acquisition best practices and initiatives such as OMB's IT Dashboard and OMB-mandated TechStat sessions. Additionally, agencies can save billions of dollars by continuing to consolidate federal data centers and by eliminating duplicative investments through OMB's PortfolioStat initiative. In 2011, we identified seven successful acquisitions and nine common factors critical to their success, and noted that (1) the factors support OMB's objective of improving the management of large-scale IT acquisitions across the federal government, and (2) wide dissemination of these factors could complement OMB's efforts. Specifically, we reported that federal agency officials identified seven acquisitions as successful in that they best achieved their respective cost, schedule, scope, and performance goals. Notably, all of these were smaller increments, phases, or releases of larger projects. The common factors critical to the success of three or more of the seven acquisitions are generally consistent with those developed by private industry and are identified in table 2. These critical factors support OMB's objective of improving the management of large-scale IT acquisitions across the federal government, and wide dissemination of these factors could complement OMB's efforts. The IT Dashboard serves an important role in allowing OMB and other oversight bodies to hold agencies accountable for results and performance. However, we have issued a series of reports highlighting deficiencies in the accuracy and reliability of the data reported on the Dashboard. For example, we reported in October 2012 that Defense had not rated any of its investments as either high or moderately high risk and that, in selected cases, these ratings did not appropriately reflect significant cost, schedule, and performance issues that we and others had reported. We recommended that Defense ensure that its CIO ratings reflect available investment performance assessments and its risk management guidance. Defense concurred and has revised its process to address these concerns. Further, while we reported in 2011 that the accuracy of Dashboard cost and schedule data had improved over time, more recently, in December 2013, we found that agencies had removed investments from the Dashboard by reclassifying them—representing a troubling trend toward decreased transparency and accountability. Specifically, the Department of Energy reclassified several of its supercomputer investments from IT to facilities, and the Department of Commerce decided to reclassify its satellite ground system investments. Additionally, as of December 2013, the public version of the Dashboard had not been updated for 15 of the previous 24 months because OMB does not revise it while the President's budget request is being created.
We also found that, while agencies experienced several issues with reporting the risk of their investments, such as technical problems and delayed updates to the Dashboard, the CIO ratings were mostly or completely consistent with investment risk at seven of the eight selected agencies. Additionally, the agencies had already addressed several of the discrepancies that we identified. The remaining agency, the Department of Veterans Affairs, did not update 7 of its 10 selected investments because it elected to build, rather than buy, the ability to automatically update the Dashboard; it has now resumed updating all investments. To their credit, agencies' continued attention to reporting the risk of their major IT investments supports the Dashboard's goal of providing transparency and oversight of federal IT investments. Nevertheless, the rating issues that we identified with performance reporting and annual baselining, some of which have since been corrected, highlight the need for agencies' continued attention to the timeliness and accuracy of submitted information, in order to allow the Dashboard to continue to fulfill its stated purpose. We recommended that agencies appropriately categorize IT investments and that OMB make Dashboard information available independent of the budget process. OMB neither agreed nor disagreed with these recommendations. Six agencies generally agreed with the report or had no comments; two others did not agree, believing their categorizations were appropriate. We continue to believe that our recommendations are valid. TechStat reviews were initiated by OMB to enable the federal government to turn around, halt, or terminate IT projects that are failing or are not producing results. In 2013, we reported that OMB and selected agencies had held multiple TechStats, but that additional OMB oversight was needed to ensure that these meetings were having the appropriate impact on underperforming projects and that resulting cost savings were valid. Specifically, we determined that as of April 2013, OMB reported conducting 79 TechStats, which focused on 55 investments at 23 federal agencies. Further, 4 selected agencies—the Departments of Agriculture, Commerce, Health and Human Services (HHS), and Homeland Security (DHS)—conducted 37 TechStats covering 28 investments. About 70 percent of the OMB-led and 76 percent of the agency-led TechStats on major investments involved investments considered medium to high risk at the time of the TechStat. However, the number of TechStats held on at-risk investments was relatively small compared to the number of medium- and high-risk major IT investments. Specifically, the OMB-led TechStats covered roughly 18.5 percent of the investments across the government that had a medium- or high-risk CIO rating. For the 4 selected agencies, the number of TechStats represented about 33 percent of the investments with a medium- or high-risk CIO rating. We concluded that until OMB and agencies develop plans to address these weaknesses, the investments would likely remain at risk. In addition, we reported that OMB and selected agencies had tracked and reported positive results from TechStats, with most resulting in improved governance. Agencies also reported projects with accelerated delivery, reduced scope, or termination. We also found that OMB reported in 2011 that federal agencies achieved almost $4 billion in life-cycle cost savings as a result of TechStat sessions. However, we were unable to validate OMB's reported results because OMB did not provide artifacts showing that it ensured the results were valid.
Among other things, we recommended that OMB require agencies to report on how they validated the outcomes. OMB generally agreed with this recommendation. In an effort to consolidate the growing number of federal data centers, in 2010 OMB launched a consolidation initiative intended to close 40 percent of government data centers by 2015 and, in doing so, save $3 billion. Since 2011, we have issued a series of reports on the efforts of agencies to consolidate their data centers. For example, in July 2011 and July 2012, we found that agencies had developed plans to consolidate data centers; however, these plans were incomplete and did not include best practices. In addition, although we reported that agencies had made progress on their data center closures, OMB had not determined initiative-wide cost savings, and oversight of the initiative was not being performed in all key areas. Among other things, we recommended that OMB track and report on key performance measures, such as cost savings to date, and improve the execution of important oversight responsibilities, and that agencies complete inventories and plans. OMB agreed with these two recommendations, and most agencies agreed with our recommendations to them. Additionally, as part of ongoing follow-up work, we have determined that while agencies had closed data centers, the number of federal data centers was significantly higher than previously estimated by OMB. Specifically, as of May 2013, agencies had reported closing 484 data centers by the end of April 2013 and were planning to close an additional 571 data centers—for a total of 1,055—by September 2014. However, as of July 2013, 22 of the 24 agencies participating in the initiative had collectively reported 6,836 data centers in their inventories—approximately 3,700 data centers more than OMB's previous estimate from December 2011. This dramatic increase in the count of data centers highlights the need for continued oversight of agencies' consolidation efforts. OMB launched the PortfolioStat initiative in March 2012, requiring 26 executive agencies to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with the agency's mission and business functions. In November 2013, we reported on agencies' efforts to complete key required PortfolioStat actions and make portfolio improvements. We noted that all 26 agencies that were required to implement the PortfolioStat initiative took actions to address OMB's requirements. However, there were shortcomings in their implementation of selected requirements, such as addressing all required elements of an action plan to consolidate commodity IT and migrating two commodity areas to a shared service by the end of 2012. In addition, several agencies had weaknesses in selected areas, such as the extent of the CIO's authority to review and approve the entire portfolio and the establishment of a complete baseline of information on commodity IT. Further, we observed that OMB's estimate of about 100 consolidation opportunities and a potential $2.5 billion in savings from the PortfolioStat initiative was understated because, among other things, it did not include estimates from Defense and the Department of Justice. Our analysis, which included these estimates, showed that, collectively, the 26 agencies reported about 200 opportunities and at least $5.8 billion in potential savings through fiscal year 2015, at least $3.3 billion more than the amount initially reported by OMB.
In March 2013, OMB issued a memorandum commencing the second iteration of its PortfolioStat initiative. This memorandum identified a number of improvements that should help strengthen IT portfolio management and address key issues we have identified. However, we concluded that selected OMB efforts could be strengthened to improve the PortfolioStat initiative and ensure agencies achieve identified cost savings, including addressing issues related to existing CIO authority at federal agencies and publicly reporting on agency-provided data. We recommended, among other things, that OMB require agencies to fully disclose limitations with respect to CIO authority. In addition, we made several recommendations to improve agencies' implementation of PortfolioStat requirements. OMB partially agreed with these recommendations, and the responses of the 20 agencies that commented on the report varied. In summary, OMB's and agencies' recent efforts have resulted in greater transparency and oversight of federal spending, but continued leadership and attention are necessary to build on the progress that has been made. The expanded use of the common factors critical to the successful management of large-scale IT acquisitions should result in more effective delivery of mission-critical systems. Additionally, federal agencies need to continue to improve the accuracy and availability of information on the Dashboard to provide greater transparency and to focus more attention on the billions of dollars invested in troubled projects. Further, agencies should conduct additional TechStat reviews to focus management attention on troubled projects and establish clear action items to turn the projects around or terminate them. The federal government can also build on the progress of agencies' data center closures and reduction of commodity IT. With the possibility of over $5.8 billion in savings from the data center consolidation and PortfolioStat initiatives, agencies should continue to identify consolidation opportunities in both data centers and commodity IT. In addition, better support for the estimates of cost savings associated with the opportunities identified would increase the likelihood that these savings will be achieved. Chairman Udall, Ranking Member Johanns, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at pownerd@gao.gov. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Rebecca Eyler, Kaelin Kuhn, Bradley Roach, Andrew Stavisky, and Kevin Walsh. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government reportedly plans to spend at least $82 billion on IT in fiscal year 2014. Given the scale of such planned outlays and the criticality of many of these systems to the health, economy, and security of the nation, it is important that OMB and federal agencies provide appropriate oversight and transparency into these programs and avoid duplicative investments whenever possible, to ensure the most efficient use of resources. GAO has previously reported and testified that federal IT projects too frequently fail, incurring cost overruns and schedule slippages while contributing little to mission-related outcomes. Numerous best practices and administration initiatives are available to help agencies improve the oversight and management of IT acquisitions. GAO is testifying today on the results and recommendations from selected reports that focused on how best practices and IT reform initiatives can help federal agencies better manage major acquisitions and legacy investments. Information technology (IT) acquisition best practices have been developed by both industry and the federal government to help guide the successful acquisition of investments. For example, GAO recently reported on nine critical factors underlying successful major IT acquisitions. Factors cited included (1) active engagement of program officials with stakeholders and (2) prioritized requirements. One key IT reform initiative undertaken by the Office of Management and Budget (OMB) to improve transparency is a public website, referred to as the IT Dashboard, which provides information on 760 major investments at 27 federal agencies, totaling almost $41 billion. The Dashboard also includes ratings of investments' risk on a scale from 1 (high risk) to 5 (low risk). As of April 2014, according to the Dashboard, 559 investments were low or moderately low risk (green), 159 were medium risk (yellow), and 42 were moderately high or high risk (red). GAO has issued a series of reports on Dashboard accuracy and, in 2011, found that while there were continued issues with the accuracy and reliability of cost and schedule data, the accuracy of these data had improved over time. Further, a recent GAO report found that selected agencies' ratings were mostly or completely consistent with investment risk. However, this report also noted that agencies had removed major investments from the IT Dashboard, representing a troubling trend toward decreased transparency and accountability. Additionally, GAO reported that as of December 2013, the public version of the Dashboard had not been updated for 15 of the previous 24 months because OMB did not revise it while the President's budget request was being created. Consequently, GAO made recommendations to improve the Dashboard's accuracy, ensure that it includes all major IT investments, and increase its availability. Agencies generally agreed with the report or had no comments. In an effort to consolidate the growing number of federal data centers, OMB launched a consolidation initiative intended to close 40 percent of government data centers by 2015 and, in doing so, save $3 billion. GAO reported that agencies planned to close 1,055 data centers by the end of fiscal year 2014, but also highlighted the need for continued oversight of these efforts. Among other things, GAO recommended that OMB improve the execution of important oversight responsibilities, with which OMB agreed.
To better manage the government's existing IT systems, OMB launched the PortfolioStat initiative, which, among other things, requires agencies to conduct annual reviews of their IT investments and make decisions on eliminating duplication. GAO reported that agencies continued to identify duplicative spending as part of PortfolioStat and that this initiative has the potential to save at least $5.8 billion through fiscal year 2015, but that weaknesses existed in agencies' implementation of the initiative's requirements. Among other things, GAO made several recommendations to improve agencies' implementation of PortfolioStat requirements. OMB partially agreed with these recommendations, and most of the other 20 agencies commenting on the report also agreed. GAO has previously made numerous recommendations to OMB and federal agencies on key aspects of IT acquisition management, as well as the oversight and management of these investments. In particular, GAO has made recommendations regarding the IT Dashboard, efforts to consolidate federal data centers, and PortfolioStat.